{ "runs":[ { "tool":{ "driver":{ "name":"torch.onnx.dynamo_export", "contents":[ "localizedData", "nonLocalizedData" ], "language":"en-US", "rules":[ { "id":"FXE0016", "fullDescription":{ "text":"This rule involves finding the list of OnnxFunction for the PyTorch operator overload in the ONNX registry. If the operator overload is not supported but its default overload is, a warning will be issued. If both the operator overload and its default overload are not supported, an error will be issued.", "markdown":"The operator overload name serves the purpose of verifying whether a PyTorch operator is registered in the ONNX registry.\nIf it's not found, the dispatcher takes a fallback approach and tries to locate the default overload of the PyTorch\noperator in the registry. If even the default overload is absent, it signifies that the operator is officially unsupported.\n\nThere are three types of level that can be triggered in this rule:\n\n1. NOTE: The op overload is supported.\n2. WARNING: The op overload is not supported, but it's default overload is supported.\n3. ERROR: The op overload is not supported, and it's default overload is also not supported.\n\nHere are some suggestions based on the WARNING situation:\n\n1. If there are NO errors or mismatches in the results, it is safe to disregard this warning.\n2. If there are errors or mismatches in the results, it is recommended to:\n (a) Enable op_level_debugging to determine if the OnnxFunction might be incorrect.\n (b) Report the unsupported overload to the PyTorch-ONNX team.\n (c) Create/register a custom symbolic function to replace the default one.\n\nHere are some suggestions based on the ERROR situation:\n\n1. Report the unsupported operator to the PyTorch-ONNX team.\n2. Create/register a custom symbolic function to replace the default one.\n" }, "name":"find-operator-overloads-in-onnx-registry", "shortDescription":{ "text":"Find the list of OnnxFunction of the PyTorch operator in onnx registry." 
} }, { "id":"FXE0007", "fullDescription":{ "text":"Transforms graph from FX IR to ONNX IR.", "markdown":"This diagnostic tracks the transformation process from an FX Graph (in FX IR) to an ONNX Graph (in ONNX IR).\n\n## Key Representations:\n\n- **FX Graph**: The graph in FX IR produced by dynamo or symbolic tracing.\n- **ONNX Graph**: The graph in ONNX IR and [operators](https://onnx.ai/onnx/operators/).\n\n## Additional Notes:\n\n- Prior to this transformation step, the FX graph undergoes preprocessing through multiple FX passes.\n To gain insight into these transformations, refer to diagnostic `FXE0010`.\n- To enable a detailed view of the graph transformation in progress within this diagnostic, switch to the DEBUG mode.\n\n - Set DiagnosticOptions.verbosity_level to logging.DEBUG.\n - Activate the environment variable TORCH_LOGS='onnx_diagnostics'.\n\n- For specific information related to node-level FX to ONNX transformations, explore the diagnostic `FXE0008`.\n" }, "name":"fx-graph-to-onnx", "shortDescription":{ "text":"Transforms graph from FX IR to ONNX IR." } }, { "id":"FXE0015", "fullDescription":{ "text":"Determine if type promotion is required for the FX node. Insert cast nodes if needed.", "markdown":"This diagnostic monitors the node-level type promotion insertion process. In PyTorch, there is an automatic process called implicit type promotion,\nwhere the input types of an operator are promoted to a common type. The determination of the common type is based on the type promotion rule specific to each operator.\nTo learn more about PyTorch's type promotion rules, refer to the [elementwise_dtypes doc](https://github.com/pytorch/pytorch/blob/f044613f78df713fb57f70c608483c9f10ad332e/torch/_prims_common/__init__.py#L1252-L1335)\nand [torch._refs ops](https://github.com/pytorch/pytorch/blob/a475ea4542dfe961c9d097e33ab5041f61c8c17f/torch/_refs/__init__.py#L484).\n\nHowever, implicit type promotion is not supported in ONNX. 
Therefore, to replicate the PyTorch behavior, we need to explicitly insert cast nodes.\nThis diagnostic tracks the process of node-level type promotion insertion.\n\nThe type promotion rules used by this process can be found in `torch/onnx/_internal/fx/passes/type_promotion.py`.\nTo update or add new type promotion rules, please refer to the [Note: Update type promotion rule] section.\n" }, "name":"fx-node-insert-type-promotion", "shortDescription":{ "text":"Determine if type promotion is required for the FX node. Insert cast nodes if needed." } }, { "id":"FXE0008", "fullDescription":{ "text":"Transforms an FX node to an ONNX node.", "markdown":"This diagnostic tracks the transformation process from an FX Node to ONNX [Operators](https://onnx.ai/onnx/operators/).\n\nConverting an FX Node to an ONNX Node involves handling six distinct node types:\n 1. `placeholder`: Represents a module input, maps to an ONNX graph input.\n 2. `call_module`: Symbolizes a call to a submodule, maps to an ONNX function.\n 3. `call_method`: Symbolizes a method call. Not yet implemented.\n 4. `call_function`: Symbolizes a function call. [Core ATen](https://pytorch.org/docs/stable/ir.html#core-aten-ir) is expected\n as the function call target. The mapping from ATen to ONNX is implemented by [ONNXScript torchlib](https://github.com/microsoft/onnxscript/tree/main/onnxscript/function_libs/torch_lib/ops).\n This [guide](https://pytorch.org/docs/stable/onnx.html#onnx-script-functions) shows how to write and register a custom symbolic function for a call_function FX node.\n 5. `get_attr`: Indicates an attribute access within the current module. Maps to an ONNX graph initializer.\n 6. `output`: Represents the module's output. Maps to an ONNX graph output.\n\nFor a granular understanding of how each node type is transformed, refer to the implementation details in `FxOnnxInterpreter`.\n" }, "name":"fx-node-to-onnx", "shortDescription":{ "text":"Transforms an FX node to an ONNX node." 
} }, { "id":"FXE0014", "fullDescription":{ "text":"Find the OnnxFunction that matches the input dtypes by comparing them with their opschemas. A warning will be issued if the matched OnnxFunction is not an exact match.", "markdown":"When an ATen/Custom operator is registered and needs to be dispatched to an OnnxFunction, the input/attribute\ndtypes of the ATen/Custom operator are compared with the input/attribute dtypes of the OnnxFunction opschemas\nto find a match. However, if a perfect/exact match is not found, the dispatcher will attempt to find\nthe nearest match with the highest number of input/attribute dtypes matching the OnnxFunction opschemas, while\nissuing a warning.\n\nThere are two types of level that can be triggered in this rule:\n\n1. NOTE: A perfect match is found, and no warning is issued.\n2. WARNING: The matched OnnxFunction is not a perfect/exact match.\n\nHere are some suggestions based on the WARNING situation:\n\n1. If there are NO errors or mismatches in the results, it is safe to disregard this warning,\n as the definition of OnnxFunction schema is usually more stringent.\n2. If there are errors or mismatches in the results, it is recommended to:\n (a) Enable op_level_debugging to determine if the OnnxFunction might be incorrect.\n (b) Report the issue to the PyTorch-ONNX team.\n (c) Create/register a custom symbolic function to replace the default one.\n" }, "name":"find-opschema-matched-symbolic-function", "shortDescription":{ "text":"Find the OnnxFunction that matches the input/attribute dtypes by comparing them with their opschemas." 
} }, { "id":"FXE0010", "fullDescription":{ "text":"FX graph transformation during ONNX export before converting from FX IR to ONNX IR.", "markdown":"This diagnostic tracks the FX passes executed during the ONNX export process prior\nto converting from FX IR (Intermediate Representation) to ONNX IR.\n\nUnder the scope of ONNX export, an FX pass refers to a specific transformation applied to the FX GraphModule.\nThe primary aim of these passes is to streamline the graph into a format that aligns more with the ONNX IR.\nMoreover, these passes work to substitute unsupported FX IR features with those recognized and endorsed by\nONNX IR. Common transformations include, but aren't limited to, decomposition, functionalization and\ntype promotion.\n\nFor those who are interested in a comprehensive log detailing the modifications made during these passes,\nthere are a couple of options:\n\n- Set DiagnosticOptions.verbosity_level to logging.DEBUG.\n- Activate the environment variable TORCH_LOGS='onnx_diagnostics'.\n\nHowever, it's noteworthy that by default, such detailed logging is turned off. The primary reason being\nits considerable impact on performance.\n\nFor an in-depth understanding of each specific pass, please refer to the directory: torch/onnx/_internal/fx/passes.\n" }, "name":"fx-pass", "shortDescription":{ "text":"FX graph transformation during ONNX export before converting from FX IR to ONNX IR." } } ], "version":"2.5.0a0+872d972e41.nv24.08" } }, "language":"en-US", "newlineSequences":[ "\r\n", "\n" ], "results":[ { "message":{ "markdown":"Skipped p_trunk_pos_embed: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_pos_embed)[placeholder]:Tensor(f32[1, 1024, 1024])\n## Return values\nTensor(f32[1, 1024, 1024])", "text":"Skipped p_trunk_pos_embed: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_latent: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_latent)[placeholder]:Tensor(f32[1, 1, 1024])\n## Return values\nTensor(f32[1, 1, 1024])", "text":"Skipped p_trunk_attn_pool_latent: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_patch_embed_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_weight)[placeholder]:Tensor(f32[1024, 3, 16, 16])\n## Return values\nTensor(f32[1024, 3, 16, 16])", "text":"Skipped p_trunk_patch_embed_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_patch_embed_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_trunk_patch_embed_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___norm1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm1_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___norm1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___norm1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm1_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___norm1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___attn_qkv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n## Return values\nTensor(f32[3072, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___attn_qkv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___attn_qkv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n## Return values\nTensor(f32[3072])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___attn_qkv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___attn_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___attn_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___attn_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___attn_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___norm2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm2_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___norm2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___norm2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___norm2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_norm_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_norm_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_trunk_norm_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_norm_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_norm_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_trunk_norm_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_q_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_q_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_trunk_attn_pool_q_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_q_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_q_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_trunk_attn_pool_q_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_kv_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_kv_weight)[placeholder]:Tensor(f32[2048, 1024])\n## Return values\nTensor(f32[2048, 1024])", "text":"Skipped p_trunk_attn_pool_kv_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_kv_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_kv_bias)[placeholder]:Tensor(f32[2048])\n## Return values\nTensor(f32[2048])", "text":"Skipped p_trunk_attn_pool_kv_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_proj_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped p_trunk_attn_pool_proj_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_proj_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_proj_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_trunk_attn_pool_proj_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_norm_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_norm_weight)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_trunk_attn_pool_norm_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_norm_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_norm_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_trunk_attn_pool_norm_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_mlp_fc1_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped p_trunk_attn_pool_mlp_fc1_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_mlp_fc1_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n## Return values\nTensor(f32[4096])", "text":"Skipped p_trunk_attn_pool_mlp_fc1_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_mlp_fc2_weight: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped p_trunk_attn_pool_mlp_fc2_weight: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped p_trunk_attn_pool_mlp_fc2_bias: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n## Return values\nTensor(f32[1024])", "text":"Skipped p_trunk_attn_pool_mlp_fc2_bias: not a call_function." }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped x: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(x)[placeholder]:Tensor(f32[4, 3, 512, 512])\n## Return values\nTensor(f32[4, 3, 512, 512])", "text":"Skipped x: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.convolution.default)[call_function]:Tensor(f32[4, 1024, 32, 32]): Cannot find type promotion rule for op: aten.convolution.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.convolution.default)[call_function]:Tensor(f32[4, 1024, 32, 32])\n## Return values\nTensor(f32[4, 1024, 32, 32])", "text":"Skipped for fx.Node(aten.convolution.default)[call_function]:Tensor(f32[4, 1024, 32, 32]): Cannot find type promotion rule for op: aten.convolution.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 
1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument transpose is not promoted. Already torch.float32.\nArgument p_trunk_pos_embed is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): 
Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ 
"markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 
64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_1. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument clone is not promoted. Already torch.float32.\nArgument clone_1 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_1. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_8 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_2. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_1 is not promoted. Already torch.float32.\nArgument clone_3 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_2. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_3. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_2 is not promoted. Already torch.float32.\nArgument clone_4 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_3. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_1. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_18 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_1. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_4. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_3 is not promoted. Already torch.float32.\nArgument clone_6 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_4. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_5. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_4 is not promoted. Already torch.float32.\nArgument clone_7 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_5. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_2. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_28 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_2. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_6. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_5 is not promoted. Already torch.float32.\nArgument clone_9 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_6. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_7. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_6 is not promoted. Already torch.float32.\nArgument clone_10 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_7. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_3. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_38 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_3. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_8. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_7 is not promoted. Already torch.float32.\nArgument clone_12 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_8. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_9. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_8 is not promoted. Already torch.float32.\nArgument clone_13 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_9. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_4. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_48 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_4. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_10. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_9 is not promoted. Already torch.float32.\nArgument clone_15 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_10. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_11. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_10 is not promoted. Already torch.float32.\nArgument clone_16 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_11. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_5. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_58 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_5. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_12. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_11 is not promoted. Already torch.float32.\nArgument clone_18 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_12. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_13. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_12 is not promoted. Already torch.float32.\nArgument clone_19 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_13. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_6. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_68 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_6. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_14. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_13 is not promoted. Already torch.float32.\nArgument clone_21 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_14. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_15. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_14 is not promoted. Already torch.float32.\nArgument clone_22 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_15. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_7. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_78 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_7. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_16. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_15 is not promoted. Already torch.float32.\nArgument clone_24 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_16. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_17. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_16 is not promoted. Already torch.float32.\nArgument clone_25 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_17. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_8. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_88 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_8. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_18. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_17 is not promoted. Already torch.float32.\nArgument clone_27 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_18. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_19. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_18 is not promoted. Already torch.float32.\nArgument clone_28 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_19. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_9. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_98 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_9. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_20. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_19 is not promoted. Already torch.float32.\nArgument clone_30 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_20. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_21. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_20 is not promoted. Already torch.float32.\nArgument clone_31 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_21. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_10. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_108 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_10. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_22. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_21 is not promoted. Already torch.float32.\nArgument clone_33 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_22. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_23. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_22 is not promoted. Already torch.float32.\nArgument clone_34 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_23. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_11. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_118 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_11. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_24. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_23 is not promoted. Already torch.float32.\nArgument clone_36 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_24. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_25. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_24 is not promoted. Already torch.float32.\nArgument clone_37 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_25. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_12. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_128 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_12. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_26. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_25 is not promoted. Already torch.float32.\nArgument clone_39 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_26. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_27. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_26 is not promoted. Already torch.float32.\nArgument clone_40 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_27. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_13. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_138 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_13. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_28. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_27 is not promoted. Already torch.float32.\nArgument clone_42 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_28. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_29. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_28 is not promoted. Already torch.float32.\nArgument clone_43 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_29. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_14. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_148 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_14. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_30. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_29 is not promoted. Already torch.float32.\nArgument clone_45 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_30. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_31. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_30 is not promoted. Already torch.float32.\nArgument clone_46 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_31. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_15. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_158 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_15. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_32. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_31 is not promoted. Already torch.float32.\nArgument clone_48 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_32. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_33. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_32 is not promoted. Already torch.float32.\nArgument clone_49 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_33. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_16. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_168 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_16. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_34. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_33 is not promoted. Already torch.float32.\nArgument clone_51 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_34. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_35. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_34 is not promoted. Already torch.float32.\nArgument clone_52 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_35. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_17. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_178 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_17. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_36. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_35 is not promoted. Already torch.float32.\nArgument clone_54 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_36. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_37. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_36 is not promoted. Already torch.float32.\nArgument clone_55 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_37. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_18. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_188 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_18. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_38. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_37 is not promoted. Already torch.float32.\nArgument clone_57 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_38. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_39. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_38 is not promoted. Already torch.float32.\nArgument clone_58 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_39. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_19. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_198 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_19. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_40. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_39 is not promoted. Already torch.float32.\nArgument clone_60 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_40. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_41. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_40 is not promoted. Already torch.float32.\nArgument clone_61 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_41. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_20. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_208 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_20. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_42. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_41 is not promoted. Already torch.float32.\nArgument clone_63 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_42. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_43. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_42 is not promoted. Already torch.float32.\nArgument clone_64 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_43. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_21. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_218 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_21. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_44. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_43 is not promoted. Already torch.float32.\nArgument clone_66 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_44. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_45. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_44 is not promoted. Already torch.float32.\nArgument clone_67 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_45. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_22. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_228 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_22. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_46. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_45 is not promoted. Already torch.float32.\nArgument clone_69 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_46. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n## Return values\nTensor(f32[1024, 3072])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n## Return values\nTensor(f32[4096, 3072])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n## Return values\nTensor(f32[4, 1024, 3072])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n## Return values\nTensor(f32[4, 1024, 3, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n## Return values\nTensor(f32[3, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 
1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], 
"properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n## Return values\nTensor(f32[4, 1024, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ 
"text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_47. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_46 is not promoted. Already torch.float32.\nArgument clone_70 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_47. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] 
}, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_23. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_238 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Type promotion not needed for gelu_23. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096])\n## Return values\nTensor(f32[4, 1024, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096])\n## Return values\nTensor(f32[4096, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional 
Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_48. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument add_47 is not promoted. Already torch.float32.\nArgument clone_72 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Type promotion not needed for add_48. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 
1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n## Return values\nTensor(f32[4, 1024, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1024, 1024]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.expand.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.expand.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.expand.default)[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for fx.Node(aten.expand.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.expand.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024])\n## Return values\nTensor(f32[4, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, 
"ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.mm.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.mm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.mm.default)[call_function]:Tensor(f32[4, 1024])\n## Return values\nTensor(f32[4, 1024])", "text":"Skipped for fx.Node(aten.mm.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.mm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ 
"text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_49. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_242 is not promoted. Already torch.float32.\nArgument p_trunk_attn_pool_q_bias is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Type promotion not needed for add_49. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 16, 64])\n## Return values\nTensor(f32[4, 1, 16, 64])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ 
"artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 16, 1, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 16, 1, 64])\n## Return values\nTensor(f32[4, 16, 1, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 16, 1, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", 
"level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 2048]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 2048])\n## Return values\nTensor(f32[1024, 2048])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 2048]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 2048]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 2048])\n## Return values\nTensor(f32[4096, 2048])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 2048]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { 
"threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 2048]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 2048])\n## Return values\nTensor(f32[4, 1024, 2048])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 2048]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 2, 16, 64]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 2, 16, 64])\n## Return values\nTensor(f32[4, 1024, 2, 16, 64])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 2, 16, 64]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[2, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[2, 4, 16, 1024, 64])\n## Return values\nTensor(f32[2, 4, 16, 1024, 64])", "text":"Skipped for fx.Node(aten.permute.default)[call_function]:Tensor(f32[2, 4, 16, 1024, 64]): Cannot find type promotion rule for op: aten.permute.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=2](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int\n\n## Additional Message:\n\n## Function Signature\n### Function 
Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=2](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n## Return values\nList[length=2](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)", "text":"Skipped for fx.Node(aten.unbind.int)[call_function]:List[length=2](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n): Cannot find type promotion rule for op: aten.unbind.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n## Return values\nTensor(f32[4, 16, 1024, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64]): node.target is not OpOverload. Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: 
fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n## Return values\nTuple[length=4](\nTensor(f32[4, 16, 1, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)", "text":"Skipped for fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n): Cannot find type promotion rule for op: aten._scaled_dot_product_efficient_attention.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1, 64]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1, 64])\n## Return values\nTensor(f32[4, 16, 1, 64])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 16, 1, 64]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1, 16, 64])\n## Return values\nTensor(f32[4, 1, 16, 64])", "text":"Skipped for fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1, 16, 64]): Cannot find type promotion rule for op: aten.transpose.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024])\n## Return values\nTensor(f32[4, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 
1024])\n## Return values\nTensor(f32[1024, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 1024])\n## Return values\nTensor(f32[4, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- 
self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1, 1024]),\nTensor(f32[4, 1, 1]),\nTensor(f32[4, 1, 
1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1, 1024]),\nTensor(f32[4, 1, 1]),\nTensor(f32[4, 1, 1]),\n)\n## Return values\nTuple[length=3](\nTensor(f32[4, 1, 1024]),\nTensor(f32[4, 1, 1]),\nTensor(f32[4, 1, 1]),\n)", "text":"Skipped for fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1, 1024]),\nTensor(f32[4, 1, 1]),\nTensor(f32[4, 1, 1]),\n): Cannot find type promotion rule for op: aten.native_layer_norm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1, 1024]): node.target is not OpOverload. Got type: \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for fx.Node()[call_function]:Tensor(f32[4, 1, 1024]): node.target is not OpOverload. 
Got type: " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024])\n## Return values\nTensor(f32[4, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n## Return values\nTensor(f32[1024, 4096])", "text":"Skipped for 
fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 4096]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 4096])\n## Return values\nTensor(f32[4, 4096])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 4096]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 
1, 4096])\n## Return values\nTensor(f32[4, 1, 4096])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for gelu_24. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1, 4096])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'gelu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument view_251 is not promoted. Already torch.float32.\nArgument tanh is not promoted. Not mentioned by type promotion rule.\n## Return values\nTensor(f32[4, 1, 4096])", "text":"Type promotion not needed for gelu_24. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 4096]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 4096])\n## Return values\nTensor(f32[4, 1, 4096])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 4096]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 4096]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 4096])\n## Return values\nTensor(f32[4, 4096])", "text":"Skipped for 
fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 4096]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024])\n## Return values\nTensor(f32[4096, 1024])", "text":"Skipped for fx.Node(aten.t.default)[call_function]:Tensor(f32[4096, 1024]): Cannot find type promotion rule for op: aten.t.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.addmm.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 
1024])\n## Return values\nTensor(f32[4, 1024])", "text":"Skipped for fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.addmm.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.view.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.view.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature 
_TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Type promotion not needed for add_50. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1, 1024])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'add', [0, 1], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument clone_73 is not promoted. Already torch.float32.\nArgument clone_75 is not promoted. Already torch.float32.\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Type promotion not needed for add_50. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.slice.Tensor)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.slice.Tensor\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.slice.Tensor)[call_function]:Tensor(f32[4, 1, 1024])\n## Return values\nTensor(f32[4, 1, 1024])", "text":"Skipped for fx.Node(aten.slice.Tensor)[call_function]:Tensor(f32[4, 1, 1024]): Cannot find type promotion rule for op: aten.slice.Tensor" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.select.int)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.select.int\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.select.int)[call_function]:Tensor(f32[4, 1024])\n## Return values\nTensor(f32[4, 1024])", "text":"Skipped for 
fx.Node(aten.select.int)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.select.int" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.clone.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024])\n## Return values\nTensor(f32[4, 1024])", "text":"Skipped for fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024]): Cannot find type promotion rule for op: aten.clone.default" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Skipped output: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n## Return values\nTuple[length=1](\nTensor(f32[4, 1024]),\n)", "text":"Skipped output: not a call_function." 
}, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"_TypePromotionInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":1625 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0015", "stacks":[] }, { "message":{ "markdown":"Running InsertTypePromotion pass. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature Transform.run\n- self: \nFor detailed logging of graph modifications by this pass, either set `DiagnosticOptions.verbosity_level` to `logging.DEBUG` or use the environment variable `TORCH_LOGS='onnx_diagnostics'`.\n## Return values\ntorch.fx.GraphModule(GraphModule)", "text":"Running InsertTypePromotion pass. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"Transform.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/_pass.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":243 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0010", "stacks":[] }, { "message":{ "markdown":"Running Modularize pass. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature Transform.run\n- self: \nFor detailed logging of graph modifications by this pass, either set `DiagnosticOptions.verbosity_level` to `logging.DEBUG` or use the environment variable `TORCH_LOGS='onnx_diagnostics'`.\n## Return values\ntorch.fx.GraphModule()", "text":"Running Modularize pass. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"Transform.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/_pass.py" }, "region":{ "snippet":{ "text":"@diagnostics.diagnose_call(" }, "startLine":243 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0010", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_pos_embed[name=p_trunk_pos_embed]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_pos_embed)[placeholder]:Tensor(f32[1, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_pos_embed[name=p_trunk_pos_embed]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_latent[name=p_trunk_attn_pool_latent]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_latent)[placeholder]:Tensor(f32[1, 1, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_latent[name=p_trunk_attn_pool_latent]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_patch_embed_proj_weight[name=p_trunk_patch_embed_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_weight)[placeholder]:Tensor(f32[1024, 3, 16, 16])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_patch_embed_proj_weight[name=p_trunk_patch_embed_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_patch_embed_proj_bias[name=p_trunk_patch_embed_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_patch_embed_proj_bias[name=p_trunk_patch_embed_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=7](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=8](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=9](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=10](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=11](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=12](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=13](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=14](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=15](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_weight[name=p_getattr_l__self___trunk_blocks___1___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=16](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_weight[name=p_getattr_l__self___trunk_blocks___1___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_bias[name=p_getattr_l__self___trunk_blocks___1___norm1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=17](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_bias[name=p_getattr_l__self___trunk_blocks___1___norm1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=18](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=19](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___1___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=20](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___1___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___1___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=21](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___1___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_weight[name=p_getattr_l__self___trunk_blocks___1___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=22](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_weight[name=p_getattr_l__self___trunk_blocks___1___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_bias[name=p_getattr_l__self___trunk_blocks___1___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=23](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_bias[name=p_getattr_l__self___trunk_blocks___1___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=24](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=25](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=26](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=27](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_weight[name=p_getattr_l__self___trunk_blocks___2___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=28](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_weight[name=p_getattr_l__self___trunk_blocks___2___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_bias[name=p_getattr_l__self___trunk_blocks___2___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=29](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_bias[name=p_getattr_l__self___trunk_blocks___2___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=30](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=31](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___2___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=32](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___2___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___2___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=33](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___2___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_weight[name=p_getattr_l__self___trunk_blocks___2___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=34](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_weight[name=p_getattr_l__self___trunk_blocks___2___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_bias[name=p_getattr_l__self___trunk_blocks___2___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=35](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_bias[name=p_getattr_l__self___trunk_blocks___2___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=36](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=37](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=38](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=39](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_weight[name=p_getattr_l__self___trunk_blocks___3___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=40](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_weight[name=p_getattr_l__self___trunk_blocks___3___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_bias[name=p_getattr_l__self___trunk_blocks___3___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=41](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_bias[name=p_getattr_l__self___trunk_blocks___3___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=42](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=43](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___3___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=44](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___3___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___3___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=45](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___3___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_weight[name=p_getattr_l__self___trunk_blocks___3___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=46](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___3___norm2_weight[name=p_getattr_l__self___trunk_blocks___3___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_bias[name=p_getattr_l__self___trunk_blocks___3___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=47](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_bias[name=p_getattr_l__self___trunk_blocks___3___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=48](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=49](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=50](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=51](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_weight[name=p_getattr_l__self___trunk_blocks___4___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=52](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_weight[name=p_getattr_l__self___trunk_blocks___4___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_bias[name=p_getattr_l__self___trunk_blocks___4___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=53](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___4___norm1_bias[name=p_getattr_l__self___trunk_blocks___4___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=54](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=55](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___4___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=56](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___4___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___4___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=57](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___4___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_weight[name=p_getattr_l__self___trunk_blocks___4___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=58](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_weight[name=p_getattr_l__self___trunk_blocks___4___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_bias[name=p_getattr_l__self___trunk_blocks___4___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=59](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_bias[name=p_getattr_l__self___trunk_blocks___4___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=60](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=61](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=62](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=63](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_weight[name=p_getattr_l__self___trunk_blocks___5___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=64](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_weight[name=p_getattr_l__self___trunk_blocks___5___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_bias[name=p_getattr_l__self___trunk_blocks___5___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=65](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_bias[name=p_getattr_l__self___trunk_blocks___5___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=66](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=67](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___5___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=68](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## 
Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___5___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___5___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=69](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___5___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_weight[name=p_getattr_l__self___trunk_blocks___5___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=70](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_weight[name=p_getattr_l__self___trunk_blocks___5___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_bias[name=p_getattr_l__self___trunk_blocks___5___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=71](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_bias[name=p_getattr_l__self___trunk_blocks___5___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=72](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=73](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=74](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=75](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_weight[name=p_getattr_l__self___trunk_blocks___6___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=76](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## 
Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_weight[name=p_getattr_l__self___trunk_blocks___6___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_bias[name=p_getattr_l__self___trunk_blocks___6___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=77](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_bias[name=p_getattr_l__self___trunk_blocks___6___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=78](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=79](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___6___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=80](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___6___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___6___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=81](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___6___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_weight[name=p_getattr_l__self___trunk_blocks___6___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=82](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___6___norm2_weight[name=p_getattr_l__self___trunk_blocks___6___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_bias[name=p_getattr_l__self___trunk_blocks___6___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=83](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_bias[name=p_getattr_l__self___trunk_blocks___6___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=84](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=85](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=86](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=87](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_weight[name=p_getattr_l__self___trunk_blocks___7___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=88](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_weight[name=p_getattr_l__self___trunk_blocks___7___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_bias[name=p_getattr_l__self___trunk_blocks___7___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=89](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___7___norm1_bias[name=p_getattr_l__self___trunk_blocks___7___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=90](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=91](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___7___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=92](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___7___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___7___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=93](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___7___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_weight[name=p_getattr_l__self___trunk_blocks___7___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=94](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_weight[name=p_getattr_l__self___trunk_blocks___7___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_bias[name=p_getattr_l__self___trunk_blocks___7___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=95](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_bias[name=p_getattr_l__self___trunk_blocks___7___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=96](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=97](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=98](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=99](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_weight[name=p_getattr_l__self___trunk_blocks___8___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=100](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_weight[name=p_getattr_l__self___trunk_blocks___8___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_bias[name=p_getattr_l__self___trunk_blocks___8___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=101](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_bias[name=p_getattr_l__self___trunk_blocks___8___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=102](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=103](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___8___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=104](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## 
Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___8___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___8___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=105](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___8___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_weight[name=p_getattr_l__self___trunk_blocks___8___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=106](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_weight[name=p_getattr_l__self___trunk_blocks___8___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_bias[name=p_getattr_l__self___trunk_blocks___8___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=107](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_bias[name=p_getattr_l__self___trunk_blocks___8___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=108](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=109](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=110](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=111](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_weight[name=p_getattr_l__self___trunk_blocks___9___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=112](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_weight[name=p_getattr_l__self___trunk_blocks___9___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_bias[name=p_getattr_l__self___trunk_blocks___9___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=113](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_bias[name=p_getattr_l__self___trunk_blocks___9___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=114](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=115](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___9___attn_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=116](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___9___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___9___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=117](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___9___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_weight[name=p_getattr_l__self___trunk_blocks___9___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=118](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return 
values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_weight[name=p_getattr_l__self___trunk_blocks___9___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_bias[name=p_getattr_l__self___trunk_blocks___9___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=119](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_bias[name=p_getattr_l__self___trunk_blocks___9___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=120](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=121](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=122](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=123](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_weight[name=p_getattr_l__self___trunk_blocks___10___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=124](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___10___norm1_weight[name=p_getattr_l__self___trunk_blocks___10___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_bias[name=p_getattr_l__self___trunk_blocks___10___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=125](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_bias[name=p_getattr_l__self___trunk_blocks___10___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=126](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=127](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___10___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=128](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___10___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___10___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=129](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___10___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_weight[name=p_getattr_l__self___trunk_blocks___10___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=130](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___10___norm2_weight[name=p_getattr_l__self___trunk_blocks___10___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_bias[name=p_getattr_l__self___trunk_blocks___10___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=131](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_bias[name=p_getattr_l__self___trunk_blocks___10___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=132](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=133](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=134](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=135](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_weight[name=p_getattr_l__self___trunk_blocks___11___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=136](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___11___norm1_weight[name=p_getattr_l__self___trunk_blocks___11___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_bias[name=p_getattr_l__self___trunk_blocks___11___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=137](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_bias[name=p_getattr_l__self___trunk_blocks___11___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=138](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=139](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___11___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=140](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___11___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___11___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=141](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___11___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_weight[name=p_getattr_l__self___trunk_blocks___11___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=142](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___11___norm2_weight[name=p_getattr_l__self___trunk_blocks___11___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_bias[name=p_getattr_l__self___trunk_blocks___11___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=143](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_bias[name=p_getattr_l__self___trunk_blocks___11___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=144](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=145](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=146](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=147](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_weight[name=p_getattr_l__self___trunk_blocks___12___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=148](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___12___norm1_weight[name=p_getattr_l__self___trunk_blocks___12___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_bias[name=p_getattr_l__self___trunk_blocks___12___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=149](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_bias[name=p_getattr_l__self___trunk_blocks___12___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=150](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=151](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___12___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=152](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___12___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___12___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=153](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___12___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_weight[name=p_getattr_l__self___trunk_blocks___12___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=154](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___12___norm2_weight[name=p_getattr_l__self___trunk_blocks___12___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_bias[name=p_getattr_l__self___trunk_blocks___12___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=155](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_bias[name=p_getattr_l__self___trunk_blocks___12___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=156](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=157](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=158](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=159](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_weight[name=p_getattr_l__self___trunk_blocks___13___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=160](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___13___norm1_weight[name=p_getattr_l__self___trunk_blocks___13___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_bias[name=p_getattr_l__self___trunk_blocks___13___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=161](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_bias[name=p_getattr_l__self___trunk_blocks___13___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=162](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=163](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___13___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=164](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___13___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___13___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=165](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___13___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_weight[name=p_getattr_l__self___trunk_blocks___13___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=166](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___13___norm2_weight[name=p_getattr_l__self___trunk_blocks___13___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_bias[name=p_getattr_l__self___trunk_blocks___13___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=167](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_bias[name=p_getattr_l__self___trunk_blocks___13___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=168](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=169](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=170](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=171](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_weight[name=p_getattr_l__self___trunk_blocks___14___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=172](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___14___norm1_weight[name=p_getattr_l__self___trunk_blocks___14___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_bias[name=p_getattr_l__self___trunk_blocks___14___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=173](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_bias[name=p_getattr_l__self___trunk_blocks___14___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=174](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=175](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___14___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=176](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___14___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___14___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=177](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___14___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_weight[name=p_getattr_l__self___trunk_blocks___14___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=178](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___14___norm2_weight[name=p_getattr_l__self___trunk_blocks___14___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_bias[name=p_getattr_l__self___trunk_blocks___14___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=179](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_bias[name=p_getattr_l__self___trunk_blocks___14___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=180](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=181](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=182](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=183](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_weight[name=p_getattr_l__self___trunk_blocks___15___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=184](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___15___norm1_weight[name=p_getattr_l__self___trunk_blocks___15___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_bias[name=p_getattr_l__self___trunk_blocks___15___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=185](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_bias[name=p_getattr_l__self___trunk_blocks___15___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=186](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=187](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___15___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=188](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___15___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___15___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=189](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___15___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_weight[name=p_getattr_l__self___trunk_blocks___15___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=190](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___15___norm2_weight[name=p_getattr_l__self___trunk_blocks___15___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_bias[name=p_getattr_l__self___trunk_blocks___15___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=191](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_bias[name=p_getattr_l__self___trunk_blocks___15___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=192](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=193](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=194](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=195](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_weight[name=p_getattr_l__self___trunk_blocks___16___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=196](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___16___norm1_weight[name=p_getattr_l__self___trunk_blocks___16___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_bias[name=p_getattr_l__self___trunk_blocks___16___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=197](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_bias[name=p_getattr_l__self___trunk_blocks___16___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=198](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=199](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___16___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=200](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___16___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___16___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=201](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___16___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_weight[name=p_getattr_l__self___trunk_blocks___16___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=202](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___16___norm2_weight[name=p_getattr_l__self___trunk_blocks___16___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_bias[name=p_getattr_l__self___trunk_blocks___16___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=203](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_bias[name=p_getattr_l__self___trunk_blocks___16___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=204](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=205](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=206](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=207](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_weight[name=p_getattr_l__self___trunk_blocks___17___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=208](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___17___norm1_weight[name=p_getattr_l__self___trunk_blocks___17___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_bias[name=p_getattr_l__self___trunk_blocks___17___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=209](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_bias[name=p_getattr_l__self___trunk_blocks___17___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=210](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=211](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___17___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=212](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___17___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___17___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=213](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___17___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_weight[name=p_getattr_l__self___trunk_blocks___17___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=214](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___17___norm2_weight[name=p_getattr_l__self___trunk_blocks___17___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_bias[name=p_getattr_l__self___trunk_blocks___17___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=215](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_bias[name=p_getattr_l__self___trunk_blocks___17___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=216](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=217](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=218](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=219](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_weight[name=p_getattr_l__self___trunk_blocks___18___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=220](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___18___norm1_weight[name=p_getattr_l__self___trunk_blocks___18___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_bias[name=p_getattr_l__self___trunk_blocks___18___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=221](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_bias[name=p_getattr_l__self___trunk_blocks___18___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=222](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=223](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___18___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=224](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___18___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___18___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=225](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___18___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_weight[name=p_getattr_l__self___trunk_blocks___18___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=226](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___18___norm2_weight[name=p_getattr_l__self___trunk_blocks___18___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_bias[name=p_getattr_l__self___trunk_blocks___18___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=227](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_bias[name=p_getattr_l__self___trunk_blocks___18___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=228](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=229](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=230](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=231](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_weight[name=p_getattr_l__self___trunk_blocks___19___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=232](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___19___norm1_weight[name=p_getattr_l__self___trunk_blocks___19___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_bias[name=p_getattr_l__self___trunk_blocks___19___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=233](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_bias[name=p_getattr_l__self___trunk_blocks___19___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=234](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=235](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___19___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=236](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___19___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___19___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=237](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___19___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_weight[name=p_getattr_l__self___trunk_blocks___19___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=238](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___19___norm2_weight[name=p_getattr_l__self___trunk_blocks___19___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_bias[name=p_getattr_l__self___trunk_blocks___19___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=239](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_bias[name=p_getattr_l__self___trunk_blocks___19___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=240](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=241](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=242](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=243](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_weight[name=p_getattr_l__self___trunk_blocks___20___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=244](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___20___norm1_weight[name=p_getattr_l__self___trunk_blocks___20___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_bias[name=p_getattr_l__self___trunk_blocks___20___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=245](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_bias[name=p_getattr_l__self___trunk_blocks___20___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=246](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=247](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___20___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=248](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___20___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___20___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=249](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___20___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_weight[name=p_getattr_l__self___trunk_blocks___20___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=250](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___20___norm2_weight[name=p_getattr_l__self___trunk_blocks___20___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_bias[name=p_getattr_l__self___trunk_blocks___20___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=251](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_bias[name=p_getattr_l__self___trunk_blocks___20___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=252](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=253](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=254](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=255](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_weight[name=p_getattr_l__self___trunk_blocks___21___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=256](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___21___norm1_weight[name=p_getattr_l__self___trunk_blocks___21___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_bias[name=p_getattr_l__self___trunk_blocks___21___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=257](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_bias[name=p_getattr_l__self___trunk_blocks___21___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=258](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=259](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___21___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=260](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___21___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___21___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=261](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___21___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_weight[name=p_getattr_l__self___trunk_blocks___21___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=262](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___21___norm2_weight[name=p_getattr_l__self___trunk_blocks___21___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_bias[name=p_getattr_l__self___trunk_blocks___21___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=263](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_bias[name=p_getattr_l__self___trunk_blocks___21___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=264](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=265](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=266](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=267](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_weight[name=p_getattr_l__self___trunk_blocks___22___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=268](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___22___norm1_weight[name=p_getattr_l__self___trunk_blocks___22___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_bias[name=p_getattr_l__self___trunk_blocks___22___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=269](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_bias[name=p_getattr_l__self___trunk_blocks___22___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=270](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=271](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___22___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=272](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___22___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___22___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=273](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___22___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_weight[name=p_getattr_l__self___trunk_blocks___22___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=274](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___22___norm2_weight[name=p_getattr_l__self___trunk_blocks___22___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_bias[name=p_getattr_l__self___trunk_blocks___22___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=275](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_bias[name=p_getattr_l__self___trunk_blocks___22___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=276](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=277](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=278](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=279](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_weight[name=p_getattr_l__self___trunk_blocks___23___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=280](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___23___norm1_weight[name=p_getattr_l__self___trunk_blocks___23___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_bias[name=p_getattr_l__self___trunk_blocks___23___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=281](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_bias[name=p_getattr_l__self___trunk_blocks___23___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=282](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=283](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___23___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=284](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___23___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___23___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=285](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___23___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_weight[name=p_getattr_l__self___trunk_blocks___23___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=286](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___23___norm2_weight[name=p_getattr_l__self___trunk_blocks___23___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_bias[name=p_getattr_l__self___trunk_blocks___23___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=287](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", 
"text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_bias[name=p_getattr_l__self___trunk_blocks___23___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=288](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=289](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=290](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=291](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_norm_weight[name=p_trunk_norm_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_norm_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=292](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_norm_weight[name=p_trunk_norm_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_norm_bias[name=p_trunk_norm_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_norm_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=293](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_norm_bias[name=p_trunk_norm_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_q_weight[name=p_trunk_attn_pool_q_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_q_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=294](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_q_weight[name=p_trunk_attn_pool_q_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_q_bias[name=p_trunk_attn_pool_q_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_q_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=295](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_q_bias[name=p_trunk_attn_pool_q_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_kv_weight[name=p_trunk_attn_pool_kv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_kv_weight)[placeholder]:Tensor(f32[2048, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=296](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_kv_weight[name=p_trunk_attn_pool_kv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_kv_bias[name=p_trunk_attn_pool_kv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_kv_bias)[placeholder]:Tensor(f32[2048])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=297](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_kv_bias[name=p_trunk_attn_pool_kv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_proj_weight[name=p_trunk_attn_pool_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=298](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_proj_weight[name=p_trunk_attn_pool_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_proj_bias[name=p_trunk_attn_pool_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=299](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_proj_bias[name=p_trunk_attn_pool_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_norm_weight[name=p_trunk_attn_pool_norm_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_norm_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=300](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_norm_weight[name=p_trunk_attn_pool_norm_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_norm_bias[name=p_trunk_attn_pool_norm_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_norm_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=301](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_norm_bias[name=p_trunk_attn_pool_norm_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc1_weight[name=p_trunk_attn_pool_mlp_fc1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=302](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc1_weight[name=p_trunk_attn_pool_mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc1_bias[name=p_trunk_attn_pool_mlp_fc1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=303](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc1_bias[name=p_trunk_attn_pool_mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc2_weight[name=p_trunk_attn_pool_mlp_fc2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=304](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc2_weight[name=p_trunk_attn_pool_mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc2_bias[name=p_trunk_attn_pool_mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=305](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc2_bias[name=p_trunk_attn_pool_mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:x[name=x]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(x)[placeholder]:Tensor(f32[4, 3, 512, 512])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=306](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:x[name=x]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:x[name=x]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(x)[placeholder]:Tensor(f32[4, 3, 512, 512])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## Return values\n", "text":"FX Node: placeholder:x[name=x]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_patch_embed_proj_weight[name=p_trunk_patch_embed_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_weight)[placeholder]:Tensor(f32[1024, 3, 16, 16])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_patch_embed_proj_weight[name=p_trunk_patch_embed_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_patch_embed_proj_bias[name=p_trunk_patch_embed_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_patch_embed_proj_bias[name=p_trunk_patch_embed_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_pos_embed[name=p_trunk_pos_embed]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_pos_embed)[placeholder]:Tensor(f32[1, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_pos_embed[name=p_trunk_pos_embed]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=7](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=8](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=9](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=10](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=11](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=12](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=13](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=14](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=15](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_weight[name=p_getattr_l__self___trunk_blocks___1___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=16](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___1___norm1_weight[name=p_getattr_l__self___trunk_blocks___1___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_bias[name=p_getattr_l__self___trunk_blocks___1___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=17](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_bias[name=p_getattr_l__self___trunk_blocks___1___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=18](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=19](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___1___attn_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=20](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___1___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___1___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=21](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___1___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_weight[name=p_getattr_l__self___trunk_blocks___1___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=22](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_weight[name=p_getattr_l__self___trunk_blocks___1___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_bias[name=p_getattr_l__self___trunk_blocks___1___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=23](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_bias[name=p_getattr_l__self___trunk_blocks___1___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=24](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=25](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=26](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=27](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_weight[name=p_getattr_l__self___trunk_blocks___2___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=28](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_weight[name=p_getattr_l__self___trunk_blocks___2___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_bias[name=p_getattr_l__self___trunk_blocks___2___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=29](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_bias[name=p_getattr_l__self___trunk_blocks___2___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=30](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=31](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___2___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=32](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___2___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___2___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=33](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___2___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_weight[name=p_getattr_l__self___trunk_blocks___2___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=34](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_weight[name=p_getattr_l__self___trunk_blocks___2___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_bias[name=p_getattr_l__self___trunk_blocks___2___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=35](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_bias[name=p_getattr_l__self___trunk_blocks___2___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=36](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=37](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=38](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=39](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_weight[name=p_getattr_l__self___trunk_blocks___3___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=40](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_weight[name=p_getattr_l__self___trunk_blocks___3___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_bias[name=p_getattr_l__self___trunk_blocks___3___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=41](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___3___norm1_bias[name=p_getattr_l__self___trunk_blocks___3___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=42](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=43](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___3___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=44](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___3___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___3___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=45](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___3___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_weight[name=p_getattr_l__self___trunk_blocks___3___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=46](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___3___norm2_weight[name=p_getattr_l__self___trunk_blocks___3___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_bias[name=p_getattr_l__self___trunk_blocks___3___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=47](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_bias[name=p_getattr_l__self___trunk_blocks___3___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=48](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=49](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=50](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=51](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_weight[name=p_getattr_l__self___trunk_blocks___4___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=52](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_weight[name=p_getattr_l__self___trunk_blocks___4___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_bias[name=p_getattr_l__self___trunk_blocks___4___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=53](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_bias[name=p_getattr_l__self___trunk_blocks___4___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=54](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=55](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___4___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=56](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___4___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___4___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=57](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___4___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_weight[name=p_getattr_l__self___trunk_blocks___4___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=58](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_weight[name=p_getattr_l__self___trunk_blocks___4___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_bias[name=p_getattr_l__self___trunk_blocks___4___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=59](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_bias[name=p_getattr_l__self___trunk_blocks___4___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=60](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=61](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=62](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=63](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_weight[name=p_getattr_l__self___trunk_blocks___5___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=64](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_weight[name=p_getattr_l__self___trunk_blocks___5___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_bias[name=p_getattr_l__self___trunk_blocks___5___norm1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=65](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_bias[name=p_getattr_l__self___trunk_blocks___5___norm1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=66](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=67](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___5___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=68](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___5___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___5___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=69](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___5___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_weight[name=p_getattr_l__self___trunk_blocks___5___norm2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=70](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_weight[name=p_getattr_l__self___trunk_blocks___5___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_bias[name=p_getattr_l__self___trunk_blocks___5___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=71](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___5___norm2_bias[name=p_getattr_l__self___trunk_blocks___5___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=72](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=73](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=74](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=75](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_weight[name=p_getattr_l__self___trunk_blocks___6___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=76](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___6___norm1_weight[name=p_getattr_l__self___trunk_blocks___6___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_bias[name=p_getattr_l__self___trunk_blocks___6___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=77](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_bias[name=p_getattr_l__self___trunk_blocks___6___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=78](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=79](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___6___attn_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=80](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___6___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___6___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=81](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___6___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_weight[name=p_getattr_l__self___trunk_blocks___6___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=82](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_weight[name=p_getattr_l__self___trunk_blocks___6___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_bias[name=p_getattr_l__self___trunk_blocks___6___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=83](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_bias[name=p_getattr_l__self___trunk_blocks___6___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=84](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=85](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=86](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=87](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_weight[name=p_getattr_l__self___trunk_blocks___7___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=88](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_weight[name=p_getattr_l__self___trunk_blocks___7___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_bias[name=p_getattr_l__self___trunk_blocks___7___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=89](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_bias[name=p_getattr_l__self___trunk_blocks___7___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=90](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=91](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___7___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=92](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___7___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___7___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=93](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___7___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_weight[name=p_getattr_l__self___trunk_blocks___7___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=94](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_weight[name=p_getattr_l__self___trunk_blocks___7___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_bias[name=p_getattr_l__self___trunk_blocks___7___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=95](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_bias[name=p_getattr_l__self___trunk_blocks___7___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=96](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=97](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=98](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=99](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_weight[name=p_getattr_l__self___trunk_blocks___8___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=100](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_weight[name=p_getattr_l__self___trunk_blocks___8___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_bias[name=p_getattr_l__self___trunk_blocks___8___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=101](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___8___norm1_bias[name=p_getattr_l__self___trunk_blocks___8___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=102](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=103](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___8___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=104](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___8___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___8___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=105](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___8___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_weight[name=p_getattr_l__self___trunk_blocks___8___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=106](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___8___norm2_weight[name=p_getattr_l__self___trunk_blocks___8___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_bias[name=p_getattr_l__self___trunk_blocks___8___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=107](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_bias[name=p_getattr_l__self___trunk_blocks___8___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=108](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=109](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=110](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=111](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_weight[name=p_getattr_l__self___trunk_blocks___9___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=112](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_weight[name=p_getattr_l__self___trunk_blocks___9___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_bias[name=p_getattr_l__self___trunk_blocks___9___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=113](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_bias[name=p_getattr_l__self___trunk_blocks___9___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=114](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=115](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___9___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=116](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___9___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___9___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=117](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___9___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_weight[name=p_getattr_l__self___trunk_blocks___9___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=118](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_weight[name=p_getattr_l__self___trunk_blocks___9___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_bias[name=p_getattr_l__self___trunk_blocks___9___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=119](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_bias[name=p_getattr_l__self___trunk_blocks___9___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=120](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=121](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=122](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=123](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_weight[name=p_getattr_l__self___trunk_blocks___10___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=124](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_weight[name=p_getattr_l__self___trunk_blocks___10___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_bias[name=p_getattr_l__self___trunk_blocks___10___norm1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=125](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_bias[name=p_getattr_l__self___trunk_blocks___10___norm1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=126](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=127](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___10___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=128](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___10___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___10___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=129](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___10___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_weight[name=p_getattr_l__self___trunk_blocks___10___norm2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=130](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_weight[name=p_getattr_l__self___trunk_blocks___10___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_bias[name=p_getattr_l__self___trunk_blocks___10___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=131](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___10___norm2_bias[name=p_getattr_l__self___trunk_blocks___10___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=132](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=133](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=134](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=135](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_weight[name=p_getattr_l__self___trunk_blocks___11___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=136](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___11___norm1_weight[name=p_getattr_l__self___trunk_blocks___11___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_bias[name=p_getattr_l__self___trunk_blocks___11___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=137](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_bias[name=p_getattr_l__self___trunk_blocks___11___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=138](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=139](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___11___attn_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=140](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___11___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___11___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=141](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___11___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_weight[name=p_getattr_l__self___trunk_blocks___11___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=142](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_weight[name=p_getattr_l__self___trunk_blocks___11___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_bias[name=p_getattr_l__self___trunk_blocks___11___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=143](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_bias[name=p_getattr_l__self___trunk_blocks___11___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=144](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=145](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=146](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=147](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_weight[name=p_getattr_l__self___trunk_blocks___12___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=148](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_weight[name=p_getattr_l__self___trunk_blocks___12___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_bias[name=p_getattr_l__self___trunk_blocks___12___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=149](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_bias[name=p_getattr_l__self___trunk_blocks___12___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=150](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=151](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___12___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=152](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___12___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___12___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=153](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___12___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_weight[name=p_getattr_l__self___trunk_blocks___12___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=154](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_weight[name=p_getattr_l__self___trunk_blocks___12___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_bias[name=p_getattr_l__self___trunk_blocks___12___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=155](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_bias[name=p_getattr_l__self___trunk_blocks___12___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=156](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=157](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=158](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=159](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_weight[name=p_getattr_l__self___trunk_blocks___13___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=160](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_weight[name=p_getattr_l__self___trunk_blocks___13___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_bias[name=p_getattr_l__self___trunk_blocks___13___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=161](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___13___norm1_bias[name=p_getattr_l__self___trunk_blocks___13___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=162](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=163](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___13___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=164](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___13___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___13___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=165](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___13___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_weight[name=p_getattr_l__self___trunk_blocks___13___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=166](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___13___norm2_weight[name=p_getattr_l__self___trunk_blocks___13___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_bias[name=p_getattr_l__self___trunk_blocks___13___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=167](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_bias[name=p_getattr_l__self___trunk_blocks___13___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=168](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=169](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=170](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=171](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_weight[name=p_getattr_l__self___trunk_blocks___14___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=172](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_weight[name=p_getattr_l__self___trunk_blocks___14___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_bias[name=p_getattr_l__self___trunk_blocks___14___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=173](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_bias[name=p_getattr_l__self___trunk_blocks___14___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=174](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=175](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___14___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=176](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___14___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___14___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=177](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___14___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_weight[name=p_getattr_l__self___trunk_blocks___14___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=178](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_weight[name=p_getattr_l__self___trunk_blocks___14___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_bias[name=p_getattr_l__self___trunk_blocks___14___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=179](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_bias[name=p_getattr_l__self___trunk_blocks___14___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=180](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=181](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=182](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=183](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_weight[name=p_getattr_l__self___trunk_blocks___15___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=184](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_weight[name=p_getattr_l__self___trunk_blocks___15___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_bias[name=p_getattr_l__self___trunk_blocks___15___norm1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=185](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_bias[name=p_getattr_l__self___trunk_blocks___15___norm1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=186](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=187](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___15___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=188](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___15___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___15___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=189](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___15___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_weight[name=p_getattr_l__self___trunk_blocks___15___norm2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=190](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_weight[name=p_getattr_l__self___trunk_blocks___15___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_bias[name=p_getattr_l__self___trunk_blocks___15___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=191](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___15___norm2_bias[name=p_getattr_l__self___trunk_blocks___15___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=192](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=193](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=194](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=195](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_weight[name=p_getattr_l__self___trunk_blocks___16___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=196](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___16___norm1_weight[name=p_getattr_l__self___trunk_blocks___16___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_bias[name=p_getattr_l__self___trunk_blocks___16___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=197](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_bias[name=p_getattr_l__self___trunk_blocks___16___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=198](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=199](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___16___attn_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=200](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___16___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___16___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=201](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___16___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_weight[name=p_getattr_l__self___trunk_blocks___16___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=202](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_weight[name=p_getattr_l__self___trunk_blocks___16___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_bias[name=p_getattr_l__self___trunk_blocks___16___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=203](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_bias[name=p_getattr_l__self___trunk_blocks___16___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=204](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=205](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=206](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=207](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_weight[name=p_getattr_l__self___trunk_blocks___17___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=208](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_weight[name=p_getattr_l__self___trunk_blocks___17___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_bias[name=p_getattr_l__self___trunk_blocks___17___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=209](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_bias[name=p_getattr_l__self___trunk_blocks___17___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=210](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=211](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___17___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=212](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___17___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___17___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=213](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___17___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_weight[name=p_getattr_l__self___trunk_blocks___17___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=214](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_weight[name=p_getattr_l__self___trunk_blocks___17___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_bias[name=p_getattr_l__self___trunk_blocks___17___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=215](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_bias[name=p_getattr_l__self___trunk_blocks___17___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=216](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=217](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=218](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=219](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_weight[name=p_getattr_l__self___trunk_blocks___18___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=220](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_weight[name=p_getattr_l__self___trunk_blocks___18___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_bias[name=p_getattr_l__self___trunk_blocks___18___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=221](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___18___norm1_bias[name=p_getattr_l__self___trunk_blocks___18___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=222](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=223](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___18___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=224](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___18___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___18___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=225](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___18___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_weight[name=p_getattr_l__self___trunk_blocks___18___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=226](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___18___norm2_weight[name=p_getattr_l__self___trunk_blocks___18___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_bias[name=p_getattr_l__self___trunk_blocks___18___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=227](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_bias[name=p_getattr_l__self___trunk_blocks___18___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=228](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=229](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=230](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=231](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_weight[name=p_getattr_l__self___trunk_blocks___19___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=232](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_weight[name=p_getattr_l__self___trunk_blocks___19___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_bias[name=p_getattr_l__self___trunk_blocks___19___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=233](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_bias[name=p_getattr_l__self___trunk_blocks___19___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=234](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=235](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___19___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=236](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___19___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___19___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=237](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___19___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_weight[name=p_getattr_l__self___trunk_blocks___19___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=238](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_weight[name=p_getattr_l__self___trunk_blocks___19___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_bias[name=p_getattr_l__self___trunk_blocks___19___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=239](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_bias[name=p_getattr_l__self___trunk_blocks___19___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=240](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=241](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=242](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=243](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_weight[name=p_getattr_l__self___trunk_blocks___20___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=244](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_weight[name=p_getattr_l__self___trunk_blocks___20___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_bias[name=p_getattr_l__self___trunk_blocks___20___norm1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=245](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_bias[name=p_getattr_l__self___trunk_blocks___20___norm1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=246](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=247](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___20___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=248](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___20___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___20___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=249](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___20___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_weight[name=p_getattr_l__self___trunk_blocks___20___norm2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=250](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_weight[name=p_getattr_l__self___trunk_blocks___20___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_bias[name=p_getattr_l__self___trunk_blocks___20___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=251](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___20___norm2_bias[name=p_getattr_l__self___trunk_blocks___20___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=252](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=253](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=254](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=255](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_weight[name=p_getattr_l__self___trunk_blocks___21___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=256](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___21___norm1_weight[name=p_getattr_l__self___trunk_blocks___21___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_bias[name=p_getattr_l__self___trunk_blocks___21___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=257](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_bias[name=p_getattr_l__self___trunk_blocks___21___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=258](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=259](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___21___attn_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=260](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___21___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___21___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=261](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___21___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_weight[name=p_getattr_l__self___trunk_blocks___21___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=262](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_weight[name=p_getattr_l__self___trunk_blocks___21___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_bias[name=p_getattr_l__self___trunk_blocks___21___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=263](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_bias[name=p_getattr_l__self___trunk_blocks___21___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=264](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=265](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=266](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=267](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_weight[name=p_getattr_l__self___trunk_blocks___22___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=268](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_weight[name=p_getattr_l__self___trunk_blocks___22___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_bias[name=p_getattr_l__self___trunk_blocks___22___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=269](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_bias[name=p_getattr_l__self___trunk_blocks___22___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=270](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=271](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___22___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=272](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___22___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___22___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=273](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___22___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_weight[name=p_getattr_l__self___trunk_blocks___22___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=274](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_weight[name=p_getattr_l__self___trunk_blocks___22___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_bias[name=p_getattr_l__self___trunk_blocks___22___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=275](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_bias[name=p_getattr_l__self___trunk_blocks___22___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=276](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=277](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=278](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=279](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_weight[name=p_getattr_l__self___trunk_blocks___23___norm1_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=280](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_weight[name=p_getattr_l__self___trunk_blocks___23___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_bias[name=p_getattr_l__self___trunk_blocks___23___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=281](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___23___norm1_bias[name=p_getattr_l__self___trunk_blocks___23___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=282](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=283](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___23___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=284](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___23___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___23___attn_proj_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=285](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___23___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_weight[name=p_getattr_l__self___trunk_blocks___23___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=286](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___23___norm2_weight[name=p_getattr_l__self___trunk_blocks___23___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_bias[name=p_getattr_l__self___trunk_blocks___23___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=287](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_bias[name=p_getattr_l__self___trunk_blocks___23___norm2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=288](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 
1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=289](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: 
`TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=290](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=291](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_norm_weight[name=p_trunk_norm_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_norm_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=292](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_trunk_norm_weight[name=p_trunk_norm_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_norm_bias[name=p_trunk_norm_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_norm_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=293](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_norm_bias[name=p_trunk_norm_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_latent[name=p_trunk_attn_pool_latent]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_latent)[placeholder]:Tensor(f32[1, 1, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=294](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_latent[name=p_trunk_attn_pool_latent]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_q_weight[name=p_trunk_attn_pool_q_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_q_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=295](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_q_weight[name=p_trunk_attn_pool_q_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_q_bias[name=p_trunk_attn_pool_q_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_q_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=296](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_q_bias[name=p_trunk_attn_pool_q_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_kv_weight[name=p_trunk_attn_pool_kv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_kv_weight)[placeholder]:Tensor(f32[2048, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=297](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_kv_weight[name=p_trunk_attn_pool_kv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_kv_bias[name=p_trunk_attn_pool_kv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_kv_bias)[placeholder]:Tensor(f32[2048])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=298](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_kv_bias[name=p_trunk_attn_pool_kv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_proj_weight[name=p_trunk_attn_pool_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=299](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_proj_weight[name=p_trunk_attn_pool_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_proj_bias[name=p_trunk_attn_pool_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=300](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_proj_bias[name=p_trunk_attn_pool_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_norm_weight[name=p_trunk_attn_pool_norm_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_norm_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=301](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_norm_weight[name=p_trunk_attn_pool_norm_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_norm_bias[name=p_trunk_attn_pool_norm_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_norm_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=302](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_norm_bias[name=p_trunk_attn_pool_norm_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc1_weight[name=p_trunk_attn_pool_mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=303](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc1_weight[name=p_trunk_attn_pool_mlp_fc1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc1_bias[name=p_trunk_attn_pool_mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=304](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc1_bias[name=p_trunk_attn_pool_mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc2_weight[name=p_trunk_attn_pool_mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=305](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc2_weight[name=p_trunk_attn_pool_mlp_fc2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc2_bias[name=p_trunk_attn_pool_mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_attn_pool_mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=306](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_attn_pool_mlp_fc2_bias[name=p_trunk_attn_pool_mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:x[name=x]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(x)[placeholder]:Tensor(f32[4, 3, 512, 512])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_patch_embed_PatchEmbed)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## Return values\n", "text":"FX Node: placeholder:x[name=x]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_patch_embed_proj_weight[name=p_trunk_patch_embed_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_weight)[placeholder]:Tensor(f32[1024, 3, 16, 16])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_patch_embed_PatchEmbed)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_patch_embed_proj_weight[name=p_trunk_patch_embed_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_patch_embed_proj_bias[name=p_trunk_patch_embed_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_patch_embed_PatchEmbed)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_patch_embed_proj_bias[name=p_trunk_patch_embed_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:x[name=x]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(x)[placeholder]:Tensor(f32[4, 3, 512, 512])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_conv_Conv2d)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## Return values\n", "text":"FX Node: placeholder:x[name=x]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_patch_embed_proj_weight[name=p_trunk_patch_embed_proj_weight]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_weight)[placeholder]:Tensor(f32[1024, 3, 16, 16])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_conv_Conv2d)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_patch_embed_proj_weight[name=p_trunk_patch_embed_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_trunk_patch_embed_proj_bias[name=p_trunk_patch_embed_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_trunk_patch_embed_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_conv_Conv2d)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_trunk_patch_embed_proj_bias[name=p_trunk_patch_embed_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.convolution.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.convolution.default)[call_function]:Tensor(f32[4, 1024, 32, 32])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::convolution.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.convolution.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.convolution.default. \nONNX Node: aten_convolution[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.convolution.default)[call_function]:Tensor(f32[4, 1024, 32, 32])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::convolution.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=9](\n`TorchScriptTensor(f32[4, 3, 512, 512])`,\n`TorchScriptTensor(f32[1024, 3, 16, 16])`,\n`TorchScriptTensor(f32[1024])`,\nList[length=2](\n16,\n16,\n),\nList[length=2](\n0,\n0,\n),\nList[length=2](\n1,\n1,\n),\nFalse,\nList[length=2](\n0,\n0,\n),\n1,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_convolution)`\nmatch score: -1\n## Return values\n`TracedOnnxFunction(aten_convolution)`", "text":"FX Node: aten.convolution.default. \nONNX Node: aten_convolution[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.convolution.default[name=convolution]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.convolution.default)[call_function]:Tensor(f32[4, 1024, 32, 32])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_conv_Conv2d)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 827, in forward_features\n x = self.patch_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/patch_embed.py\", line 131, in forward\n x = self.proj(x)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.convolution.default[name=convolution]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_conv_Conv2d)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\nconvolution: `TorchScriptTensor(f32[4, 1024, 32, 32])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_conv_Conv2d. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_conv_Conv2d)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: torch_nn_modules_conv_Conv2d. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:trunk_patch_embed_proj_1[name=trunk_patch_embed_proj_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(trunk_patch_embed_proj_1)[call_module]:Tensor(f32[4, 1024, 32, 32])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_patch_embed_PatchEmbed)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 827, in forward_features\n x = self.patch_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/patch_embed.py\", line 131, in forward\n x = self.proj(x)\n\n```\n## Return values\n", "text":"FX Node: call_module:trunk_patch_embed_proj_1[name=trunk_patch_embed_proj_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ 
"artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 32, 32])`,\nList[length=3](\n4,\n1024,\n1024,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_patch_embed_PatchEmbed)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\ntrunk_patch_embed_proj_1: `TorchScriptTensor(f32[4, 1024, 32, 32])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 827, in forward_features\n x = self.patch_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/patch_embed.py\", line 133, in forward\n x = x.flatten(2).transpose(1, 2) # NCHW -> NLC\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.view.default[name=view]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.transpose.int' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::transpose.int, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.transpose.int' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.transpose.int. \nONNX Node: aten_transpose[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::transpose.int, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=3](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n1,\n2,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_transpose)`\nmatch score: 1\n## Return values\n`TracedOnnxFunction(aten_transpose)`", "text":"FX Node: aten.transpose.int. \nONNX Node: aten_transpose[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.transpose.int[name=transpose]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_patch_embed_PatchEmbed)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\ntrunk_patch_embed_proj_1: `TorchScriptTensor(f32[4, 1024, 32, 32])`,\nview: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 827, in forward_features\n x = self.patch_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/patch_embed.py\", line 133, in forward\n x = x.flatten(2).transpose(1, 2) # NCHW -> NLC\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.transpose.int[name=transpose]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(timm_layers_patch_embed_PatchEmbed)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\ntrunk_patch_embed_proj_1: `TorchScriptTensor(f32[4, 1024, 32, 32])`,\nview: `TorchScriptTensor(f32[4, 1024, 1024])`,\ntranspose: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: timm_layers_patch_embed_PatchEmbed. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(timm_layers_patch_embed_PatchEmbed)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: timm_layers_patch_embed_PatchEmbed. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:trunk_patch_embed_1[name=trunk_patch_embed_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(trunk_patch_embed_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=307](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 827, in forward_features\n x = self.patch_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n 
File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/patch_embed.py\", line 131, in forward\n x = self.proj(x)\n\n```\n## Return values\n", "text":"FX Node: call_module:trunk_patch_embed_1[name=trunk_patch_embed_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.add.Tensor' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=2](\nregistration.ONNXFunction(aten::add.Tensor, is_custom=False, is_complex=False),\nregistration.ONNXFunction(aten::add.Tensor, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.add.Tensor' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, 
"ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.add.Tensor. \nONNX Node: aten_add[opset=pkg.onnxscript.torch_lib;is_custom=False]. \nONNX Node: aten_logical_or[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\n- default_and_custom_functions: List[length=2](\nregistration.ONNXFunction(aten::add.Tensor, is_custom=False, is_complex=False),\nregistration.ONNXFunction(aten::add.Tensor, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n`TorchScriptTensor(f32[1, 1024, 1024])`,\n)\n- onnx_kwargs: Dict[length=1](\nalpha: 1,\n)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_logical_or)`\n### Failed: attribute mismatch!\nActual {'alpha'} vs expected set()\nThe function is not a nearest match candidate.\n## Checking perfect match...\n`TracedOnnxFunction(aten_add)`\n### Failed: attribute 'alpha' type mismatch!\nActual vs\nExpected AttrType.FLOAT\nmatch score: 1\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. Please check the ONNX output carefully. \n\n## Return values\n`TracedOnnxFunction(aten_add)`", "text":"FX Node: aten.add.Tensor. \nONNX Node: aten_add[opset=pkg.onnxscript.torch_lib;is_custom=False]. \nONNX Node: aten_logical_or[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.add.Tensor[name=add]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=308](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 828, in forward_features\n x = self._pos_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 718, in _pos_embed\n x = x + pos_embed\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.add.Tensor[name=add]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:add[name=add]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(add)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_dropout_Dropout)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 828, in forward_features\n x = self._pos_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 718, in _pos_embed\n x = x + pos_embed\n\n```\n## Return values\n", "text":"FX Node: placeholder:add[name=add]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.clone.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::clone.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.clone.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.clone.default. \nONNX Node: aten_clone[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::clone.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=1](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_clone)`\nmatch score: 1\n## Return values\n`TracedOnnxFunction(aten_clone)`", "text":"FX Node: aten.clone.default. \nONNX Node: aten_clone[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.clone.default[name=clone]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_dropout_Dropout)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nadd: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 828, in forward_features\n x = self._pos_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 720, in _pos_embed\n return self.pos_drop(x)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.clone.default[name=clone]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_dropout_Dropout)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nadd: `TorchScriptTensor(f32[4, 1024, 1024])`,\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_dropout_Dropout. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_dropout_Dropout)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: torch_nn_modules_dropout_Dropout. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:trunk_pos_drop_1[name=trunk_pos_drop_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(trunk_pos_drop_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=309](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: 
`TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 828, in forward_features\n x = self._pos_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 720, in _pos_embed\n return self.pos_drop(x)\n\n```\n## Return values\n", "text":"FX Node: call_module:trunk_pos_drop_1[name=trunk_pos_drop_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:clone[name=clone]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(clone)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 828, in forward_features\n x = self._pos_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 720, in _pos_embed\n return self.pos_drop(x)\n\n```\n## Return values\n", "text":"FX Node: placeholder:clone[name=clone]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=7](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=8](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=9](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=10](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=11](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=12](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_weight[name=p_getattr_l__self___trunk_blocks___1___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=13](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___1___norm1_weight[name=p_getattr_l__self___trunk_blocks___1___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_bias[name=p_getattr_l__self___trunk_blocks___1___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=14](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm1_bias[name=p_getattr_l__self___trunk_blocks___1___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=15](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=16](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___1___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___1___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=17](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___1___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___1___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=18](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___1___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_weight[name=p_getattr_l__self___trunk_blocks___1___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=19](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_weight[name=p_getattr_l__self___trunk_blocks___1___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_bias[name=p_getattr_l__self___trunk_blocks___1___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=20](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___norm2_bias[name=p_getattr_l__self___trunk_blocks___1___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=21](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=22](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=23](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=24](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___1___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_weight[name=p_getattr_l__self___trunk_blocks___2___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=25](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___2___norm1_weight[name=p_getattr_l__self___trunk_blocks___2___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_bias[name=p_getattr_l__self___trunk_blocks___2___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=26](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm1_bias[name=p_getattr_l__self___trunk_blocks___2___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=27](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=28](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___2___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___2___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=29](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___2___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___2___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=30](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___2___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_weight[name=p_getattr_l__self___trunk_blocks___2___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=31](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_weight[name=p_getattr_l__self___trunk_blocks___2___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_bias[name=p_getattr_l__self___trunk_blocks___2___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=32](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___norm2_bias[name=p_getattr_l__self___trunk_blocks___2___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=33](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=34](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=35](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=36](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___2___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_weight[name=p_getattr_l__self___trunk_blocks___3___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=37](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___3___norm1_weight[name=p_getattr_l__self___trunk_blocks___3___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_bias[name=p_getattr_l__self___trunk_blocks___3___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=38](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm1_bias[name=p_getattr_l__self___trunk_blocks___3___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=39](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=40](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___3___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___3___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=41](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___3___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___3___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=42](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___3___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_weight[name=p_getattr_l__self___trunk_blocks___3___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=43](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_weight[name=p_getattr_l__self___trunk_blocks___3___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_bias[name=p_getattr_l__self___trunk_blocks___3___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=44](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___norm2_bias[name=p_getattr_l__self___trunk_blocks___3___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=45](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=46](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=47](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=48](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___3___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_weight[name=p_getattr_l__self___trunk_blocks___4___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=49](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___4___norm1_weight[name=p_getattr_l__self___trunk_blocks___4___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_bias[name=p_getattr_l__self___trunk_blocks___4___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=50](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm1_bias[name=p_getattr_l__self___trunk_blocks___4___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=51](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=52](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___4___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___4___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=53](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___4___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___4___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=54](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___4___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_weight[name=p_getattr_l__self___trunk_blocks___4___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=55](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_weight[name=p_getattr_l__self___trunk_blocks___4___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_bias[name=p_getattr_l__self___trunk_blocks___4___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=56](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___norm2_bias[name=p_getattr_l__self___trunk_blocks___4___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=57](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=58](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=59](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=60](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___4___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_weight[name=p_getattr_l__self___trunk_blocks___5___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=61](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___5___norm1_weight[name=p_getattr_l__self___trunk_blocks___5___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_bias[name=p_getattr_l__self___trunk_blocks___5___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=62](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm1_bias[name=p_getattr_l__self___trunk_blocks___5___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=63](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=64](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___5___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___5___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=65](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___5___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___5___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=66](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___5___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_weight[name=p_getattr_l__self___trunk_blocks___5___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=67](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_weight[name=p_getattr_l__self___trunk_blocks___5___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_bias[name=p_getattr_l__self___trunk_blocks___5___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=68](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___norm2_bias[name=p_getattr_l__self___trunk_blocks___5___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=69](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=70](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=71](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=72](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___5___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_weight[name=p_getattr_l__self___trunk_blocks___6___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=73](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___6___norm1_weight[name=p_getattr_l__self___trunk_blocks___6___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_bias[name=p_getattr_l__self___trunk_blocks___6___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=74](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm1_bias[name=p_getattr_l__self___trunk_blocks___6___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=75](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=76](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___6___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___6___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=77](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___6___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___6___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=78](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___6___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_weight[name=p_getattr_l__self___trunk_blocks___6___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=79](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_weight[name=p_getattr_l__self___trunk_blocks___6___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_bias[name=p_getattr_l__self___trunk_blocks___6___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=80](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___norm2_bias[name=p_getattr_l__self___trunk_blocks___6___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=81](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=82](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=83](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=84](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___6___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_weight[name=p_getattr_l__self___trunk_blocks___7___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=85](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___7___norm1_weight[name=p_getattr_l__self___trunk_blocks___7___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_bias[name=p_getattr_l__self___trunk_blocks___7___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=86](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm1_bias[name=p_getattr_l__self___trunk_blocks___7___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=87](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=88](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___7___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___7___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=89](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___7___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___7___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=90](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___7___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_weight[name=p_getattr_l__self___trunk_blocks___7___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=91](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_weight[name=p_getattr_l__self___trunk_blocks___7___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_bias[name=p_getattr_l__self___trunk_blocks___7___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=92](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___norm2_bias[name=p_getattr_l__self___trunk_blocks___7___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=93](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=94](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=95](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=96](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___7___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_weight[name=p_getattr_l__self___trunk_blocks___8___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=97](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___8___norm1_weight[name=p_getattr_l__self___trunk_blocks___8___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_bias[name=p_getattr_l__self___trunk_blocks___8___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=98](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm1_bias[name=p_getattr_l__self___trunk_blocks___8___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=99](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=100](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___8___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___8___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=101](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___8___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___8___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=102](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___8___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_weight[name=p_getattr_l__self___trunk_blocks___8___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=103](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_weight[name=p_getattr_l__self___trunk_blocks___8___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_bias[name=p_getattr_l__self___trunk_blocks___8___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=104](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___norm2_bias[name=p_getattr_l__self___trunk_blocks___8___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=105](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=106](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=107](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=108](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___8___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_weight[name=p_getattr_l__self___trunk_blocks___9___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=109](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___9___norm1_weight[name=p_getattr_l__self___trunk_blocks___9___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_bias[name=p_getattr_l__self___trunk_blocks___9___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=110](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm1_bias[name=p_getattr_l__self___trunk_blocks___9___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=111](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=112](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___9___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___9___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=113](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___9___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___9___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=114](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___9___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_weight[name=p_getattr_l__self___trunk_blocks___9___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=115](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_weight[name=p_getattr_l__self___trunk_blocks___9___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_bias[name=p_getattr_l__self___trunk_blocks___9___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=116](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___norm2_bias[name=p_getattr_l__self___trunk_blocks___9___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=117](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=118](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=119](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=120](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___9___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_weight[name=p_getattr_l__self___trunk_blocks___10___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=121](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___10___norm1_weight[name=p_getattr_l__self___trunk_blocks___10___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_bias[name=p_getattr_l__self___trunk_blocks___10___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=122](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm1_bias[name=p_getattr_l__self___trunk_blocks___10___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=123](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=124](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___10___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___10___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=125](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___10___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___10___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=126](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___10___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_weight[name=p_getattr_l__self___trunk_blocks___10___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=127](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_weight[name=p_getattr_l__self___trunk_blocks___10___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_bias[name=p_getattr_l__self___trunk_blocks___10___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=128](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___norm2_bias[name=p_getattr_l__self___trunk_blocks___10___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=129](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=130](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=131](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=132](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___10___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_weight[name=p_getattr_l__self___trunk_blocks___11___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=133](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___11___norm1_weight[name=p_getattr_l__self___trunk_blocks___11___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_bias[name=p_getattr_l__self___trunk_blocks___11___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=134](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm1_bias[name=p_getattr_l__self___trunk_blocks___11___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=135](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=136](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___11___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___11___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=137](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___11___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___11___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=138](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___11___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_weight[name=p_getattr_l__self___trunk_blocks___11___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=139](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_weight[name=p_getattr_l__self___trunk_blocks___11___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_bias[name=p_getattr_l__self___trunk_blocks___11___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=140](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___norm2_bias[name=p_getattr_l__self___trunk_blocks___11___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=141](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=142](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=143](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=144](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___11___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_weight[name=p_getattr_l__self___trunk_blocks___12___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=145](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___12___norm1_weight[name=p_getattr_l__self___trunk_blocks___12___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_bias[name=p_getattr_l__self___trunk_blocks___12___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=146](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm1_bias[name=p_getattr_l__self___trunk_blocks___12___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=147](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=148](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___12___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___12___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=149](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___12___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___12___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=150](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___12___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_weight[name=p_getattr_l__self___trunk_blocks___12___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=151](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_weight[name=p_getattr_l__self___trunk_blocks___12___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_bias[name=p_getattr_l__self___trunk_blocks___12___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=152](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___norm2_bias[name=p_getattr_l__self___trunk_blocks___12___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=153](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=154](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=155](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=156](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___12___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_weight[name=p_getattr_l__self___trunk_blocks___13___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=157](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___13___norm1_weight[name=p_getattr_l__self___trunk_blocks___13___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_bias[name=p_getattr_l__self___trunk_blocks___13___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=158](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm1_bias[name=p_getattr_l__self___trunk_blocks___13___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=159](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=160](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___13___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___13___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=161](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___13___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___13___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=162](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___13___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_weight[name=p_getattr_l__self___trunk_blocks___13___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=163](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_weight[name=p_getattr_l__self___trunk_blocks___13___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_bias[name=p_getattr_l__self___trunk_blocks___13___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=164](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___norm2_bias[name=p_getattr_l__self___trunk_blocks___13___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=165](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=166](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=167](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=168](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___13___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_weight[name=p_getattr_l__self___trunk_blocks___14___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=169](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___14___norm1_weight[name=p_getattr_l__self___trunk_blocks___14___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_bias[name=p_getattr_l__self___trunk_blocks___14___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=170](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm1_bias[name=p_getattr_l__self___trunk_blocks___14___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=171](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=172](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___14___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___14___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=173](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___14___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___14___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=174](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___14___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_weight[name=p_getattr_l__self___trunk_blocks___14___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=175](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_weight[name=p_getattr_l__self___trunk_blocks___14___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_bias[name=p_getattr_l__self___trunk_blocks___14___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=176](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___norm2_bias[name=p_getattr_l__self___trunk_blocks___14___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=177](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=178](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=179](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=180](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___14___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_weight[name=p_getattr_l__self___trunk_blocks___15___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=181](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___15___norm1_weight[name=p_getattr_l__self___trunk_blocks___15___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_bias[name=p_getattr_l__self___trunk_blocks___15___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=182](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm1_bias[name=p_getattr_l__self___trunk_blocks___15___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=183](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=184](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___15___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___15___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=185](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___15___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___15___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=186](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___15___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_weight[name=p_getattr_l__self___trunk_blocks___15___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=187](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_weight[name=p_getattr_l__self___trunk_blocks___15___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_bias[name=p_getattr_l__self___trunk_blocks___15___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=188](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___norm2_bias[name=p_getattr_l__self___trunk_blocks___15___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=189](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=190](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=191](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=192](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___15___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_weight[name=p_getattr_l__self___trunk_blocks___16___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=193](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___16___norm1_weight[name=p_getattr_l__self___trunk_blocks___16___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_bias[name=p_getattr_l__self___trunk_blocks___16___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=194](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm1_bias[name=p_getattr_l__self___trunk_blocks___16___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=195](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=196](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___16___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___16___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=197](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___16___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___16___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=198](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___16___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_weight[name=p_getattr_l__self___trunk_blocks___16___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=199](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_weight[name=p_getattr_l__self___trunk_blocks___16___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_bias[name=p_getattr_l__self___trunk_blocks___16___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=200](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___norm2_bias[name=p_getattr_l__self___trunk_blocks___16___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=201](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=202](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=203](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=204](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___16___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_weight[name=p_getattr_l__self___trunk_blocks___17___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=205](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___17___norm1_weight[name=p_getattr_l__self___trunk_blocks___17___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_bias[name=p_getattr_l__self___trunk_blocks___17___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=206](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm1_bias[name=p_getattr_l__self___trunk_blocks___17___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=207](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=208](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___17___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___17___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=209](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___17___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___17___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=210](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___17___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_weight[name=p_getattr_l__self___trunk_blocks___17___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=211](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_weight[name=p_getattr_l__self___trunk_blocks___17___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_bias[name=p_getattr_l__self___trunk_blocks___17___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=212](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___norm2_bias[name=p_getattr_l__self___trunk_blocks___17___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=213](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=214](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=215](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=216](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___17___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_weight[name=p_getattr_l__self___trunk_blocks___18___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=217](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___18___norm1_weight[name=p_getattr_l__self___trunk_blocks___18___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_bias[name=p_getattr_l__self___trunk_blocks___18___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=218](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm1_bias[name=p_getattr_l__self___trunk_blocks___18___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=219](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=220](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___18___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___18___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=221](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___18___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___18___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=222](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___18___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_weight[name=p_getattr_l__self___trunk_blocks___18___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=223](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_weight[name=p_getattr_l__self___trunk_blocks___18___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_bias[name=p_getattr_l__self___trunk_blocks___18___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=224](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___norm2_bias[name=p_getattr_l__self___trunk_blocks___18___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=225](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=226](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=227](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=228](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___18___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_weight[name=p_getattr_l__self___trunk_blocks___19___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=229](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___19___norm1_weight[name=p_getattr_l__self___trunk_blocks___19___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_bias[name=p_getattr_l__self___trunk_blocks___19___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=230](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm1_bias[name=p_getattr_l__self___trunk_blocks___19___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=231](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=232](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___19___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___19___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=233](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___19___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___19___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=234](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___19___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_weight[name=p_getattr_l__self___trunk_blocks___19___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=235](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_weight[name=p_getattr_l__self___trunk_blocks___19___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_bias[name=p_getattr_l__self___trunk_blocks___19___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=236](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___norm2_bias[name=p_getattr_l__self___trunk_blocks___19___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=237](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=238](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=239](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=240](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___19___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_weight[name=p_getattr_l__self___trunk_blocks___20___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=241](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___20___norm1_weight[name=p_getattr_l__self___trunk_blocks___20___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_bias[name=p_getattr_l__self___trunk_blocks___20___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=242](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm1_bias[name=p_getattr_l__self___trunk_blocks___20___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=243](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=244](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___20___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___20___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=245](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___20___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___20___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=246](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___20___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_weight[name=p_getattr_l__self___trunk_blocks___20___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=247](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_weight[name=p_getattr_l__self___trunk_blocks___20___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_bias[name=p_getattr_l__self___trunk_blocks___20___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=248](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___norm2_bias[name=p_getattr_l__self___trunk_blocks___20___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=249](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=250](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=251](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=252](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___20___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_weight[name=p_getattr_l__self___trunk_blocks___21___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=253](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___21___norm1_weight[name=p_getattr_l__self___trunk_blocks___21___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_bias[name=p_getattr_l__self___trunk_blocks___21___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=254](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm1_bias[name=p_getattr_l__self___trunk_blocks___21___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=255](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=256](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___21___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___21___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=257](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___21___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___21___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=258](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___21___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_weight[name=p_getattr_l__self___trunk_blocks___21___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=259](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_weight[name=p_getattr_l__self___trunk_blocks___21___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_bias[name=p_getattr_l__self___trunk_blocks___21___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=260](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___norm2_bias[name=p_getattr_l__self___trunk_blocks___21___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=261](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=262](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=263](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=264](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___21___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_weight[name=p_getattr_l__self___trunk_blocks___22___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=265](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___22___norm1_weight[name=p_getattr_l__self___trunk_blocks___22___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_bias[name=p_getattr_l__self___trunk_blocks___22___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=266](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm1_bias[name=p_getattr_l__self___trunk_blocks___22___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=267](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=268](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___22___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___22___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=269](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___22___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___22___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=270](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___22___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_weight[name=p_getattr_l__self___trunk_blocks___22___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=271](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_weight[name=p_getattr_l__self___trunk_blocks___22___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_bias[name=p_getattr_l__self___trunk_blocks___22___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=272](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___norm2_bias[name=p_getattr_l__self___trunk_blocks___22___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=273](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=274](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=275](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=276](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___22___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_weight[name=p_getattr_l__self___trunk_blocks___23___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=277](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___23___norm1_weight[name=p_getattr_l__self___trunk_blocks___23___norm1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_bias[name=p_getattr_l__self___trunk_blocks___23___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=278](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm1_bias[name=p_getattr_l__self___trunk_blocks___23___norm1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=279](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=280](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___23___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___23___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=281](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___23___attn_proj_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___23___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=282](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___23___attn_proj_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_weight[name=p_getattr_l__self___trunk_blocks___23___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=283](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_weight[name=p_getattr_l__self___trunk_blocks___23___norm2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_bias[name=p_getattr_l__self___trunk_blocks___23___norm2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=284](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___norm2_bias[name=p_getattr_l__self___trunk_blocks___23___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=285](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=286](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=287](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=288](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___23___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:clone[name=clone]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(clone)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 828, in forward_features\n x = self._pos_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 720, in _pos_embed\n return self.pos_drop(x)\n\n```\n## Return values\n", "text":"FX Node: placeholder:clone[name=clone]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=7](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=8](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=9](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: 
placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=10](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=11](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=12](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:clone[name=clone]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(clone)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 828, in forward_features\n x = self._pos_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 720, in _pos_embed\n return self.pos_drop(x)\n\n```\n## Return values\n", "text":"FX Node: placeholder:clone[name=clone]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_weight[name=p_getattr_l__self___trunk_blocks___0___norm1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm1_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm1_bias[name=p_getattr_l__self___trunk_blocks___0___norm1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.native_layer_norm.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::native_layer_norm.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.native_layer_norm.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.native_layer_norm.default. \nONNX Node: aten_native_layer_norm[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::native_layer_norm.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=5](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\nList[length=1](\n1024,\n),\n`TorchScriptTensor(f32[1024])`,\n`TorchScriptTensor(f32[1024])`,\n1e-06,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_native_layer_norm)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_native_layer_norm)`", "text":"FX Node: aten.native_layer_norm.default. \nONNX Node: aten_native_layer_norm[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.native_layer_norm.default[name=native_layer_norm]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.native_layer_norm.default[name=native_layer_norm]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:[name=getitem]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\nnative_layer_norm: Tuple[length=3](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n`TorchScriptTensor(f32[4, 1024, 1])`,\n`TorchScriptTensor(f32[4, 1024, 1])`,\n),\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = 
self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Return values\n", "text":"FX Node: call_function:[name=getitem]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\nnative_layer_norm: Tuple[length=3](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n`TorchScriptTensor(f32[4, 1024, 1])`,\n`TorchScriptTensor(f32[4, 1024, 1])`,\n),\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_normalization_LayerNorm. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: torch_nn_modules_normalization_LayerNorm. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___norm1_1[name=getattr_l__self___trunk_blocks___0___norm1_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___norm1_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=13](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## PyTorch source information\n```\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Return values\n", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___norm1_1[name=getattr_l__self___trunk_blocks___0___norm1_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:getitem[name=getitem]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getitem)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Return values\n", "text":"FX Node: placeholder:getitem[name=getitem]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:getitem[name=getitem]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getitem)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Return values\n", "text":"FX Node: placeholder:getitem[name=getitem]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_weight)[placeholder]:Tensor(f32[3072, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_weight[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_qkv_bias)[placeholder]:Tensor(f32[3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_qkv_bias[name=p_getattr_l__self___trunk_blocks___0___attn_qkv_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\nList[length=2](\n4096,\n1024,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 88, in forward\n qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\n\n```\n## Return values\n", "text":"FX 
Node: call_function:aten.view.default[name=view_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.t.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::t.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.t.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.t.default. \nONNX Node: aten_t[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::t.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=1](\n`TorchScriptTensor(f32[3072, 1024])`,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_t)`\nmatch score: 1\n## Return values\n`TracedOnnxFunction(aten_t)`", "text":"FX Node: aten.t.default. \nONNX Node: aten_t[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.t.default[name=t]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\nview_1: `TorchScriptTensor(f32[4096, 1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 88, in forward\n qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 
4)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.t.default[name=t]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.addmm.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::addmm.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.addmm.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.addmm.default. \nONNX Node: aten_addmm[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::addmm.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=3](\n`TorchScriptTensor(f32[3072])`,\n`TorchScriptTensor(f32[4096, 1024])`,\n`TorchScriptTensor(f32[1024, 3072])`,\n)\n- onnx_kwargs: Dict[length=2](\nbeta: 1,\nalpha: 1,\n)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_addmm)`\n### Failed: attribute 'beta' type mismatch!\nActual vs\nExpected AttrType.FLOAT\n### Failed: attribute 'alpha' type mismatch!\nActual vs\nExpected AttrType.FLOAT\nmatch score: 1\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. Please check the ONNX output carefully. \n\n## Return values\n`TracedOnnxFunction(aten_addmm)`", "text":"FX Node: aten.addmm.default. \nONNX Node: aten_addmm[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.addmm.default[name=addmm]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\nview_1: `TorchScriptTensor(f32[4096, 1024])`,\nt: `TorchScriptTensor(f32[1024, 3072])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 88, in forward\n qkv = self.qkv(x).reshape(B, N, 3, 
self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.addmm.default[name=addmm]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4096, 3072])`,\nList[length=3](\n4,\n1024,\n3072,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view_2]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3072])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\nview_1: `TorchScriptTensor(f32[4096, 1024])`,\nt: `TorchScriptTensor(f32[1024, 3072])`,\naddmm: `TorchScriptTensor(f32[4096, 3072])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 88, in 
forward\n qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.view.default[name=view_2]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=7](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\nview_1: `TorchScriptTensor(f32[4096, 1024])`,\nt: `TorchScriptTensor(f32[1024, 3072])`,\naddmm: `TorchScriptTensor(f32[4096, 3072])`,\nview_2: `TorchScriptTensor(f32[4, 1024, 3072])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_linear_Linear. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: torch_nn_modules_linear_Linear. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___attn_qkv_1[name=getattr_l__self___trunk_blocks___0___attn_qkv_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___attn_qkv_1)[call_module]:Tensor(f32[4, 1024, 3072])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 88, in forward\n qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\n\n```\n## Return values\n", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___attn_qkv_1[name=getattr_l__self___trunk_blocks___0___attn_qkv_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, 
"startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 3072])`,\nList[length=5](\n4,\n1024,\n3,\n16,\n64,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view_3]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 3, 16, 64])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in 
_call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 88, in forward\n qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.view.default[name=view_3]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.permute.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::permute.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.permute.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, 
"startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.permute.default. \nONNX Node: aten_permute[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::permute.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\nList[length=5](\n2,\n0,\n3,\n1,\n4,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_permute)`\nmatch score: 0\n## Return values\n`TracedOnnxFunction(aten_permute)`", "text":"FX Node: aten.permute.default. \nONNX Node: aten_permute[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.permute.default[name=permute]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.permute.default)[call_function]:Tensor(f32[3, 4, 16, 1024, 64])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=7](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 88, in forward\n qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.permute.default[name=permute]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.unbind.int' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::unbind.int, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.unbind.int' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ 
"uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.unbind.int. \nONNX Node: aten_unbind[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::unbind.int, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=1](\n`TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\n)\n- onnx_kwargs: Dict[length=1](\ndim: 0,\n)\n- diagnostic_context: \n## Checking perfect match...\n`OnnxFunction(aten_unbind)`\nmatch score: 1\n## Return values\n`OnnxFunction(aten_unbind)`", "text":"FX Node: aten.unbind.int. \nONNX Node: aten_unbind[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.unbind.int[name=unbind]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.unbind.int)[call_function]:List[length=3](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 1024, 64]),\n)\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=8](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 
169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 89, in forward\n q, k, v = qkv.unbind(0)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.unbind.int[name=unbind]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: '' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::getitem.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: '' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, 
"region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: . \nONNX Node: aten_getitem[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::getitem.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor()`,\n0,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`OnnxFunction(aten_getitem)`\n### Failed: input type mismatch for input 'self'!\nActual set() vs\nExpected {'seq(tensor(int8))', 'seq(tensor(complex64))', 'seq(tensor(complex128))', 'seq(tensor(int32))', 'seq(tensor(bfloat16))', 'seq(tensor(float))', 'seq(tensor(int64))', 'seq(tensor(int16))', 'seq(tensor(bool))', 'seq(tensor(uint8))', 'seq(tensor(float16))', 'seq(tensor(double))'}\nmatch score: 1\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. Please check the ONNX output carefully. \n\n## Return values\n`OnnxFunction(aten_getitem)`", "text":"FX Node: . \nONNX Node: aten_getitem[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:[name=getitem_3]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=9](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 89, in forward\n q, k, v = qkv.unbind(0)\n\n```\n## Return values\n", "text":"FX Node: call_function:[name=getitem_3]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: '' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- diagnostic_context: \n## Return 
values\nList[length=1](\nregistration.ONNXFunction(aten::getitem.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: '' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: . \nONNX Node: aten_getitem[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::getitem.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor()`,\n1,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`OnnxFunction(aten_getitem)`\n### Failed: input type mismatch for input 'self'!\nActual set() vs\nExpected {'seq(tensor(int8))', 'seq(tensor(complex64))', 'seq(tensor(complex128))', 'seq(tensor(int32))', 'seq(tensor(bfloat16))', 'seq(tensor(float))', 'seq(tensor(int64))', 'seq(tensor(int16))', 'seq(tensor(bool))', 'seq(tensor(uint8))', 'seq(tensor(float16))', 'seq(tensor(double))'}\nmatch score: 1\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. Please check the ONNX output carefully. \n\n## Return values\n`OnnxFunction(aten_getitem)`", "text":"FX Node: . \nONNX Node: aten_getitem[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:[name=getitem_4]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=10](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return 
forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 89, in forward\n q, k, v = qkv.unbind(0)\n\n```\n## Return values\n", "text":"FX Node: call_function:[name=getitem_4]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: '' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::getitem.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: '' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: . \nONNX Node: aten_getitem[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::getitem.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor()`,\n2,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`OnnxFunction(aten_getitem)`\n### Failed: input type mismatch for input 'self'!\nActual set() vs\nExpected {'seq(tensor(int8))', 'seq(tensor(complex64))', 'seq(tensor(complex128))', 'seq(tensor(int32))', 'seq(tensor(bfloat16))', 'seq(tensor(float))', 'seq(tensor(int64))', 'seq(tensor(int16))', 'seq(tensor(bool))', 'seq(tensor(uint8))', 'seq(tensor(float16))', 'seq(tensor(double))'}\nmatch score: 1\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. Please check the ONNX output carefully. \n\n## Return values\n`OnnxFunction(aten_getitem)`", "text":"FX Node: . \nONNX Node: aten_getitem[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:[name=getitem_5]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=11](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in 
forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 89, in forward\n q, k, v = qkv.unbind(0)\n\n```\n## Return values\n", "text":"FX Node: call_function:[name=getitem_5]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten._scaled_dot_product_efficient_attention.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::_scaled_dot_product_efficient_attention.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten._scaled_dot_product_efficient_attention.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", 
"locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten._scaled_dot_product_efficient_attention.default. \nONNX Node: aten__scaled_dot_product_efficient_attention[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::_scaled_dot_product_efficient_attention.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=5](\n`TorchScriptTensor(f32[4, 16, 1024, 64])`,\n`TorchScriptTensor(f32[4, 16, 1024, 64])`,\n`TorchScriptTensor(f32[4, 16, 1024, 64])`,\n,\nFalse,\n)\n- onnx_kwargs: Dict[length=3](\ndropout_p: 0.0,\nis_causal: False,\nscale: ,\n)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten__scaled_dot_product_efficient_attention)`\n### Failed: attribute 'scale' type mismatch!\nActual vs\nExpected AttrType.FLOAT\nmatch score: 2\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. Please check the ONNX output carefully. \n\n## Return values\n`TracedOnnxFunction(aten__scaled_dot_product_efficient_attention)`", "text":"FX Node: aten._scaled_dot_product_efficient_attention.default. \nONNX Node: aten__scaled_dot_product_efficient_attention[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten._scaled_dot_product_efficient_attention.default[name=_scaled_dot_product_efficient_attention]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten._scaled_dot_product_efficient_attention.default)[call_function]:Tuple[length=4](\nTensor(f32[4, 16, 1024, 64]),\nTensor(f32[4, 16, 0]),\nTensor(i64[]),\nTensor(i64[]),\n)\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=12](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n...\n)\n## PyTorch source information\n```\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 93, in forward\n x = F.scaled_dot_product_attention(\n\n```\n## Return values\n", "text":"FX Node: call_function:aten._scaled_dot_product_efficient_attention.default[name=_scaled_dot_product_efficient_attention]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:[name=getitem_6]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 16, 1024, 64])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=13](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 
196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 93, in forward\n x = F.scaled_dot_product_attention(\n\n```\n## Return values\n", "text":"FX Node: call_function:[name=getitem_6]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.transpose.int' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::transpose.int, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.transpose.int' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.transpose.int. \nONNX Node: aten_transpose[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::transpose.int, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=3](\n`TorchScriptTensor(f32[4, 16, 1024, 64])`,\n1,\n2,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_transpose)`\nmatch score: 1\n## Return values\n`TracedOnnxFunction(aten_transpose)`", "text":"FX Node: aten.transpose.int. \nONNX Node: aten_transpose[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.transpose.int[name=transpose_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.transpose.int)[call_function]:Tensor(f32[4, 1024, 16, 64])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=14](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 104, in forward\n x = x.transpose(1, 2).reshape(B, N, C)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.transpose.int[name=transpose_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, 
"physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 16, 64])`,\nList[length=3](\n4,\n1024,\n1024,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view_4]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=15](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 104, in forward\n x = x.transpose(1, 2).reshape(B, N, C)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.view.default[name=view_4]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:view_4[name=view_4]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(view_4)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 104, in forward\n x = x.transpose(1, 2).reshape(B, N, C)\n\n```\n## Return values\n", "text":"FX Node: placeholder:view_4[name=view_4]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_weight)[placeholder]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nview_4: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_weight[name=p_getattr_l__self___trunk_blocks___0___attn_proj_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___attn_proj_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nview_4: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___attn_proj_bias[name=p_getattr_l__self___trunk_blocks___0___attn_proj_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\nList[length=2](\n4096,\n1024,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view_5]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\nview_4: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 105, in forward\n x = self.proj(x)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.view.default[name=view_5]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.t.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::t.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.t.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.t.default. \nONNX Node: aten_t[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::t.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=1](\n`TorchScriptTensor(f32[1024, 1024])`,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_t)`\nmatch score: 1\n## Return values\n`TracedOnnxFunction(aten_t)`", "text":"FX Node: aten.t.default. \nONNX Node: aten_t[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.t.default[name=t_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\nview_4: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\nview_5: `TorchScriptTensor(f32[4096, 1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 105, in forward\n x = self.proj(x)\n\n```\n## Return values\n", "text":"FX Node: 
call_function:aten.t.default[name=t_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.addmm.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::addmm.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.addmm.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.addmm.default. \nONNX Node: aten_addmm[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::addmm.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=3](\n`TorchScriptTensor(f32[1024])`,\n`TorchScriptTensor(f32[4096, 1024])`,\n`TorchScriptTensor(f32[1024, 1024])`,\n)\n- onnx_kwargs: Dict[length=2](\nbeta: 1,\nalpha: 1,\n)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_addmm)`\n### Failed: attribute 'beta' type mismatch!\nActual vs\nExpected AttrType.FLOAT\n### Failed: attribute 'alpha' type mismatch!\nActual vs\nExpected AttrType.FLOAT\nmatch score: 1\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. Please check the ONNX output carefully. \n\n## Return values\n`TracedOnnxFunction(aten_addmm)`", "text":"FX Node: aten.addmm.default. \nONNX Node: aten_addmm[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.addmm.default[name=addmm_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\nview_4: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\nview_5: `TorchScriptTensor(f32[4096, 1024])`,\nt_1: `TorchScriptTensor(f32[1024, 1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 105, in forward\n x = self.proj(x)\n\n```\n## Return 
values\n", "text":"FX Node: call_function:aten.addmm.default[name=addmm_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4096, 1024])`,\nList[length=3](\n4,\n1024,\n1024,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view_6]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\nview_4: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\nview_5: `TorchScriptTensor(f32[4096, 1024])`,\nt_1: `TorchScriptTensor(f32[1024, 1024])`,\naddmm_1: `TorchScriptTensor(f32[4096, 1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 105, in 
forward\n x = self.proj(x)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.view.default[name=view_6]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=7](\nview_4: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\nview_5: `TorchScriptTensor(f32[4096, 1024])`,\nt_1: `TorchScriptTensor(f32[1024, 1024])`,\naddmm_1: `TorchScriptTensor(f32[4096, 1024])`,\nview_6: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_linear_Linear. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: torch_nn_modules_linear_Linear. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___attn_proj_1[name=getattr_l__self___trunk_blocks___0___attn_proj_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___attn_proj_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=16](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 105, in forward\n x = self.proj(x)\n\n```\n## Return values\n", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___attn_proj_1[name=getattr_l__self___trunk_blocks___0___attn_proj_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:view_6[name=view_6]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(view_6)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_dropout_Dropout)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 105, in forward\n x = self.proj(x)\n\n```\n## Return values\n", "text":"FX Node: placeholder:view_6[name=view_6]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.clone.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::clone.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.clone.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.clone.default. \nONNX Node: aten_clone[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::clone.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=1](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_clone)`\nmatch score: 1\n## Return values\n`TracedOnnxFunction(aten_clone)`", "text":"FX Node: aten.clone.default. \nONNX Node: aten_clone[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.clone.default[name=clone_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.clone.default)[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_dropout_Dropout)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nview_6: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 106, in forward\n x = self.proj_drop(x)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.clone.default[name=clone_1]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_dropout_Dropout)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nview_6: `TorchScriptTensor(f32[4, 1024, 1024])`,\nclone_1: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_dropout_Dropout. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_dropout_Dropout)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: torch_nn_modules_dropout_Dropout. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___attn_proj_drop_1[name=getattr_l__self___trunk_blocks___0___attn_proj_drop_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___attn_proj_drop_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=17](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 
64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 106, in forward\n x = self.proj_drop(x)\n\n```\n## Return values\n", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___attn_proj_drop_1[name=getattr_l__self___trunk_blocks___0___attn_proj_drop_1]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=18](\ngetitem: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___attn_qkv_1: `TorchScriptTensor(f32[4, 1024, 3072])`,\nview_3: `TorchScriptTensor(f32[4, 1024, 3, 16, 64])`,\npermute: `TorchScriptTensor(f32[3, 4, 16, 1024, 64])`,\nunbind: `TorchScriptTensor()`,\ngetitem_3: `TorchScriptTensor(f32[4, 16, 1024, 64])`,\n...\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: timm_models_vision_transformer_Attention. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Attention)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: timm_models_vision_transformer_Attention. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___attn_1[name=getattr_l__self___trunk_blocks___0___attn_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___attn_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=14](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 88, in forward\n qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\n\n```\n## Return values\n", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___attn_1[name=getattr_l__self___trunk_blocks___0___attn_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.add.Tensor' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\n- diagnostic_context: \n## Return values\nList[length=2](\nregistration.ONNXFunction(aten::add.Tensor, is_custom=False, 
is_complex=False),\nregistration.ONNXFunction(aten::add.Tensor, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.add.Tensor' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.add.Tensor. \nONNX Node: aten_add[opset=pkg.onnxscript.torch_lib;is_custom=False]. \nONNX Node: aten_logical_or[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\n- default_and_custom_functions: List[length=2](\nregistration.ONNXFunction(aten::add.Tensor, is_custom=False, is_complex=False),\nregistration.ONNXFunction(aten::add.Tensor, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n- onnx_kwargs: Dict[length=1](\nalpha: 1,\n)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_logical_or)`\n### Failed: attribute mismatch!\nActual {'alpha'} vs expected set()\nThe function is not a nearest match candidate.\n## Checking perfect match...\n`TracedOnnxFunction(aten_add)`\n### Failed: attribute 'alpha' type mismatch!\nActual vs\nExpected AttrType.FLOAT\nmatch score: 1\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. 
Please check the ONNX output carefully. \n\n## Return values\n`TracedOnnxFunction(aten_add)`", "text":"FX Node: aten.add.Tensor. \nONNX Node: aten_add[opset=pkg.onnxscript.torch_lib;is_custom=False]. \nONNX Node: aten_logical_or[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.add.Tensor[name=add_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.add.Tensor)[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=15](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.add.Tensor[name=add_1]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:add_1[name=add_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(add_1)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Return values\n", "text":"FX Node: placeholder:add_1[name=add_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_weight)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nadd_1: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_weight[name=p_getattr_l__self___trunk_blocks___0___norm2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___norm2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\nadd_1: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___norm2_bias[name=p_getattr_l__self___trunk_blocks___0___norm2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.native_layer_norm.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::native_layer_norm.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.native_layer_norm.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.native_layer_norm.default. \nONNX Node: aten_native_layer_norm[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::native_layer_norm.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=5](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\nList[length=1](\n1024,\n),\n`TorchScriptTensor(f32[1024])`,\n`TorchScriptTensor(f32[1024])`,\n1e-06,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_native_layer_norm)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_native_layer_norm)`", "text":"FX Node: aten.native_layer_norm.default. \nONNX Node: aten_native_layer_norm[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.native_layer_norm.default[name=native_layer_norm_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.native_layer_norm.default)[call_function]:Tuple[length=3](\nTensor(f32[4, 1024, 1024]),\nTensor(f32[4, 1024, 1]),\nTensor(f32[4, 1024, 1]),\n)\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\nadd_1: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.native_layer_norm.default[name=native_layer_norm_1]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:[name=getitem_10]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node()[call_function]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\nadd_1: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\nnative_layer_norm_1: Tuple[length=3](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n`TorchScriptTensor(f32[4, 1024, 1])`,\n`TorchScriptTensor(f32[4, 1024, 1])`,\n),\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = 
self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n\n```\n## Return values\n", "text":"FX Node: call_function:[name=getitem_10]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\nadd_1: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\nnative_layer_norm_1: Tuple[length=3](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\n`TorchScriptTensor(f32[4, 1024, 1])`,\n`TorchScriptTensor(f32[4, 1024, 1])`,\n),\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_normalization_LayerNorm. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_normalization_LayerNorm)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: torch_nn_modules_normalization_LayerNorm. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___norm2_1[name=getattr_l__self___trunk_blocks___0___norm2_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___norm2_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=16](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## PyTorch source information\n```\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n\n```\n## Return values\n", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___norm2_1[name=getattr_l__self___trunk_blocks___0___norm2_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:getitem_10[name=getitem_10]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getitem_10)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_mlp_Mlp)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n\n```\n## Return values\n", "text":"FX Node: placeholder:getitem_10[name=getitem_10]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_mlp_Mlp)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_mlp_Mlp)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight)[placeholder]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_mlp_Mlp)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias)[placeholder]:Tensor(f32[1024])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_mlp_Mlp)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc2_weight: `TorchScriptTensor(f32[1024, 4096])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc2_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:getitem_10[name=getitem_10]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getitem_10)[placeholder]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n\n```\n## Return values\n", "text":"FX Node: placeholder:getitem_10[name=getitem_10]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight)[placeholder]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_weight]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias)[placeholder]:Tensor(f32[4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=2](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n)\n## Return values\n", "text":"FX Node: placeholder:p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias[name=p_getattr_l__self___trunk_blocks___0___mlp_fc1_bias]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4, 1024, 1024])`,\nList[length=2](\n4096,\n1024,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view_7]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4096, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=3](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 44, in forward\n x = self.fc1(x)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.view.default[name=view_7]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.t.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::t.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.t.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.t.default. \nONNX Node: aten_t[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::t.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=1](\n`TorchScriptTensor(f32[4096, 1024])`,\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_t)`\nmatch score: 1\n## Return values\n`TracedOnnxFunction(aten_t)`", "text":"FX Node: aten.t.default. \nONNX Node: aten_t[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.t.default[name=t_2]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.t.default)[call_function]:Tensor(f32[1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=4](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\nview_7: `TorchScriptTensor(f32[4096, 1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 44, in forward\n x = self.fc1(x)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.t.default[name=t_2]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.addmm.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::addmm.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.addmm.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.addmm.default. \nONNX Node: aten_addmm[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::addmm.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=3](\n`TorchScriptTensor(f32[4096])`,\n`TorchScriptTensor(f32[4096, 1024])`,\n`TorchScriptTensor(f32[1024, 4096])`,\n)\n- onnx_kwargs: Dict[length=2](\nbeta: 1,\nalpha: 1,\n)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_addmm)`\n### Failed: attribute 'beta' type mismatch!\nActual vs\nExpected AttrType.FLOAT\n### Failed: attribute 'alpha' type mismatch!\nActual vs\nExpected AttrType.FLOAT\nmatch score: 1\n### Exact match is not found!\nCannot find a perfect match of symbolic overload, a nearest match is found. Please check the ONNX output carefully. \n\n## Return values\n`TracedOnnxFunction(aten_addmm)`", "text":"FX Node: aten.addmm.default. \nONNX Node: aten_addmm[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"warning", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.addmm.default[name=addmm_2]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.addmm.default)[call_function]:Tensor(f32[4096, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\nview_7: `TorchScriptTensor(f32[4096, 1024])`,\nt_2: `TorchScriptTensor(f32[1024, 4096])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 44, in forward\n x = self.fc1(x)\n\n```\n## Return values\n", "text":"FX 
Node: call_function:aten.addmm.default[name=addmm_2]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.view.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.view.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::view.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=2](\n`TorchScriptTensor(f32[4096, 4096])`,\nList[length=3](\n4,\n1024,\n4096,\n),\n)\n- onnx_kwargs: Dict[length=0](\nNone)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_view)`\nmatch score: 2\n## Return values\n`TracedOnnxFunction(aten_view)`", "text":"FX Node: aten.view.default. \nONNX Node: aten_view[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.view.default[name=view_8]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.view.default)[call_function]:Tensor(f32[4, 1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\nview_7: `TorchScriptTensor(f32[4096, 1024])`,\nt_2: `TorchScriptTensor(f32[1024, 4096])`,\naddmm_2: `TorchScriptTensor(f32[4096, 4096])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 44, in forward\n x = 
self.fc1(x)\n\n```\n## Return values\n", "text":"FX Node: call_function:aten.view.default[name=view_8]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: output:output[name=output]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(output)[output]:None\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=7](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\nview_7: `TorchScriptTensor(f32[4096, 1024])`,\nt_2: `TorchScriptTensor(f32[1024, 4096])`,\naddmm_2: `TorchScriptTensor(f32[4096, 4096])`,\nview_8: `TorchScriptTensor(f32[4, 1024, 4096])`,\n)\n## Return values\n", "text":"FX Node: output:output[name=output]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_linear_Linear. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_linear_Linear)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Return values\n", "text":"FX Graph: torch_nn_modules_linear_Linear. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___mlp_fc1_1[name=getattr_l__self___trunk_blocks___0___mlp_fc1_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___mlp_fc1_1)[call_module]:Tensor(f32[4, 1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_mlp_Mlp)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=5](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc2_weight: `TorchScriptTensor(f32[1024, 4096])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc2_bias: `TorchScriptTensor(f32[1024])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 44, in forward\n x = self.fc1(x)\n\n```\n## Return values\n", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___mlp_fc1_1[name=getattr_l__self___trunk_blocks___0___mlp_fc1_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Node: placeholder:view_8[name=view_8]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(view_8)[placeholder]:Tensor(f32[4, 1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_activations_GELUTanh)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=0](\nNone)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 44, in forward\n x = self.fc1(x)\n\n```\n## Return values\n", "text":"FX Node: placeholder:view_8[name=view_8]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"Searching operator overload: 'aten.gelu.default' in onnx registry...\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher.get_function_overloads\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\n- diagnostic_context: \n## Return values\nList[length=1](\nregistration.ONNXFunction(aten::gelu.default, is_custom=False, is_complex=False),\n)", "text":"Searching operator overload: 'aten.gelu.default' in onnx registry...\n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher.get_function_overloads" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":353 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0016", "stacks":[] }, { "message":{ "markdown":"FX Node: aten.gelu.default. \nONNX Node: aten_gelu[opset=pkg.onnxscript.torch_lib;is_custom=False]. 
\n\n\n## Additional Message:\n\n## Function Signature\n### Function Signature OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\n- default_and_custom_functions: List[length=1](\nregistration.ONNXFunction(aten::gelu.default, is_custom=False, is_complex=False),\n)\n- onnx_args: Tuple[length=1](\n`TorchScriptTensor(f32[4, 1024, 4096])`,\n)\n- onnx_kwargs: Dict[length=1](\napproximate: tanh,\n)\n- diagnostic_context: \n## Checking perfect match...\n`TracedOnnxFunction(aten_gelu)`\nmatch score: 1\n## Return values\n`TracedOnnxFunction(aten_gelu)`", "text":"FX Node: aten.gelu.default. \nONNX Node: aten_gelu[opset=pkg.onnxscript.torch_lib;is_custom=False]. \n" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"informational", "level":"none", "locations":[ { "message":{ "text":"OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":199 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0014", "stacks":[] }, { "message":{ "markdown":"FX Node: call_function:aten.gelu.default[name=gelu]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(aten.gelu.default)[call_function]:Tensor(f32[4, 1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_activations_GELUTanh)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=1](\nview_8: `TorchScriptTensor(f32[4, 1024, 4096])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 45, in forward\n x = self.act(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/activations.py\", line 159, in forward\n return F.gelu(input, 
approximate='tanh')\n\n```\n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Node: call_function:aten.gelu.default[name=gelu]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: timm_layers_activations_GELUTanh. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(timm_layers_activations_GELUTanh)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Graph: timm_layers_activations_GELUTanh. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___mlp_act_1[name=getattr_l__self___trunk_blocks___0___mlp_act_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___mlp_act_1)[call_module]:Tensor(f32[4, 1024, 4096])\n- fx_graph_module: torch.fx.GraphModule(timm_layers_mlp_Mlp)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=6](\ngetitem_10: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_bias: `TorchScriptTensor(f32[4096])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc2_weight: `TorchScriptTensor(f32[1024, 4096])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc2_bias: `TorchScriptTensor(f32[1024])`,\ngetattr_l__self___trunk_blocks___0___mlp_fc1_1: `TorchScriptTensor(f32[4, 1024, 4096])`,\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n 
return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 45, in forward\n x = self.act(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/activations.py\", line 159, in forward\n return F.gelu(input, approximate='tanh')\n\n```\n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___mlp_act_1[name=getattr_l__self___trunk_blocks___0___mlp_act_1]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: timm_layers_mlp_Mlp. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(timm_layers_mlp_Mlp)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return 
self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Graph: timm_layers_mlp_Mlp. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:getattr_L__self___trunk_blocks___0___mlp_1[name=getattr_l__self___trunk_blocks___0___mlp_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(getattr_L__self___trunk_blocks___0___mlp_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=17](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 170, in forward\n x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/mlp.py\", line 44, in forward\n x = self.fc1(x)\n\n```\n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n 
ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Node: call_module:getattr_L__self___trunk_blocks___0___mlp_1[name=getattr_l__self___trunk_blocks___0___mlp_1]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: timm_models_vision_transformer_Block. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_Block)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in 
wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Graph: timm_models_vision_transformer_Block. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:trunk_blocks_0_1[name=trunk_blocks_0_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(trunk_blocks_0_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=289](\nclone: `TorchScriptTensor(f32[4, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_weight: 
`TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm2_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___mlp_fc1_weight: `TorchScriptTensor(f32[4096, 1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in 
log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Node: call_module:trunk_blocks_0_1[name=trunk_blocks_0_1]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: torch_nn_modules_container_Sequential. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(torch_nn_modules_container_Sequential)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in 
wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Graph: torch_nn_modules_container_Sequential. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:trunk_blocks_1[name=trunk_blocks_1]. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(trunk_blocks_1)[call_module]:Tensor(f32[4, 1024, 1024])\n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=310](\nx: `TorchScriptTensor(f32[4, 3, 512, 512])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 834, in forward_features\n x = self.blocks(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 169, in forward\n x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x))))\n\n```\n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 
577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in 
call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Node: call_module:trunk_blocks_1[name=trunk_blocks_1]. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: timm_models_vision_transformer_VisionTransformer. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule(timm_models_vision_transformer_VisionTransformer)\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- parent_onnxscript_graph: \n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n 
ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return 
self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Graph: timm_models_vision_transformer_VisionTransformer. " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] }, { "message":{ "markdown":"FX Node: call_module:trunk_1[name=trunk_1]. 
\n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run_node\n- self: \n- node: fx.Node(trunk_1)[call_module]:Tensor(f32[4, 1024])\n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n- onnxscript_graph: \n- onnxscript_tracer: \n- fx_name_to_onnxscript_value: Dict[length=307](\np_trunk_pos_embed: `TorchScriptTensor(f32[1, 1024, 1024])`,\np_trunk_attn_pool_latent: `TorchScriptTensor(f32[1, 1, 1024])`,\np_trunk_patch_embed_proj_weight: `TorchScriptTensor(f32[1024, 3, 16, 16])`,\np_trunk_patch_embed_proj_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_weight: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___norm1_bias: `TorchScriptTensor(f32[1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_weight: `TorchScriptTensor(f32[3072, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_qkv_bias: `TorchScriptTensor(f32[3072])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_weight: `TorchScriptTensor(f32[1024, 1024])`,\np_getattr_l__self___trunk_blocks___0___attn_proj_bias: `TorchScriptTensor(f32[1024])`,\n...\n)\n## PyTorch source information\n```\n File \"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py\", line 196, in forward\n x = self.trunk(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 853, in forward\n x = self.forward_features(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/timm/models/vision_transformer.py\", line 827, in forward_features\n x = self.patch_embed(x)\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1725, in _call_impl\n return forward_call(*args, **kwargs)\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/timm/layers/patch_embed.py\", line 131, in forward\n x = self.proj(x)\n\n```\n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in 
wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in 
log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Node: call_module:trunk_1[name=trunk_1]. 
" }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run_node" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":414 } } }, { "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/open_clip/timm_model.py" }, "region":{ "snippet":{ "text":"x = self.trunk(x)" }, "startLine":196 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0008", "stacks":[] }, { "message":{ "markdown":"FX Graph: . \n\n## Additional Message:\n\n## Function Signature\n### Function Signature FxOnnxInterpreter.run\n- self: \n- fx_graph_module: torch.fx.GraphModule()\n- onnxfunction_dispatcher: \n- op_level_debug: False\n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 
369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File 
\"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 482, in run_node\n self.call_module(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 811, in call_module\n sub_onnxscript_graph = self.run(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 577, in run\n self.run_node(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 152, in wrapper\n ctx.log_and_raise_if_error(diag)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py\", line 369, in log_and_raise_if_error\n raise diagnostic.source_exception\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 136, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 471, in run_node\n self.call_function(\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py\", line 703, in 
call_function\n ] = symbolic_fn(*onnx_args, **onnx_kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/values.py\", line 625, in __call__\n return self.func(*args, **kwargs)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 474, in aten_gelu\n result = _aten_gelu_approximate_tanh(self)\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnxscript/function_libs/torch_lib/ops/nn.py\", line 496, in _aten_gelu_approximate_tanh\n cubed = op.Pow(self, ir.tensor(3, dtype=self.dtype))\n\n File \"/home/thebears/.local/lib/python3.10/site-packages/onnx_ir/_convenience/_constructors.py\", line 103, in tensor\n raise TypeError(f\"dtype must be an instance of DataType. dtype={dtype}\")\n\nTypeError: dtype must be an instance of DataType. dtype=torch.float32\n\n```", "text":"FX Graph: . " }, "codeFlows":[ { "threadFlows":[ { "locations":[] } ] } ], "graphs":[], "kind":"fail", "level":"error", "locations":[ { "message":{ "text":"FxOnnxInterpreter.run" }, "physicalLocation":{ "artifactLocation":{ "uri":"/home/thebears/.local/lib/python3.10/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py" }, "region":{ "snippet":{ "text":"@_beartype.beartype" }, "startLine":496 } } } ], "properties":{ "tags":[] }, "ruleId":"FXE0007", "stacks":[] } ] } ], "version":"2.1.0", "schemaUri":"https://docs.oasis-open.org/sarif/sarif/v2.1.0/cs01/schemas/sarif-schema-2.1.0.json" }