
chore(pt): move deepmd.pt.infer.deep_eval.eval_model to tests #4153

Merged
merged 2 commits into deepmodeling:devel from move-eval-model on Sep 21, 2024

Conversation

njzjz
Member

@njzjz njzjz commented Sep 20, 2024

Per discussion in #4142 (comment), it should not be a public API, as it lacks maintenance.

Summary by CodeRabbit

  • New Features

    • Introduced a new eval_model function in the testing module to enhance model evaluation capabilities with various input configurations.
  • Bug Fixes

    • Removed the old eval_model function from the main module to streamline functionality and improve code organization.
  • Refactor

    • Consolidated the import of eval_model to a common module across multiple test files for better organization and reduced dependencies.

Per discussion in deepmodeling#4142 (comment), it should not be a public API, as it lacks maintenance.

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
@njzjz njzjz changed the title chore: move deepmd.pt.infer.deep_eval.eval_model to tests chore(pt): move deepmd.pt.infer.deep_eval.eval_model to tests Sep 20, 2024
Contributor

coderabbitai bot commented Sep 20, 2024

Walkthrough

The changes remove the eval_model function from deepmd/pt/infer/deep_eval.py, where it was responsible for evaluating models with various input parameters. In its place, a new eval_model function has been introduced in source/tests/pt/common.py, expanding its functionality and its compatibility with different data formats. Several test files have been updated to import eval_model from the new location, reflecting the restructuring of the codebase.

Changes

Files Change Summary
deepmd/pt/infer/deep_eval.py Removed the eval_model function, which evaluated models based on various parameters.
source/tests/pt/common.py Added a new eval_model function that evaluates models with enhanced functionality and input compatibility.
source/tests/pt/model/test_autodiff.py Modified imports to source eval_model from ..common.
source/tests/pt/model/test_forward_lower.py Moved import of eval_model from deepmd.pt.infer.deep_eval to ..common.
source/tests/pt/model/test_null_input.py Updated import to bring eval_model from ..common instead of deepmd.pt.infer.deep_eval.
source/tests/pt/model/test_permutation.py Changed import of eval_model to ..common.
source/tests/pt/model/test_permutation_denoise.py Adjusted import to source eval_model from ..common.
source/tests/pt/model/test_rot.py Updated import to bring eval_model from ..common.
source/tests/pt/model/test_rot_denoise.py Changed import to source eval_model from ..common.
source/tests/pt/model/test_smooth.py Moved import of eval_model to ..common.
source/tests/pt/model/test_smooth_denoise.py Updated import to source eval_model from ..common.
source/tests/pt/model/test_trans.py Changed import of eval_model to ..common.
source/tests/pt/model/test_trans_denoise.py Updated import to source eval_model from ..common.
source/tests/pt/model/test_unused_params.py Moved import of eval_model from deepmd.pt.infer.deep_eval to ..common.
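The import relocation summarized in the table above can be sketched with a toy package. All names here (pt_tests_demo, test_demo, and the stub eval_model body) are made up for illustration; only the relative-import form `from ..common import eval_model` mirrors what the PR's test files now do:

```python
import os
import sys
import tempfile

# Build a miniature of the new layout on disk: pt_tests_demo/common.py holds
# the shared helper, and a test module under pt_tests_demo/model/ imports it
# with the same relative import the relocated tests use.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "pt_tests_demo")
os.makedirs(os.path.join(pkg, "model"))
for init_dir in (pkg, os.path.join(pkg, "model")):
    open(os.path.join(init_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg, "common.py"), "w") as f:
    f.write("def eval_model():\n    return 'evaluated'\n")
with open(os.path.join(pkg, "model", "test_demo.py"), "w") as f:
    f.write("from ..common import eval_model\n")

sys.path.insert(0, root)
from pt_tests_demo.model import test_demo  # relative import resolves inside the package

print(test_demo.eval_model())  # prints: evaluated
```

Because the helper now lives inside the test package and is reached only via a relative import, it is no longer importable as part of the installed library's public API.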

Possibly related PRs

Suggested reviewers

  • wanghan-iapcm

Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR, between commits 83302a1 and 8c8553f.

Files selected for processing (1)
  • source/tests/pt/common.py (2 hunks)
Additional context used
Ruff
source/tests/pt/common.py

74-74: Multiple isinstance calls for atom_types, merge into a single call

Merge isinstance calls for atom_types

(SIM101)


81-81: Multiple isinstance calls for atom_types, merge into a single call

Merge isinstance calls for atom_types

(SIM101)

Additional comments not posted (2)
source/tests/pt/common.py (2)

45-268: Excellent work on the eval_model function!

The function is well-structured and provides a flexible and convenient interface for evaluating models with various input configurations. It handles different input types, performs necessary type checks and assertions, and includes useful features like batching and denoising. The structured return dictionary makes it easy for callers to access specific outputs.

Great job on enhancing the module's capabilities and ensuring compatibility with different workflows and codebases!


74-74: Simplify isinstance checks for atom_types.

As mentioned in the previous review comments and highlighted by the static analysis tool, the multiple isinstance checks for atom_types can be simplified by combining the types into a tuple in a single isinstance call. This improves readability and conciseness.

Apply this diff to simplify the isinstance checks:

# At line 74:
- assert isinstance(atom_types, torch.Tensor) or isinstance(atom_types, list)
+ assert isinstance(atom_types, (torch.Tensor, list))

# At line 81:
- assert isinstance(atom_types, np.ndarray) or isinstance(atom_types, list)
+ assert isinstance(atom_types, (np.ndarray, list))

Also applies to: 81-81
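The equivalence Ruff relies on here is easy to check. The sketch below uses the built-in list and tuple types in place of torch.Tensor/np.ndarray to stay dependency-free, and check_atom_types is a hypothetical stand-in for the assertions in common.py:

```python
def check_atom_types(atom_types):
    # One isinstance call with a tuple of types is equivalent to chaining
    # `isinstance(x, A) or isinstance(x, B)` (what Ruff SIM101 suggests).
    assert isinstance(atom_types, (tuple, list))

check_atom_types([0, 1, 1])   # list is accepted
check_atom_types((0, 1, 1))   # tuple is accepted

# The merged form agrees with the chained form for any value:
for value in ([0], (0,), "neither", 3.5):
    merged = isinstance(value, (tuple, list))
    chained = isinstance(value, tuple) or isinstance(value, list)
    assert merged is chained
```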




Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR, between commits c084b20 and 83302a1.

Files selected for processing (14)
  • deepmd/pt/infer/deep_eval.py (0 hunks)
  • source/tests/pt/common.py (2 hunks)
  • source/tests/pt/model/test_autodiff.py (1 hunks)
  • source/tests/pt/model/test_forward_lower.py (1 hunks)
  • source/tests/pt/model/test_null_input.py (1 hunks)
  • source/tests/pt/model/test_permutation.py (1 hunks)
  • source/tests/pt/model/test_permutation_denoise.py (1 hunks)
  • source/tests/pt/model/test_rot.py (1 hunks)
  • source/tests/pt/model/test_rot_denoise.py (1 hunks)
  • source/tests/pt/model/test_smooth.py (1 hunks)
  • source/tests/pt/model/test_smooth_denoise.py (1 hunks)
  • source/tests/pt/model/test_trans.py (1 hunks)
  • source/tests/pt/model/test_trans_denoise.py (1 hunks)
  • source/tests/pt/model/test_unused_params.py (1 hunks)
Files not reviewed due to no reviewable changes (1)
  • deepmd/pt/infer/deep_eval.py
Additional context used
Ruff
source/tests/pt/common.py

74-74: Multiple isinstance calls for atom_types, merge into a single call

Merge isinstance calls for atom_types

(SIM101)


81-81: Multiple isinstance calls for atom_types, merge into a single call

Merge isinstance calls for atom_types

(SIM101)

Additional comments not posted (13)
source/tests/pt/model/test_trans_denoise.py (1)

17-19: LGTM!

The change in the import path for the eval_model function looks good. It improves the organization and modularity of the codebase without affecting the functionality.

source/tests/pt/model/test_permutation_denoise.py (1)

17-19: LGTM! The import change aligns with the PR objectives.

The relocation of the eval_model function import from deepmd.pt.infer.deep_eval to ..common is consistent with the goal of moving the function to the tests directory and removing it from the public API. This change contributes to the overall restructuring and maintainability of the codebase without altering the functionality of the code.

source/tests/pt/model/test_unused_params.py (1)

17-19: LGTM!

The change in the import path for the eval_model function is consistent with the PR objective of moving it to the tests directory. As long as the function is correctly defined in the new location, this change will not affect the functionality of the test methods.

source/tests/pt/model/test_rot_denoise.py (1)

17-19: LGTM!

The import statement update is consistent with the relocation of eval_model to the tests directory. This change aligns with the PR objective and does not appear to affect the functionality of the tests.

source/tests/pt/model/test_smooth_denoise.py (1)

17-19: LGTM!

The import statement change is consistent with the PR objective of moving the eval_model function to the tests directory. The relative import from ..common correctly points to the new location of the function.

source/tests/pt/model/test_trans.py (1)

17-19: LGTM!

The change in import statement for the eval_model function is consistent with the relocation of the function from deepmd.pt.infer.deep_eval to ..common within the tests directory. This aligns with the PR objective of moving the function out of the public API and into the tests.

The relative import from ..common suggests that the function is now located in a shared location accessible by multiple test files, which promotes code reuse and maintainability within the tests directory.

source/tests/pt/model/test_null_input.py (1)

22-24: LGTM!

The import change for the eval_model function from deepmd.pt.infer.deep_eval to ..common is consistent with the AI-generated summary and does not introduce any issues. The restructuring of the import path suggests that the function has been moved to a different module without altering its functionality or usage within the code.

source/tests/pt/model/test_forward_lower.py (1)

20-22: LGTM!

The import location change for the eval_model function looks good. Consolidating it into the common module improves code organization and maintainability.

source/tests/pt/model/test_rot.py (1)

17-19: Verify all usages of eval_model have been updated.

Moving eval_model to a common module is a good refactoring decision to improve code organization and reusability.

Please ensure that all usages of eval_model have been updated to import from the new location (..common). You can use the following script to verify this:

Verification successful

Approve the import change, but flag remaining old imports.

The import change in source/tests/pt/model/test_rot.py is correct and consistent with other test files. The eval_model function is now being imported from ..common, which aligns with the refactoring decision to improve code organization.

However, there are still some files using the old import statement from deepmd.pt.infer.deep_eval import. These files should be updated to use the new import location:

  • ./deepmd/backend/pytorch.py
  • ./source/tests/pt/model/test_deeppot.py
  • ./source/tests/pt/test_init_frz_model.py
  • ./source/tests/pt/test_init_model.py

Please ensure these remaining files are updated to use the new import statement for consistency across the codebase.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all usages of `eval_model` import from the new location.

# Test: Search for usages still importing from the old location. Expect: No results.
rg --type python -w $'from deepmd.pt.infer.deep_eval import eval_model'

# Test: Search for usages importing from the new location. Expect: All relevant files.
rg --type python -w $'from ..common import eval_model'

Length of output: 190


Script:

#!/bin/bash
# Description: Verify all usages of `eval_model` import using grep

# Search for usages still importing from the old location
echo "Searching for old import statements:"
grep -rn "from deepmd.pt.infer.deep_eval import" . --include="*.py" -A 2

echo -e "\nSearching for new import statements:"
# Search for usages importing from the new location
grep -rn "from ..common import" . --include="*.py" -A 2

echo -e "\nSearching for eval_model usage:"
# Search for eval_model usage to catch any missed imports
grep -rn "eval_model" . --include="*.py" -A 2

Length of output: 19080

source/tests/pt/model/test_autodiff.py (2)

Line range hint 27-34: LGTM!

The addition of the model-related imports enhances the testing coverage by incorporating different model configurations. This change does not introduce any compatibility issues and is a positive step towards comprehensive testing.


24-26: Verify the usage of eval_model across the codebase.

The change in the import path for eval_model aligns with the PR objective of relocating the function to the tests directory. Please ensure that all other occurrences of eval_model in the codebase have been updated to use the new import path.

Run the following script to verify the usage of eval_model:

Verification successful

Import change for eval_model is correctly implemented and consistent.

The eval_model function is now properly imported from the ..common module in the file source/tests/pt/model/test_autodiff.py. This change is consistent with other test files in the same directory. The relocation of eval_model to the tests directory has been successfully implemented, and there are no remaining old import paths found in the codebase.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all imports of `eval_model` match the new path.

# Test: Search for the function import. Expect: Only occurrences of the new import path.
rg --type python -w $'from \.\.(common|tests\.common) import eval_model'

Length of output: 104


Script:

#!/bin/bash
# Description: Verify all imports of `eval_model` in Python files

# Find all Python files
echo "Python files in the repository:"
fd '\.py$'

echo "\nImports of eval_model:"
# Search for any import of eval_model
rg 'from .* import .*eval_model' --type py

echo "\nSpecific import from ..common:"
# Search for the specific new import path
rg 'from \.\.(common|tests\.common) import .*eval_model' --type py

echo "\nPotential old import paths:"
# Search for potential old import paths (assuming it might have been in a 'utils' or 'helpers' module)
rg 'from .*(utils|helpers).* import .*eval_model' --type py

Length of output: 25334


Script:

#!/bin/bash
# Description: Search for any occurrence of eval_model and check the reviewed file

echo "Searching for any occurrence of eval_model:"
rg "eval_model" --type py

echo "\nChecking the content of the file under review:"
cat source/tests/pt/model/test_autodiff.py

Length of output: 11841

source/tests/pt/model/test_smooth.py (1)

17-19: LGTM!

The change is consistent with the refactoring effort to move the eval_model function to the tests directory. It improves the structure and maintainability of the codebase without affecting the functionality of the code.

source/tests/pt/model/test_permutation.py (1)

18-20: LGTM!

The updated import statements for eval_model align with the relocation of the function to source/tests/pt/common.py. This change is consistent with the PR objective.


codecov bot commented Sep 20, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 83.41%. Comparing base (c084b20) to head (8c8553f).
Report is 4 commits behind head on devel.

Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4153      +/-   ##
==========================================
+ Coverage   83.37%   83.41%   +0.04%     
==========================================
  Files         532      532              
  Lines       52166    52044     -122     
  Branches     3046     3046              
==========================================
- Hits        43493    43413      -80     
+ Misses       7726     7684      -42     
  Partials      947      947              
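As a quick sanity check on the arithmetic in the diff above (line counts only; the headline percentage Codecov reports can differ in the last digit due to rounding or branch accounting):

```python
# Figures copied from the Codecov table above.
base_hits, base_lines = 43493, 52166   # devel
head_hits, head_lines = 43413, 52044   # this PR

base_cov = 100 * base_hits / base_lines
head_cov = 100 * head_hits / head_lines

# The 122 removed lines were only about 66% covered (80 hits, 42 misses),
# below the project average of ~83.4%, so deleting them nudges overall
# coverage up by roughly 0.04 percentage points.
print(f"base {base_cov:.2f}%, delta {head_cov - base_cov:+.2f}%")
```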


Co-authored-by: Han Wang <92130845+wanghan-iapcm@users.noreply.github.com>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
@njzjz njzjz added this pull request to the merge queue Sep 21, 2024
Merged via the queue into deepmodeling:devel with commit 6010c73 Sep 21, 2024
60 checks passed
@njzjz njzjz deleted the move-eval-model branch September 21, 2024 17:07