
https://github.com/tloen/alpaca-lora/pull/396 #13

Open

juntao opened this issue May 7, 2023 · 1 comment


juntao commented May 7, 2023

No description provided.


juntao commented May 7, 2023

Hello, I am a code review bot on flows.network. Here are my reviews of changed source code files in this PR.


finetune.py

Overall, the code looks good; I have just a few minor suggestions for improvement.

  1. Add docstrings to functions and classes to improve readability. This will make it easier for other developers to understand each function's purpose and input parameters. For example:
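
    A minimal sketch (this signature is hypothetical, not copied from finetune.py):

    def train(base_model: str = "", data_path: str = "", output_dir: str = "./lora-alpaca"):
        """Fine-tune a LLaMA base model with a LoRA adapter.

        Args:
            base_model: Hugging Face model id or local path of the base model.
            data_path: Path to the instruction-tuning dataset.
            output_dir: Directory where the trained adapter weights are saved.
        """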

  2. Constants such as 0 for the padding token id and the model file names pytorch_model.bin and adapter_model.bin can be defined as global constants at the beginning of the script. This will make them easier to manage and modify. For example:
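
    # Defined once near the top of finetune.py (constant names are illustrative)
    PAD_TOKEN_ID = 0
    PYTORCH_MODEL_NAME = "pytorch_model.bin"
    ADAPTER_MODEL_NAME = "adapter_model.bin"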

  3. It's better to give resume_from_checkpoint a default value of an empty string or None. You can then check whether it was provided with a single truthiness test instead of a more complex condition. For example:
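
    A sketch, assuming resume_from_checkpoint holds the path to a checkpoint directory:

    import os

    def train(resume_from_checkpoint: str = "", **kwargs):
        if resume_from_checkpoint:
            # A non-empty string means the caller supplied a checkpoint path.
            checkpoint_name = os.path.join(resume_from_checkpoint, "pytorch_model.bin")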

  4. The train function can be refactored into smaller functions for better organization and code readability. For example, you could create separate functions for model and tokenizer initialization, data loading, and training configuration.
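
    One possible decomposition (function names here are illustrative, not from the PR):

    def load_model_and_tokenizer(base_model):
        ...  # build the LlamaForCausalLM model and its tokenizer

    def load_training_data(data_path, tokenizer):
        ...  # read and tokenize the dataset

    def build_trainer(model, train_data, output_dir):
        ...  # assemble TrainingArguments and the Trainer

    def train(base_model, data_path, output_dir):
        model, tokenizer = load_model_and_tokenizer(base_model)
        train_data = load_training_data(data_path, tokenizer)
        trainer = build_trainer(model, train_data, output_dir)
        trainer.train()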

Other than these suggestions, the code looks well-structured and should work as intended.

generate.py

Here are some potential issues and suggestions for improvement in the provided source code:

  1. Instead of using sys.platform != "win32" in the if condition on line 62, I'd suggest using platform.system() != "Windows" from the platform module, which is the recommended way to determine the underlying operating system.

    You'll need to import platform at the beginning of your code:

    import platform
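
    The check then becomes:

    if platform.system() != "Windows":
        ...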
  2. Consider adding docstrings for functions main, evaluate, and generate_prompt to provide explanations for their respective functionalities.

  3. The try block on lines 19-21 might cause silent failures since you're using a generic except without specifying an exception type. It is a good idea to specify the expected exception type for better error handling. For example:

    try:
        if torch.backends.mps.is_available():
            device = "mps"
    except AttributeError:
        # Older torch builds do not expose torch.backends.mps at all.
        pass
  4. Use more descriptive variable names for better code readability. For instance, change s to generated_sequence on line 98.

  5. The code for creating a PeftModel has duplicate lines in each condition block based on the device. To reduce redundancy, you can wrap the entire process in a function and call it with different arguments depending on the device. Here's what the function might look like:

    def create_peft_model(device, base_model, lora_weights, torch_dtype, low_cpu_mem_usage=None, load_in_8bit=None):
        # Load the base LLaMA weights onto the target device.
        model = LlamaForCausalLM.from_pretrained(
            base_model,
            device_map={"": device},
            torch_dtype=torch_dtype,
            low_cpu_mem_usage=low_cpu_mem_usage,
            load_in_8bit=load_in_8bit,
        )
        # Wrap the base model with the LoRA adapter weights.
        return PeftModel.from_pretrained(
            model,
            lora_weights,
            device_map={"": device},
            torch_dtype=torch_dtype,
        )

    Then use this function in the main function to create the model:

    if device == "cuda":
        model = create_peft_model(device, base_model, lora_weights, torch.float16, load_in_8bit=True)
    elif device == "mps":
        model = create_peft_model(device, base_model, lora_weights, torch.float16)
    else:
        model = create_peft_model(device, base_model, lora_weights, torch_dtype=None, low_cpu_mem_usage=True)

These are the major issues and suggestions that I see in the provided code. Implementing these changes should improve its readability, maintainability, and error handling.

This patch modifies the assert statement within the main function. The change is purely cosmetic, and the purpose of the statement remains the same: to ensure that a base_model value is provided when calling the function.

Before the patch, only the assertion's error message was wrapped in parentheses on separate lines. After the patch, the condition itself is parenthesized so the statement can span multiple lines, keeping the condition and the error message together, which makes the code slightly more readable. (Note that the parentheses wrap the condition alone; assert (condition, message) would create a two-element tuple that is always truthy and silently disable the check.)
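
A sketch of the two styles (the message text is illustrative, not copied from the patch):

    # Before: only the message is parenthesized across lines
    assert base_model, (
        "Please specify a --base_model"
    )

    # After: the condition is parenthesized, with the message on the closing line
    assert (
        base_model
    ), "Please specify a --base_model"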

cc tloen/alpaca-lora#396
