Support Deepseek-V2 #4650

Open · wants to merge 11 commits into base: main
Conversation

zwd003
Contributor

@zwd003 commented May 7, 2024

Description:

This PR introduces support for the recently released DeepSeek-V2 model by DeepSeek-AI.

Key Updates:

  • Model Integration: Integrated the DeepSeek-V2 model developed by the DeepSeek-AI team, bringing its advanced natural language processing capabilities to vLLM.

Related Resources:

Todo:

  • Efficient Inference Mode: Implement the efficient inference mode described in the paper.

We look forward to community feedback and suggestions to help us improve and refine the DeepSeek-V2 integration and its inference implementation.

Testing

from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "User: The future of AI is? Assistant:"
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.0, top_p=1, max_tokens=32)

# Create an LLM.
llm = LLM(model="deepseek-ai/DeepSeek-V2-Chat",
          tensor_parallel_size=8,
          max_num_seqs=1,
          max_model_len=1024,
          trust_remote_code=True,
          enforce_eager=True)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
Prompt: 'User: The future of AI is? Assistant:', Generated text: ' The future of AI, or Artificial Intelligence, is a topic of much speculation and debate. AI has the potential to revolutionize many aspects of our lives, from'

Note: Currently, only the inference method using the Multi-Head Attention (MHA) approach has been implemented, and the efficient inference mode mentioned in the paper has not yet been realized.

@guanjingyu

ERROR 05-08 20:22:08 worker_base.py:145] ValueError: Model architectures ['DeepseekV2ForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LlavaForConditionalGeneration', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'MiniCPMForCausalLM', 'OlmoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'XverseForCausalLM']

@guanjingyu

It seems the model architecture is not supported in vLLM.
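
For anyone hitting this: the error means the installed vLLM build predates this PR. A quick sanity check, sketched under the assumption that vLLM's ModelRegistry helper is available:

from vllm.model_executor.models import ModelRegistry

# "DeepseekV2ForCausalLM" only shows up once a build containing this PR's
# deepseek_v2 model file is installed.
print("DeepseekV2ForCausalLM" in ModelRegistry.get_supported_archs())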

@rkooo567
Collaborator

rkooo567 commented May 8, 2024

Currently, only the inference method using the Multi-Head Attention (MHA) approach has been implemented, and the efficient inference mode mentioned in the paper has not yet been realized.

What's the reason it is not supported in this PR?

@HappyLynn

Hi, with only MHA, is it possible to reach max_model_len = 128k? In my test, only about 12k was possible.

@zhyncs

zhyncs commented May 10, 2024

What's the reason it is not supported in this PR?

The internal inference implementation supports MLA. The vLLM implementation here is more about adding support quickly and matching the model parameters to the code, so its efficiency for LLM serving is not yet high. I think the current PR could be reviewed and merged ASAP; the community can consider implementing an MLA-integrated version afterwards.
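
To make the efficiency gap concrete, here is a rough back-of-envelope estimate (not from the thread) of per-token KV-cache size, assuming the published DeepSeek-V2 config (60 layers, 128 heads, kv_lora_rank=512, qk_rope_head_dim=64) and the 256-wide padded head size used by this PR's MHA path:

layers, heads, padded_head_dim = 60, 128, 256
kv_lora_rank, rope_dim, bytes_per_elem = 512, 64, 2  # bf16

# MHA-style cache: full per-head K and V for every layer.
mha_bytes_per_token = layers * 2 * heads * padded_head_dim * bytes_per_elem
# MLA-style cache: compressed latent plus the decoupled RoPE key per layer.
mla_bytes_per_token = layers * (kv_lora_rank + rope_dim) * bytes_per_elem

print(f"MHA path: {mha_bytes_per_token / 2**20:.1f} MiB per token")  # ~7.5 MiB
print(f"MLA path: {mla_bytes_per_token / 2**10:.1f} KiB per token")  # ~67.5 KiB
print(f"Ratio: ~{mha_bytes_per_token / mla_bytes_per_token:.0f}x")   # ~114x

At several MiB per token, the KV cache quickly exhausts whatever memory is left after loading the 236B-parameter weights, which is consistent with the ~12k context limit reported above.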

@zhyncs

zhyncs commented May 10, 2024

Hi @zwd003, could you merge the latest main branch and fix the conflicts? Thanks.

@yang-ji-coder

May I ask whether support for MLA is currently under development?

@zwd003 reopened this May 11, 2024
@zwd003
Contributor Author

zwd003 commented May 11, 2024

Hi @zwd003, could you merge the latest main branch and fix the conflicts? Thanks.

ok

@lyl0404

lyl0404 commented May 13, 2024

Hi @zwd003, this error occurred during the deployment process. How can I solve it? Thanks!

(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] File "/opt/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] TypeError: fused_moe() got an unexpected keyword argument 'num_expert_group'

@haiasd

haiasd commented May 13, 2024

Hi @zwd003, this error occurred during the deployment process. How can I solve it? Thanks!

(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] File "/opt/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] TypeError: fused_moe() got an unexpected keyword argument 'num_expert_group'

I encountered the same error

@haiasd

haiasd commented May 13, 2024

Hi @zwd003, this error occurred during the deployment process. How can I solve it? Thanks!

(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] File "/opt/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] TypeError: fused_moe() got an unexpected keyword argument 'num_expert_group'

git checkout 5688e58ca2797a34bd56e75c045d41be6aca1e2b solved this problem

@lyl0404

lyl0404 commented May 13, 2024

Hi @zwd003, this error occurred during the deployment process. How can I solve it? Thanks!
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] File "/opt/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=52311) ERROR 05-11 18:04:33 worker_base.py:145] TypeError: fused_moe() got an unexpected keyword argument 'num_expert_group'

git checkout 5688e58ca2797a34bd56e75c045d41be6aca1e2b solved this problem

Thanks! :D

@zhangyu68

Hi @zwd003, could you merge the latest main branch and fix the conflicts? Thanks.

ok

Hello, I encountered this error when the QPS was increased to 2.

[' 根据指令"周日晚上",我们将按照步骤进行处理:\n\n1. 选择']
INFO:werkzeug:172.16.178.41 - - [13/May/2024 12:31:52] "POST /get_data HTTP/1.1" 200 -
Processed prompts:   0%|                                                                                                                                                                            | 0/1 [00:00<?, ?it/s](RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] Error executing method execute_model. This might cause deadlock in distributed execution.                                                        | 0/2 [00:00<?, ?it/s]
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] Traceback (most recent call last):
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/[email protected]/code/vllm/vllm/worker/worker_base.py", line 137, in execute_method
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return executor(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/[email protected]/code/vllm/vllm/worker/worker.py", line 249, in execute_model
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     output = self.model_runner.execute_model(seq_group_metadata_list,
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/[email protected]/code/vllm/vllm/worker/model_runner.py", line 787, in execute_model
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     ) = self.prepare_input_tensors(seq_group_metadata_list)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/[email protected]/code/vllm/vllm/worker/model_runner.py", line 729, in prepare_input_tensors
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     input_tokens = metadata_dict.pop("input_tokens")
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] KeyError: 'input_tokens'
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] Error executing method execute_model. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] Traceback (most recent call last):
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/[email protected]/code/vllm/vllm/worker/worker_base.py", line 137, in execute_method
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return executor(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/[email protected]/code/vllm/vllm/worker/worker.py", line 237, in execute_model
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     data = broadcast_tensor_dict(src=0)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/workspace/[email protected]/code/vllm/vllm/distributed/communication_op.py", line 216, in broadcast_tensor_dict
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     torch.distributed.broadcast_object_list(recv_metadata_list,
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 2674, in broadcast_object_list
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     object_list[i] = _tensor_to_object(obj_view, obj_size, group)
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 2362, in _tensor_to_object
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145]     return _unpickler(io.BytesIO(buf)).load()
(RayWorkerWrapper pid=1539303) ERROR 05-13 12:31:53 worker_base.py:145] _pickle.UnpicklingError: invalid load key, '\xea'.
(RayWorkerWrapper pid=1542773) INFO 05-13 12:26:25 model_runner.py:175] Loading model weights took 56.1087 GB [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Connected all trees [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512 [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Using non-device net plugin version 0 [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO comm 0x55f8f5a608b0 rank 7 nranks 8 cudaDev 7 nvmlDev 7 busId b3000 commId 0x7b5f29ff7a9fb9f5 - Init START [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO NVLS multicast support is not available on dev 7 [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO comm 0x55f8f5a608b0 rank 7 nRanks 8 nNodes 1 localRanks 8 localRank 7 MNNVL 0 [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO 16 coll channels, 0 collnet channels, 0 nvls channels, 16 p2p channels, 16 p2p channels per peer [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO comm 0x55f8f5a608b0 rank 7 nranks 8 cudaDev 7 nvmlDev 7 busId b3000 commId 0x7b5f29ff7a9fb9f5 - Init COMPLETE [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2076947 [7] NCCL INFO Channel 15/1 : 7[7] -> 0[0] via P2P/CUMEM/read [repeated 336x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Connected all rings [repeated 7x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Using network IB [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO bootstrapSplit: comm 0x55f8f5a608b0 parent 0x55f8e5006f90 rank 7 nranks 8 color -934961569 key 7 prev 6 next 0 - DONE [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Setting affinity for GPU 7 to ffffffff,00000000,ffffffff,00000000 [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO Trees [0] -1/-1/-1->7->6 [1] -1/-1/-1->7->6 [2] -1/-1/-1->7->6 [3] -1/-1/-1->7->6 [4] -1/-1/-1->7->6 [5] -1/-1/-1->7->6 [6] -1/-1/-1->7->6 [7] -1/-1/-1->7->6 [8] -1/-1/-1->7->6 [9] -1/-1/-1->7->6 [10] -1/-1/-1->7->6 [11] -1/-1/-1->7->6 [12] -1/-1/-1->7->6 [13] -1/-1/-1->7->6 [14] -1/-1/-1->7->6 [15] -1/-1/-1->7->6 [repeated 6x across cluster]
(RayWorkerWrapper pid=1542773) cnwla-a800-p01009:1542773:2075575 [7] NCCL INFO P2P Chunksize set to 524288 [repeated 6x across cluster]

@ftgreat
Contributor

ftgreat commented May 14, 2024

Could you point me to the lines that handle KV compression? Thanks.
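
For reference, a hedged sketch (not the PR's code) of the low-rank KV compression being asked about; the shapes follow the published DeepSeek-V2 config and are treated as assumptions here:

import torch

hidden_size, kv_lora_rank, rope_dim = 5120, 512, 64
heads, qk_nope_dim, v_dim = 128, 128, 128

x = torch.randn(4, hidden_size)                                   # 4 tokens
w_down = torch.randn(hidden_size, kv_lora_rank + rope_dim)        # down-projection
w_up = torch.randn(kv_lora_rank, heads * (qk_nope_dim + v_dim))   # up-projection

compressed = x @ w_down                                           # what MLA would cache
latent, k_rope = compressed.split([kv_lora_rank, rope_dim], dim=-1)
kv = (latent @ w_up).view(4, heads, qk_nope_dim + v_dim)          # expanded per-head K/V
# The current MHA-only path effectively caches tensors of the expanded shape,
# while MLA would cache only `compressed`.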

@fxgeoffrey

The following error is reported when loading the model:

Cache shape torch.Size([163840, 64]) [repeated 6x across cluster]
INFO 05-14 22:41:26 model_runner.py:166] Loading model weights took 56.1087 GB
/tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint64_array’:
/tmp/tmpw9q1ie7x/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
for (Py_ssize_t i = 0; i < len; i++) {
^
/tmp/tmpw9q1ie7x/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code
/tmp/tmpw9q1ie7x/main.c: In function ‘list_to_cuuint32_array’:
/tmp/tmpw9q1ie7x/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
for (Py_ssize_t i = 0; i < len; i++) {
^
ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last):
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method
ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks
ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run
ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model
ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward
ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward
ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward
ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe
ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts
ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states,
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid](
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in
ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run
ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr
ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj
ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver
ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init
ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils()
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init
ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build
ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd)
ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call
ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd)
ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpw9q1ie7x/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpw9q1ie7x', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpw9q1ie7x/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
python-BaseException
Traceback (most recent call last):
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 146, in execute_method
raise e
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method
return executor(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks
self.model_runner.profile_run()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run
self.execute_model(seqs, kv_caches)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model
hidden_states = model_executable(**execute_model_kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward
hidden_states = self.model(input_ids, positions, kv_caches,
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward
hidden_states, residual = layer(positions, hidden_states,
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward
hidden_states = self.mlp(hidden_states)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
final_hidden_states = fused_moe(hidden_states,
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe
return fused_experts(hidden_states,
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts
invoke_fused_moe_kernel(hidden_states,
File "/home/hadoop-mtai/dolphinfs_hdd_hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
fused_moe_kernel[grid](
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run
device = driver.get_current_device()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr
self._initialize_obj()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj
self._obj = self._init_fn()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver
return CudaDriver()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init
self.utils = CudaUtils()
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init
so = _build("cuda_utils", src_path, tmpdir)
File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build
ret = subprocess.check_call(cc_cmd)
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpw9q1ie7x/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpw9q1ie7x', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpw9q1ie7x/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last):
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 429, in forward
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 400, in forward
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 362, in forward
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states,
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid](
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 102, in init
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils()
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd)
(RayWorkerWrapper pid=65639) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmps4n0c8gr/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmps4n0c8gr', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmps4n0c8gr/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
(RayWorkerWrapper pid=66371) INFO 05-14 22:41:25 model_runner.py:166] Loading model weights took 56.1087 GB [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpezsumgls/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmpezsumgls', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpezsumgls/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1.
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c: In function ‘list_to_cuuint64_array’:
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
(RayWorkerWrapper pid=65639) for (Py_ssize_t i = 0; i < len; i++) {
(RayWorkerWrapper pid=65639) ^
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c: In function ‘list_to_cuuint32_array’:
(RayWorkerWrapper pid=65639) /tmp/tmps4n0c8gr/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
(RayWorkerWrapper pid=65639) for (Py_ssize_t i = 0; i < len; i++) {
(RayWorkerWrapper pid=65639) ^
(RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c: In function ‘list_to_cuuint64_array’:
(RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c: In function ‘list_to_cuuint32_array’:
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] Traceback (most recent call last): [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker_base.py", line 137, in execute_method [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return executor(*args, **kwargs) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context [repeated 18x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return func(*args, **kwargs) [repeated 18x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/worker.py", line 141, in determine_num_available_blocks [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.model_runner.profile_run() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 873, in profile_run [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.execute_model(seqs, kv_caches) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/worker/model_runner.py", line 792, in execute_model [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return self._call_impl(*args, **kwargs) [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return forward_call(*args, **kwargs) [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/models/deepseek_v2.py", line 156, in forward [repeated 24x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] hidden_states = self.mlp(hidden_states) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] final_hidden_states = fused_moe(hidden_states, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 529, in fused_moe [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return fused_experts(hidden_states, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 439, in fused_experts [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-mtai/users/fengxin09/vllm_n/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] fused_moe_kernel[grid]( [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/jit.py", line 363, in run [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] device = driver.get_current_device() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 209, in getattr [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self._initialize_obj() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self._obj = self._init_fn() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 239, in initialize_driver [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] return CudaDriver() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/runtime/driver.py", line 49, in init [repeated 12x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] self.utils = CudaUtils() [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] so = _build("cuda_utils", src_path, tmpdir) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/build.py", line 106, in _build [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] ret = subprocess.check_call(cc_cmd) [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 373, in check_call [repeated 6x across cluster]
(RayWorkerWrapper pid=66371) ERROR 05-14 22:41:31 worker_base.py:145] raise CalledProcessError(retcode, cmd) [repeated 6x across cluster]
(RayWorkerWrapper pid=66276) ERROR 05-14 22:41:31 worker_base.py:145] subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp4yg1ha_1/main.c', '-O3', '-I/home/hadoop-mtai/.local/lib/python3.9/site-packages/triton/common/../third_party/cuda/include', '-I/home/hadoop-mtai/.conda/envs/wow_vllm/include/python3.9', '-I/tmp/tmp4yg1ha_1', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmp4yg1ha_1/cuda_utils.cpython-39-x86_64-linux-gnu.so', '-L/lib64', '-L/lib64']' returned non-zero exit status 1. [repeated 5x across cluster]
(RayWorkerWrapper pid=66276) /tmp/tmp4yg1ha_1/main.c: In function ‘list_to_cuuint32_array’: [repeated 10x across cluster]
(RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode [repeated 12x across cluster]
(RayWorkerWrapper pid=66371) for (Py_ssize_t i = 0; i < len; i++) { [repeated 12x across cluster]
(RayWorkerWrapper pid=66371) ^ [repeated 12x across cluster]
(RayWorkerWrapper pid=66371) /tmp/tmpezsumgls/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code [repeated 6x across cluster]
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/site-packages/ray/_private/node.py", line 1443, in _kill_process_type
self._kill_process_impl(
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/site-packages/ray/_private/node.py", line 1499, in _kill_process_impl
process.wait(timeout_seconds)
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 1189, in wait
return self._wait(timeout=timeout)
File "/home/hadoop-mtai/.conda/envs/wow_vllm/lib/python3.9/subprocess.py", line 1927, in _wait
time.sleep(delay)
KeyboardInterrupt
[rank0]:[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]

Process finished with exit code 1

@ericg108

Any update? Looking forward to it.

vllm/config.py Outdated
@@ -250,6 +250,9 @@ def get_hidden_size(self) -> int:
        return self.hf_text_config.hidden_size

    def get_head_size(self) -> int:
        if hasattr(self.hf_text_config, "model_type") and self.hf_text_config.model_type=='deepseek_v2':
Collaborator

Can you add the head_dim to the huggingface config instead of hard coding this here?
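
A minimal sketch of what that could look like, assuming a head_dim attribute is added to the HF config (the attribute name is an assumption, not something the current DeepSeek-V2 config defines):

def get_head_size(self) -> int:
    # Prefer an explicit head_dim from the HF config when present.
    if hasattr(self.hf_text_config, "head_dim"):
        return self.hf_text_config.head_dim
    # Fallback for models that only expose hidden_size and num_attention_heads.
    return (self.hf_text_config.hidden_size //
            self.hf_text_config.num_attention_heads)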

@simon-mo mentioned this pull request May 21, 2024
        # TODO remove hard code
        if hasattr(self.hf_text_config, "model_type"
                   ) and self.hf_text_config.model_type == 'deepseek_v2':
            # FlashAttention supports only head_size 32, 64, 128, 256,
Collaborator

Is this true? According to get_supported_head_sizes() -> List[int], flash attention supports head size of 192 too. So I think you can remove this, right? And also the related padding code in deepseek_v2.py -- that should make it quite a bit simpler :)

Contributor Author

Thanks, I will test it later with the latest flash-attn.
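
A quick check, sketched under the assumption that the helper the reviewer links is exposed on the FlashAttention backend class (the exact module path may differ in this branch):

from vllm.attention.backends.flash_attn import FlashAttentionBackend

# If 192 is in the returned list, the 192-dim QK heads would not need padding to 256.
print(FlashAttentionBackend.get_supported_head_sizes())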

    return 0.1 * mscale * math.log(scale) + 1.0


class DeepseekScalingRotaryEmbedding(RotaryEmbedding):
Collaborator

This is extremely similar to YaRNScalingRotaryEmbedding, can you extend that one instead to support mscale?
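
For reference, the mscale factor in question comes from the helper quoted in the diff above; a small standalone sketch of its effect, with parameter values that are illustrative assumptions:

import math

def yarn_get_mscale(scale: float = 1.0, mscale: float = 1.0) -> float:
    # Same formula as the line quoted in the diff above.
    if scale <= 1:
        return 1.0
    return 0.1 * mscale * math.log(scale) + 1.0

# With a scaling factor of 40 and mscale of 0.707 (illustrative), outputs are
# rescaled by ~1.26, a factor the existing YaRNScalingRotaryEmbedding does not
# parameterize, hence the suggestion to extend it.
print(yarn_get_mscale(40.0, 0.707))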

@pcmoritz
Collaborator

@zwd003 I did the refactoring of the MoE code for you, can you look into the other comments I just added?

@obitoquilt

obitoquilt commented May 22, 2024

I used the GPTQ (int4) method to quantize the deepseek_v2 model. When I load the quantized model with vLLM, I get the error below:

vLLM parameters:

--dtype float16 --load-format safetensors --trust-remote-code --tensor-parallel-size 2 --enforce-eager --device cuda --max-model-len 1024

generated config.json

  "quantization_config": {
    "bits": 4,
    "damp_percent": 0.1,
    "desc_act": false,
    "group_size": 128,
    "modules_in_block_to_quantize": null,
    "quant_method": "gptq",
    "sym": true,
    "true_sequential": true
  }
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145] Traceback (most recent call last):
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 137, in execute_method
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return executor(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 139, in determine_num_available_blocks
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     self.model_runner.profile_run()
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 888, in profile_run
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     self.execute_model(seqs, kv_caches)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return func(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 808, in execute_model
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     hidden_states = model_executable(**execute_model_kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 429, in forward
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     hidden_states = self.model(input_ids, positions, kv_caches,
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 400, in forward
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     hidden_states, residual = layer(positions, hidden_states,
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 362, in forward
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     hidden_states = self.mlp(hidden_states)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 156, in forward
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     final_hidden_states = fused_moe(hidden_states,
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 357, in fused_moe
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145]     assert hidden_states.shape[1] == w1.shape[2], "Hidden size mismatch"
(RayWorkerWrapper pid=9166) ERROR 05-20 07:10:58 worker_base.py:145] AssertionError: Hidden size mismatch
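For context, here is a minimal sketch (an assumption drawn from the traceback, not vLLM's actual code path) of why this assertion fires: the fused MoE kernel expects unquantized expert weights whose last dimension equals the activation hidden size, while GPTQ stores packed integer weights with a different shape, so GPTQ-quantized experts are most likely simply not supported by fused_moe at this point.

# Illustrative sketch only; the sizes and the packing layout are assumptions.
import torch

hidden_size = 5120          # hypothetical hidden size
intermediate_size = 1536    # hypothetical per-expert intermediate size
num_experts = 160           # hypothetical number of routed experts
pack_factor = 8             # GPTQ packs eight 4-bit weights into one int32

hidden_states = torch.randn(4, hidden_size)

# Unquantized expert weights: [num_experts, 2 * intermediate, hidden] -> check passes.
w1_fp = torch.randn(num_experts, 2 * intermediate_size, hidden_size)
assert hidden_states.shape[1] == w1_fp.shape[2], "Hidden size mismatch"

# A GPTQ-packed tensor no longer has hidden_size as its last dimension
# (here assumed packed along that axis), so the same check raises the error above.
w1_q = torch.zeros(
    num_experts, 2 * intermediate_size, hidden_size // pack_factor, dtype=torch.int32
)
assert hidden_states.shape[1] == w1_q.shape[2], "Hidden size mismatch"  # raises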

@Lunatic-Solar
Copy link

I followed the file changes and updated my files, but when I run the command below:
python -m vllm.entrypoints.openai.api_server --model /root/DeepSeek-V2-Chat --trust-remote-code
the following error occurs:

INFO 05-22 06:30:51 llm_engine.py:103] Initializing an LLM engine (v0.4.2) with config: model='/root/DeepSeek-V2-Chat', speculative_config=None, tokenizer='/root/DeepSeek-V2-Chat', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=163840, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/root/DeepSeek-V2-Chat)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 05-22 06:30:52 selector.py:44] Using FlashAttention-2 backend.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/opt/conda/envs/deepseek/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/opt/conda/envs/deepseek/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]:     exec(code, run_globals)
[rank0]:   File "/root/vllm/vllm/entrypoints/openai/api_server.py", line 186, in <module>
[rank0]:     engine = AsyncLLMEngine.from_engine_args(
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 374, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 328, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 450, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/root/vllm/vllm/engine/llm_engine.py", line 163, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/root/vllm/vllm/executor/executor_base.py", line 41, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/root/vllm/vllm/executor/gpu_executor.py", line 24, in _init_executor
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/root/vllm/vllm/worker/worker.py", line 121, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/root/vllm/vllm/worker/model_runner.py", line 133, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/__init__.py", line 21, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/loader.py", line 227, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/loader.py", line 90, in _initialize_model
[rank0]:     return model_class(config=model_config.hf_config,
[rank0]: TypeError: DeepseekV2ForCausalLM.__init__() got an unexpected keyword argument 'cache_config'

How can I solve this?

@gree2
Copy link

gree2 commented May 22, 2024

(deepseek) ailearn@gpts:/data/sdd/models$ cd /data/sdd/models/ ; CUDA_VISIBLE_DEVICES=0,1,2,3 python -m vllm.entrypoints.openai.api_server --gpu-memory-utilization 0.99 --max-model-len 1024 --model DeepSeek-V2-Lite-Chat --enforce-eager --trust-remote-code --tensor-parallel-size 4 --host 0.0.0.0 --port 8008
2024-05-22 23:31:01,969 INFO worker.py:1749 -- Started a local Ray instance.
INFO 05-22 23:31:03 llm_engine.py:100] Initializing an LLM engine (v0.4.2) with config: model='DeepSeek-V2-Lite-Chat', speculative_config=None, tokenizer='DeepSeek-V2-Lite-Chat', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=1024, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=DeepSeek-V2-Lite-Chat)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
(RayWorkerWrapper pid=1195524) INFO 05-22 23:31:14 selector.py:81] Cannot use FlashAttention-2 backend because the vllm_flash_attn package is not found. pip install vllm-flash-attn for better performance.
(RayWorkerWrapper pid=1195524) INFO 05-22 23:31:14 selector.py:32] Using XFormers backend.
INFO 05-22 23:31:14 selector.py:81] Cannot use FlashAttention-2 backend because the vllm_flash_attn package is not found. pip install vllm-flash-attn for better performance.
INFO 05-22 23:31:14 selector.py:32] Using XFormers backend.
INFO 05-22 23:31:16 utils.py:638] Found nccl from library /home/ailearn/.config/vllm/nccl/cu11/libnccl.so.2.18.1
(RayWorkerWrapper pid=1195524) INFO 05-22 23:31:16 utils.py:638] Found nccl from library /home/ailearn/.config/vllm/nccl/cu11/libnccl.so.2.18.1
INFO 05-22 23:31:16 pynccl.py:65] vLLM is using nccl==2.18.1
(RayWorkerWrapper pid=1195524) INFO 05-22 23:31:16 pynccl.py:65] vLLM is using nccl==2.18.1
WARNING 05-22 23:31:16 custom_all_reduce.py:69] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(RayWorkerWrapper pid=1195524) WARNING 05-22 23:31:16 custom_all_reduce.py:69] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
Cache shape torch.Size([163840, 64])
(RayWorkerWrapper pid=1195524) Cache shape torch.Size([163840, 64])
INFO 05-22 23:31:21 model_runner.py:167] Loading model weights took 7.3840 GB
(RayWorkerWrapper pid=1195949) INFO 05-22 23:31:21 model_runner.py:167] Loading model weights took 7.3840 GB
(RayWorkerWrapper pid=1195949) INFO 05-22 23:31:14 selector.py:81] Cannot use FlashAttention-2 backend because the vllm_flash_attn package is not found. pip install vllm-flash-attn for better performance. [repeated 2x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(RayWorkerWrapper pid=1195949) INFO 05-22 23:31:14 selector.py:32] Using XFormers backend. [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) INFO 05-22 23:31:16 utils.py:638] Found nccl from library /home/ailearn/.config/vllm/nccl/cu11/libnccl.so.2.18.1 [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) INFO 05-22 23:31:16 pynccl.py:65] vLLM is using nccl==2.18.1 [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) WARNING 05-22 23:31:16 custom_all_reduce.py:69] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) Cache shape torch.Size([163840, 64]) [repeated 2x across cluster]
ERROR 05-22 23:31:23 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
ERROR 05-22 23:31:23 worker_base.py:145] Traceback (most recent call last):
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker_base.py", line 137, in execute_method
ERROR 05-22 23:31:23 worker_base.py:145] return executor(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker.py", line 138, in determine_num_available_blocks
ERROR 05-22 23:31:23 worker_base.py:145] self.model_runner.profile_run()
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 875, in profile_run
ERROR 05-22 23:31:23 worker_base.py:145] self.execute_model(seqs, kv_caches)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 793, in execute_model
ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 470, in forward
ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches,
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 441, in forward
ERROR 05-22 23:31:23 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states,
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 401, in forward
ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.mlp(hidden_states)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 163, in forward
ERROR 05-22 23:31:23 worker_base.py:145] final_hidden_states = fused_experts(
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 455, in fused_experts
ERROR 05-22 23:31:23 worker_base.py:145] invoke_fused_moe_kernel(hidden_states,
ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
ERROR 05-22 23:31:23 worker_base.py:145] fused_moe_kernel[grid](
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in
ERROR 05-22 23:31:23 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 425, in run
ERROR 05-22 23:31:23 worker_base.py:145] kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas, # number of warps/ctas per instance
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 255, in getattribute
ERROR 05-22 23:31:23 worker_base.py:145] self._init_handles()
ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 250, in _init_handles
ERROR 05-22 23:31:23 worker_base.py:145] self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary(
ERROR 05-22 23:31:23 worker_base.py:145] RuntimeError: Triton Error [CUDA]: device kernel image is invalid
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]: return _run_code(code, main_globals, None,
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]: exec(code, run_globals)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/entrypoints/openai/api_server.py", line 169, in
[rank0]: engine = AsyncLLMEngine.from_engine_args(
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/engine/async_llm_engine.py", line 366, in from_engine_args
[rank0]: engine = cls(
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/engine/async_llm_engine.py", line 324, in init
[rank0]: self.engine = self._init_engine(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/engine/async_llm_engine.py", line 442, in _init_engine
[rank0]: return engine_class(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/engine/llm_engine.py", line 172, in init
[rank0]: self._initialize_kv_caches()
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/engine/llm_engine.py", line 249, in _initialize_kv_caches
[rank0]: self.model_executor.determine_num_available_blocks())
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/executor/distributed_gpu_executor.py", line 27, in determine_num_available_blocks
[rank0]: num_blocks = self._run_workers("determine_num_available_blocks", )
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/executor/ray_gpu_executor.py", line 234, in _run_workers
[rank0]: driver_worker_output = self.driver_worker.execute_method(
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker_base.py", line 146, in execute_method
[rank0]: raise e
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker_base.py", line 137, in execute_method
[rank0]: return executor(*args, **kwargs)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker.py", line 138, in determine_num_available_blocks
[rank0]: self.model_runner.profile_run()
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 875, in profile_run
[rank0]: self.execute_model(seqs, kv_caches)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 793, in execute_model
[rank0]: hidden_states = model_executable(**execute_model_kwargs)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 470, in forward
[rank0]: hidden_states = self.model(input_ids, positions, kv_caches,
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 441, in forward
[rank0]: hidden_states, residual = layer(positions, hidden_states,
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 401, in forward
[rank0]: hidden_states = self.mlp(hidden_states)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 163, in forward
[rank0]: final_hidden_states = fused_experts(
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 455, in fused_experts
[rank0]: invoke_fused_moe_kernel(hidden_states,
[rank0]: File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
[rank0]: fused_moe_kernel[grid](
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in
[rank0]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 425, in run
[rank0]: kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas, # number of warps/ctas per instance
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 255, in getattribute
[rank0]: self._init_handles()
[rank0]: File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 250, in _init_handles
[rank0]: self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary(
[rank0]: RuntimeError: Triton Error [CUDA]: device kernel image is invalid
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] Traceback (most recent call last):
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker_base.py", line 137, in execute_method
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return executor(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker.py", line 138, in determine_num_available_blocks
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] self.model_runner.profile_run()
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 875, in profile_run
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] self.execute_model(seqs, kv_caches)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 793, in execute_model
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 470, in forward
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches,
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 441, in forward
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states,
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 401, in forward
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.mlp(hidden_states)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 163, in forward
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] final_hidden_states = fused_experts(
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 455, in fused_experts
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] invoke_fused_moe_kernel(hidden_states,
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] fused_moe_kernel[grid](
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 425, in run
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas, # number of warps/ctas per instance
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 255, in getattribute
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] self._init_handles()
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 250, in _init_handles
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary(
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] RuntimeError: Triton Error [CUDA]: device kernel image is invalid
(RayWorkerWrapper pid=1195800) INFO 05-22 23:31:22 model_runner.py:167] Loading model weights took 7.3840 GB [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] Traceback (most recent call last): [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker_base.py", line 137, in execute_method [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return executor(*args, **kwargs) [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context [repeated 6x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs) [repeated 6x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker.py", line 138, in determine_num_available_blocks [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] self.model_runner.profile_run() [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 875, in profile_run [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] self.execute_model(seqs, kv_caches) [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 793, in execute_model [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl [repeated 8x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs) [repeated 8x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl [repeated 8x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs) [repeated 8x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 163, in forward [repeated 8x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.mlp(hidden_states) [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] final_hidden_states = fused_experts( [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 455, in fused_experts [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] fused_moe_kernel[grid]( [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 425, in run [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas, # number of warps/ctas per instance [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 255, in getattribute [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] self._init_handles() [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 250, in _init_handles [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary( [repeated 2x across cluster]
(RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] RuntimeError: Triton Error [CUDA]: device kernel image is invalid [repeated 2x across cluster]
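For what it's worth, "Triton Error [CUDA]: device kernel image is invalid" usually indicates a mismatch between the toolchain the kernels were compiled with and the GPU/driver actually running them (for example, a cu11 build against an incompatible driver or compute capability). A small diagnostic sketch (not an official vLLM tool) to print the versions involved:

# Diagnostic sketch only; prints the components involved in the Triton kernel launch.
import torch
import triton

print("torch:", torch.__version__, "built for CUDA:", torch.version.cuda)
print("triton:", triton.__version__)
print("device:", torch.cuda.get_device_name(0))
print("compute capability:", torch.cuda.get_device_capability(0))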

@seungduk-yanolja
Copy link

seungduk-yanolja commented May 27, 2024

I followed the file changes and updated my files, but when I run the command below: python -m vllm.entrypoints.openai.api_server --model /root/DeepSeek-V2-Chat --trust-remote-code the following error occurs:

INFO 05-22 06:30:51 llm_engine.py:103] Initializing an LLM engine (v0.4.2) with config: model='/root/DeepSeek-V2-Chat', speculative_config=None, tokenizer='/root/DeepSeek-V2-Chat', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=163840, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/root/DeepSeek-V2-Chat)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 05-22 06:30:52 selector.py:44] Using FlashAttention-2 backend.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/opt/conda/envs/deepseek/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/opt/conda/envs/deepseek/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]:     exec(code, run_globals)
[rank0]:   File "/root/vllm/vllm/entrypoints/openai/api_server.py", line 186, in <module>
[rank0]:     engine = AsyncLLMEngine.from_engine_args(
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 374, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 328, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 450, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/root/vllm/vllm/engine/llm_engine.py", line 163, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/root/vllm/vllm/executor/executor_base.py", line 41, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/root/vllm/vllm/executor/gpu_executor.py", line 24, in _init_executor
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/root/vllm/vllm/worker/worker.py", line 121, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/root/vllm/vllm/worker/model_runner.py", line 133, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/__init__.py", line 21, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/loader.py", line 227, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/loader.py", line 90, in _initialize_model
[rank0]:     return model_class(config=model_config.hf_config,
[rank0]: TypeError: DeepseekV2ForCausalLM.__init__() got an unexpected keyword argument 'cache_config'

How can I solve this?

You can see that cache_config was recently added in this commit:
seungduk-yanolja@0fca3cd
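For anyone patching locally, here is a minimal sketch of what that change implies (an assumption based on the traceback and the commit above, not the exact fix): the model constructor needs to accept the new cache_config keyword that recent vLLM versions pass to every model and thread it through to the attention layers.

# Sketch only: the body is elided, the point is the constructor signature.
from typing import Optional

import torch.nn as nn

from vllm.config import CacheConfig


class DeepseekV2ForCausalLM(nn.Module):
    def __init__(
        self,
        config,                                      # HF DeepseekV2 config object
        cache_config: Optional[CacheConfig] = None,  # newly passed in by the loader
        quant_config=None,                           # quantization config, unchanged
    ) -> None:
        super().__init__()
        self.config = config
        # ... build DeepseekV2Model(config, cache_config, quant_config) here and
        # pass cache_config down to each attention layer, mirroring other models.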

@seungduk-yanolja
Copy link

seungduk-yanolja commented May 27, 2024

I made a fix with recent changes on vLLM.
https://github.com/seungduk-yanolja/vllm-deepseek

Assuming you have an 8xH100 machine, to run,

python -m vllm.entrypoints.openai.api_server --model deepseek-ai/DeepSeek-V2-Chat -tp 8 --served-model-name deepseek --trust-remote-code --max-model-len=2800 --enforce-eager
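Once the server is up, a small client sketch for a quick smoke test (this assumes the default port 8000 and the served model name "deepseek" from the command above):

# Smoke test against the OpenAI-compatible endpoint started above.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "deepseek",
        "messages": [{"role": "user", "content": "The future of AI is?"}],
        "max_tokens": 32,
        "temperature": 0.0,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])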

@gtpgg1013
Copy link

I made a fix with recent changes on vLLM. https://github.com/seungduk-yanolja/vllm-deepseek

Assuming you have an 8xH100 machine, to run,

python -m vllm.entrypoints.openai.api_server --model deepseek-ai/DeepSeek-V2-Chat -tp 8 --served-model-name deepseek --trust-remote-code --max-model-len=2800 --enforce-eager

Thank you for your help. Can I use this setup with 8xA100 GPUs?

@seungduk-yanolja
Copy link

I made a fix with recent changes on vLLM. https://github.com/seungduk-yanolja/vllm-deepseek
Assuming you have an 8xH100 machine, to run,

python -m vllm.entrypoints.openai.api_server --model deepseek-ai/DeepSeek-V2-Chat -tp 8 --served-model-name deepseek --trust-remote-code --max-model-len=2800 --enforce-eager

Thank you for your help. Can I use this setup with 8xA100 GPUs?

I think so. If you run short of memory, vLLM will complain about max_model_len, so you can decrease it until it runs.

@gtpgg1013
Copy link

I made a fix with recent changes on vLLM. https://github.com/seungduk-yanolja/vllm-deepseek
Assuming you have an 8xH100 machine, to run,

python -m vllm.entrypoints.openai.api_server --model deepseek-ai/DeepSeek-V2-Chat -tp 8 --served-model-name deepseek --trust-remote-code --max-model-len=2800 --enforce-eager

Thank you for your help. Can I use this setup with 8xA100 GPUs?

I think so. If you run short of memory, vLLM will complain about max_model_len, so you can decrease it until it runs.

Thanks a lot! I am going to test it and share results.

@ZixinxinWang
Copy link

I made a fix with recent changes on vLLM. https://github.com/seungduk-yanolja/vllm-deepseek
Assuming you have an 8xH100 machine, to run,

python -m vllm.entrypoints.openai.api_server --model deepseek-ai/DeepSeek-V2-Chat -tp 8 --served-model-name deepseek --trust-remote-code --max-model-len=2800 --enforce-eager

Thank you for your help. Can I use this setup with 8xA100 GPUs?

I think so. If you run short of memory, vLLM will complain about max_model_len, so you can decrease it until it runs.

Thanks a lot! I am going to test it and share results.

Thank you for trying it; what were the results?

@zwd003
Copy link
Contributor Author

zwd003 commented May 29, 2024

@zwd003 I did the refactoring of the MoE code for you, can you look into the other comments I just added?

OK

@WhatGhost
Copy link

I followed the file changes and updated my files, but when I run the command below: python -m vllm.entrypoints.openai.api_server --model /root/DeepSeek-V2-Chat --trust-remote-code the following error occurs:

INFO 05-22 06:30:51 llm_engine.py:103] Initializing an LLM engine (v0.4.2) with config: model='/root/DeepSeek-V2-Chat', speculative_config=None, tokenizer='/root/DeepSeek-V2-Chat', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=163840, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/root/DeepSeek-V2-Chat)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 05-22 06:30:52 selector.py:44] Using FlashAttention-2 backend.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/opt/conda/envs/deepseek/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/opt/conda/envs/deepseek/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]:     exec(code, run_globals)
[rank0]:   File "/root/vllm/vllm/entrypoints/openai/api_server.py", line 186, in <module>
[rank0]:     engine = AsyncLLMEngine.from_engine_args(
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 374, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 328, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/root/vllm/vllm/engine/async_llm_engine.py", line 450, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/root/vllm/vllm/engine/llm_engine.py", line 163, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/root/vllm/vllm/executor/executor_base.py", line 41, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/root/vllm/vllm/executor/gpu_executor.py", line 24, in _init_executor
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/root/vllm/vllm/worker/worker.py", line 121, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/root/vllm/vllm/worker/model_runner.py", line 133, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/__init__.py", line 21, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/loader.py", line 227, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/root/vllm/vllm/model_executor/model_loader/loader.py", line 90, in _initialize_model
[rank0]:     return model_class(config=model_config.hf_config,
[rank0]: TypeError: DeepseekV2ForCausalLM.__init__() got an unexpected keyword argument 'cache_config'

How can I solve this?

I also ran into this error, and I found that DeepseekV2ForCausalLM does not accept the cache_config argument.
How can I fix this?
Thanks!

@xxll88
Copy link

xxll88 commented May 30, 2024

same problem: [rank0]: TypeError: DeepseekV2ForCausalLM.__init__() got an unexpected keyword argument 'cache_config'

@seungduk-yanolja
Copy link

same problem: [rank0]: TypeError: DeepseekV2ForCausalLM.__init__() got an unexpected keyword argument 'cache_config'

@xxll88 @WhatGhost please use this for now: https://github.com/seungduk-yanolja/vllm-deepseek

@xxll88
Copy link

xxll88 commented May 31, 2024

same problem: [rank0]: TypeError: DeepseekV2ForCausalLM.__init__() got an unexpected keyword argument 'cache_config'

@xxll88 @WhatGhost please use this for now: https://github.com/seungduk-yanolja/vllm-deepseek

Thanks. In a CPU environment the model loads fine, but an error occurs at inference time:
File "/usr/local/lib/python3.10/dist-packages/vllm-0.4.2+cpu-py3.10-linux-x86_64.egg/vllm/_custom_ops.py", line 289, in moe_align_block_size
vllm_ops.moe_align_block_size(topk_ids, num_experts, block_size,
AttributeError: module 'vllm._C.ops' has no attribute 'moe_align_block_size'
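That AttributeError suggests the CPU build simply does not compile the fused-MoE custom ops, so the MoE path cannot run on it. A quick check (a diagnostic sketch mirroring the vllm_ops import shown in the traceback, not an official API):

# Prints False on the CPU-only wheel from the traceback; a CUDA build of the same
# vLLM version should print True.
from vllm._C import ops as vllm_ops

print(hasattr(vllm_ops, "moe_align_block_size"))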

@simon-mo simon-mo mentioned this pull request Jun 3, 2024
2 tasks
@pskun
Copy link

pskun commented Jun 4, 2024

When will this pull request be merged?

@simon-mo
Copy link
Collaborator

simon-mo commented Jun 6, 2024

Is this PR working? It is also failing lint and has a merge conflict. Once those are fixed, please ping us for a review. (cc @youkaichao to help with merging in case I'm not available.)

@fyabc
Copy link
Contributor

fyabc commented Jun 7, 2024

(Quoted @gree2's earlier comment in full: the DeepSeek-V2-Lite-Chat launch with --tensor-parallel-size 4 that fails in the fused MoE Triton kernel with RuntimeError: Triton Error [CUDA]: device kernel image is invalid.)
(RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] Traceback (most recent call last): (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker_base.py", line 137, in execute_method (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return executor(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker.py", line 138, in determine_num_available_blocks (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] self.model_runner.profile_run() (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 875, in profile_run (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] self.execute_model(seqs, kv_caches) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 793, in execute_model (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 470, in forward (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 
1541, in _call_impl (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 441, in forward (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 401, in forward (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.mlp(hidden_states) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 163, in forward (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] final_hidden_states = fused_experts( (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 455, in fused_experts (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] fused_moe_kernel[grid]( (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 425, in run (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas, # number of warps/ctas per instance (RayWorkerWrapper 
pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 255, in getattribute (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] self._init_handles() (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 250, in _init_handles (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary( (RayWorkerWrapper pid=1195524) ERROR 05-22 23:31:23 worker_base.py:145] RuntimeError: Triton Error [CUDA]: device kernel image is invalid (RayWorkerWrapper pid=1195800) INFO 05-22 23:31:22 model_runner.py:167] Loading model weights took 7.3840 GB [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] Traceback (most recent call last): [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker_base.py", line 137, in execute_method [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return executor(*args, **kwargs) [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context [repeated 6x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return func(*args, **kwargs) [repeated 6x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/worker.py", line 138, in determine_num_available_blocks [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] self.model_runner.profile_run() [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 875, in profile_run [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] self.execute_model(seqs, kv_caches) [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/worker/model_runner.py", line 793, in execute_model [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = model_executable(**execute_model_kwargs) [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl [repeated 8x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return self._call_impl(*args, **kwargs) [repeated 8x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl [repeated 8x across 
cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return forward_call(*args, **kwargs) [repeated 8x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/models/deepseek_v2.py", line 163, in forward [repeated 8x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.model(input_ids, positions, kv_caches, [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states, residual = layer(positions, hidden_states, [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] hidden_states = self.mlp(hidden_states) [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] final_hidden_states = fused_experts( [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 455, in fused_experts [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] invoke_fused_moe_kernel(hidden_states, [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/data/sdd/deploy/deepseek/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 246, in invoke_fused_moe_kernel [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] fused_moe_kernel[grid]( [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/runtime/jit.py", line 425, in run [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas, # number of warps/ctas per instance [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 255, in getattribute [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] self._init_handles() [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] File "/home/ailearn/.conda/envs/deepseek/lib/python3.10/site-packages/triton/compiler/compiler.py", line 250, in _init_handles [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary( [repeated 2x across cluster] (RayWorkerWrapper pid=1195949) ERROR 05-22 23:31:23 worker_base.py:145] RuntimeError: Triton Error [CUDA]: device kernel image is invalid [repeated 2x across cluster]

I get the same error: Triton Error [CUDA]: device kernel image is invalid.
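For what it's worth, the traceback dies inside Triton's `driver.utils.load_binary` while launching the fused MoE kernel, which is usually an environment problem (the CUDA build that PyTorch/Triton ship with not matching the driver or the GPU's compute capability, or a stale Triton kernel cache) rather than something specific to this PR. A minimal sanity check, assuming a standard PyTorch + Triton install, is something like:

import torch
import triton

# Versions and CUDA build: a mismatch between the CUDA toolkit these were
# built against and the installed driver/GPU is a common cause of
# "Triton Error [CUDA]: device kernel image is invalid".
print("torch:", torch.__version__)
print("torch CUDA build:", torch.version.cuda)
print("triton:", triton.__version__)

# The GPU Triton is actually compiling for.
print("GPU:", torch.cuda.get_device_name(0))
print("compute capability:", torch.cuda.get_device_capability(0))

If the versions look consistent, clearing the Triton JIT cache (typically `~/.triton/cache`) before retrying is a cheap next step, since a cubin compiled under a previous CUDA/driver setup can also trigger this error.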

@njhill
Copy link
Collaborator

njhill commented Jun 11, 2024

@zwd003 do you need any help to get this over the line?

Labels
new model Requests to new models