XGenerationLab committed
Commit c779ec3 · verified · 1 Parent(s): f091199

Update README.md

Files changed (1):
  1. README.md +28 -2
README.md CHANGED
@@ -59,7 +59,7 @@ transformers >= 4.37.0
 Here is a simple code snippet for quickly using the **XiYanSQL-QwenCoder** model. We provide a Chinese version of the prompt; you just need to replace the placeholders for "question", "db_schema", and "evidence" to get started. We recommend using our [M-Schema](https://github.com/XGenerationLab/M-Schema) format for the schema; other formats such as DDL are also acceptable, but they may affect performance.
 Currently, we mainly support mainstream dialects such as SQLite, PostgreSQL, and MySQL.
 
-```
+```python
 nl2sqlite_template_cn = """你是一名{dialect}专家,现在需要阅读并理解下面的【数据库schema】描述,以及可能用到的【参考信息】,并运用{dialect}知识生成sql语句回答【用户问题】。
 【用户问题】
 {question}
@@ -74,7 +74,7 @@ nl2sqlite_template_cn = """你是一名{dialect}专家,现在需要阅读并
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "XGenerationLab/XiYanSQL-QwenCoder-32B-2412"
+model_name = "XGenerationLab/XiYanSQL-QwenCoder-14B-2502"
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
     torch_dtype=torch.bfloat16,
@@ -105,6 +105,32 @@ generated_ids = [
 ]
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
108
+
109
+ ### Inference with vLLM
110
+ ```python
111
+ from vllm import LLM, SamplingParams
112
+ from transformers import AutoTokenizer
113
+ model_path = "XGenerationLab/XiYanSQL-QwenCoder-14B-2502"
114
+ llm = LLM(model=model_path, tensor_parallel_size=8)
115
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
116
+ sampling_params = SamplingParams(
117
+ n=1,
118
+ temperature=0.1,
119
+ max_tokens=1024
120
+ )
121
+ ## dialects -> ['SQLite', 'PostgreSQL', 'MySQL']
122
+ prompt = nl2sqlite_template_cn.format(dialect="", db_schema="", question="", evidence="")
123
+ message = [{'role': 'user', 'content': prompt}]
124
+ text = tokenizer.apply_chat_template(
125
+ message,
126
+ tokenize=False,
127
+ add_generation_prompt=True
128
+ )
129
+ outputs = llm.generate([text], sampling_params=sampling_params)
130
+ response = outputs[0].outputs[0].text
131
+ ```
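For illustration, here is a minimal, self-contained sketch of how the template's placeholders might be filled before generation. The schema text and question are made-up examples (M-Schema-style, not taken from a real database), and `nl2sqlite_template_cn` is abbreviated here to the opening lines shown in the snippet above — use the full template from the README in practice:

```python
# Abbreviated stand-in for nl2sqlite_template_cn: only the opening lines of
# the real template are reproduced; the full version lives in the README.
nl2sqlite_template_cn = (
    "你是一名{dialect}专家,现在需要阅读并理解下面的【数据库schema】描述,"
    "以及可能用到的【参考信息】,并运用{dialect}知识生成sql语句回答【用户问题】。\n"
    "【用户问题】\n{question}\n"
)

# Hypothetical M-Schema-style schema text and question, for demonstration only.
db_schema = (
    "【DB_ID】 company\n"
    "# Table: employees\n"
    "[(id:INTEGER, Primary Key), (name:TEXT), (salary:REAL)]"
)
question = "List the names of employees earning more than 5000."

# str.format ignores unused keyword arguments, so all four placeholders can
# always be passed, matching how the vLLM snippet above calls .format(...).
prompt = nl2sqlite_template_cn.format(
    dialect="SQLite",
    db_schema=db_schema,
    question=question,
    evidence="",
)
print(prompt)
```

The resulting string is what gets wrapped in a chat message and passed through `apply_chat_template` before generation.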
+
+
 ## Acknowledgments
 If you find our work useful, please give us a citation or a like, so we can make a greater contribution to the open-source community!
 ```bibtex