
Commit 0d5071b

chore: sglang doc, logger setting, version release (#1317)
Co-authored-by: Wendong <[email protected]>
Co-authored-by: Wendong-Fan <[email protected]>

1 parent fdc727b · commit 0d5071b

12 files changed: +109 −69 lines

.github/ISSUE_TEMPLATE/bug_report.yml

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ body:
     attributes:
       label: What version of camel are you using?
       description: Run command `python3 -c 'print(__import__("camel").__version__)'` in your shell and paste the output here.
-      placeholder: E.g., 0.2.15a0
+      placeholder: E.g., 0.2.15
     validations:
       required: true

README.md

Lines changed: 1 addition & 1 deletion
@@ -144,7 +144,7 @@ conda create --name camel python=3.10
 conda activate camel
 
 # Clone github repo
-git clone -b v0.2.15a0 https://github.com/camel-ai/camel.git
+git clone -b v0.2.15 https://github.com/camel-ai/camel.git
 
 # Change directory into project directory
 cd camel

camel/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
 
 from camel.logger import disable_logging, enable_logging, set_log_level
 
-__version__ = '0.2.15a0'
+__version__ = '0.2.15'
 
 __all__ = [
     '__version__',

camel/logger.py

Lines changed: 11 additions & 5 deletions
@@ -26,18 +26,24 @@ def _configure_library_logging():
 
     if not logging.root.handlers and not _logger.handlers:
         logging.basicConfig(
-            level=os.environ.get('LOGLEVEL', 'INFO').upper(),
+            level=os.environ.get('CAMEL_LOGGING_LEVEL', 'WARNING').upper(),
             format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
             stream=sys.stdout,
         )
         logging.setLoggerClass(logging.Logger)
-        _logger.info("CAMEL library logging has been configured.")
+        _logger.info(
+            f"CAMEL library logging has been configured "
+            f"(level: {_logger.getEffectiveLevel()}). "
+            f"To change level, use set_log_level() or "
+            "set CAMEL_LOGGING_LEVEL env var. To disable logging, "
+            "set CAMEL_LOGGING_DISABLED=true or use disable_logging()"
+        )
     else:
         _logger.debug("Existing logger configuration found, using that.")
 
 
 def disable_logging():
-    r"""Disable all logging for the Camel library.
+    r"""Disable all logging for the CAMEL library.
 
     This function sets the log level to a value higher than CRITICAL,
     effectively disabling all log messages, and adds a NullHandler to
@@ -55,7 +61,7 @@ def disable_logging():
 
 
 def enable_logging():
-    r"""Enable logging for the Camel library.
+    r"""Enable logging for the CAMEL library.
 
     This function re-enables logging if it was previously disabled,
     and configures the library logging using the default settings.
@@ -67,7 +73,7 @@ def enable_logging():
 
 
 def set_log_level(level):
-    r"""Set the logging level for the Camel library.
+    r"""Set the logging level for the CAMEL library.
 
     Args:
         level (Union[str, int]): The logging level to set. This can be a string
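
This hunk raises the default library log level from INFO to WARNING and renames the controlling environment variable from LOGLEVEL to CAMEL_LOGGING_LEVEL. As a minimal sketch of how application code would drive these controls (assuming camel-ai 0.2.15; every name below comes from the diffs above):

```python
import os

# The level env var is read when the library first configures logging,
# so set these before importing camel.
os.environ["CAMEL_LOGGING_LEVEL"] = "DEBUG"      # new default is WARNING
# os.environ["CAMEL_LOGGING_DISABLED"] = "true"  # silence the library entirely

from camel.logger import disable_logging, enable_logging, set_log_level

set_log_level("INFO")  # accepts a string name or int constant (Union[str, int])
disable_logging()      # mute all CAMEL log output
enable_logging()       # restore the default configuration
```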

docs/cookbooks/cot_data_gen_sft_qwen_unsolth_upload_huggingface.ipynb

Lines changed: 1 addition & 1 deletion
@@ -71,7 +71,7 @@
    "outputs": [],
    "source": [
     "%%capture\n",
-    "!pip install camel-ai==0.2.15a0"
+    "!pip install camel-ai==0.2.15"
    ]
   },
   {

docs/cookbooks/cot_data_gen_upload_to_huggingface.py.ipynb

Lines changed: 1 addition & 1 deletion
@@ -71,7 +71,7 @@
    "outputs": [],
    "source": [
     "%%capture\n",
-    "!pip install camel-ai==0.2.15a0"
+    "!pip install camel-ai==0.2.15"
    ]
   },
   {

docs/cookbooks/customer_service_Discord_bot_using_local_model_with_agentic_RAG.ipynb

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@
    "outputId": "72f5cfe5-60ea-48ba-e1e1-a0cdc4eb87ec"
   },
   "source": [
-    "!pip install \"camel-ai[all]==0.2.15a0\"\n",
+    "!pip install \"camel-ai[all]==0.2.15\"\n",
    "!pip install starlette\n",
    "!pip install nest_asyncio"
   ],

docs/get_started/installation.md

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ conda create --name camel python=3.10
 conda activate camel
 
 # Clone github repo
-git clone -b v0.2.15a0 https://github.com/camel-ai/camel.git
+git clone -b v0.2.15 https://github.com/camel-ai/camel.git
 
 # Change directory into project directory
 cd camel

docs/key_modules/models.md

Lines changed: 32 additions & 0 deletions
@@ -81,6 +81,7 @@ The following table lists currently supported model platforms by CAMEL.
 | vLLM | https://docs.vllm.ai/en/latest/models/supported_models.html | ----- |
 | Together AI | https://docs.together.ai/docs/chat-models | ----- |
 | LiteLLM | https://docs.litellm.ai/docs/providers | ----- |
+| SGLang | https://sgl-project.github.io/references/supported_models.html | ----- |
 
 ## 3. Using Models by API calling
 
@@ -222,6 +223,35 @@ assistant_response = agent.step(user_msg)
 print(assistant_response.msg.content)
 ```
 
+### 4.3 Using SGLang to Set Up meta-llama/Llama Locally
+
+Install [SGLang](https://sgl-project.github.io/start/install.html) first.
+
+Create and run the following script (for more details, please refer to this [example](https://github.com/camel-ai/camel/blob/master/examples/models/sglang_model_example.py)):
+
+```python
+from camel.agents import ChatAgent
+from camel.messages import BaseMessage
+from camel.models import ModelFactory
+from camel.types import ModelPlatformType
+
+sglang_model = ModelFactory.create(
+    model_platform=ModelPlatformType.SGLANG,
+    model_type="meta-llama/Llama-3.2-1B-Instruct",
+    model_config_dict={"temperature": 0.0},
+    api_key="sglang",
+)
+
+agent_sys_msg = "You are a helpful assistant."
+
+agent = ChatAgent(agent_sys_msg, model=sglang_model, token_limit=4096)
+
+user_msg = "Say hi to CAMEL AI"
+
+assistant_response = agent.step(user_msg)
+print(assistant_response.msg.content)
+```
+
 ## 5. About Model Speed
 Model speed is a crucial factor in AI application performance. It affects both user experience and system efficiency, especially in real-time or interactive tasks. In [this notebook](../cookbooks/model_speed_comparison.ipynb), we compared several models, including OpenAI’s GPT-4O Mini, GPT-4O, O1 Preview, and SambaNova's Llama series, by measuring the number of tokens each model processes per second.
 
@@ -233,6 +263,8 @@ The chart below illustrates the tokens per second achieved by each model during
 
 ![Model Speed Comparison](https://i.postimg.cc/4xByytyZ/model-speed.png)
 
+For local inference, we ran a straightforward comparison between vLLM and SGLang. SGLang demonstrated superior performance, with `meta-llama/Llama-3.2-1B-Instruct` reaching a peak speed of 220.98 tokens per second, compared to vLLM, which capped at 107.2 tokens per second.
+
 ## 6. Conclusion
 In conclusion, CAMEL empowers developers to explore and integrate these diverse models, unlocking new possibilities for innovative AI applications. The world of large language models offers a rich tapestry of options beyond just the well-known proprietary solutions. By guiding users through model selection, environment setup, and integration, CAMEL bridges the gap between cutting-edge AI research and practical implementation. Its hybrid approach, combining in-house implementations with third-party integrations, offers unparalleled flexibility and comprehensive support for LLM-based development. Don't just watch this transformation from the sidelines.
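
The speed note above cites peak throughput figures, but the commit does not include the measurement script. A hedged sketch of how a naive local throughput check might look, reusing the CAMEL API shown in this diff; the prompt, wall-clock timing, and whitespace word count standing in for tokens are illustrative assumptions, not the benchmark the authors ran:

```python
import time

from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType

# Assumes a local SGLang setup, as in the example above.
sglang_model = ModelFactory.create(
    model_platform=ModelPlatformType.SGLANG,
    model_type="meta-llama/Llama-3.2-1B-Instruct",
    model_config_dict={"temperature": 0.0},
    api_key="sglang",
)
agent = ChatAgent("You are a helpful assistant.", model=sglang_model, token_limit=4096)

start = time.perf_counter()
response = agent.step("Write three short paragraphs about camels.")
elapsed = time.perf_counter() - start

# Whitespace word count is a rough stand-in for tokens; a real benchmark
# would read token usage from the server's response metadata instead.
words = len(response.msg.content.split())
print(f"~{words / elapsed:.1f} words/sec over {elapsed:.2f}s")
```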

examples/models/sglang_model_example.py

Lines changed: 4 additions & 2 deletions
@@ -31,15 +31,17 @@
 
 Please set HF_TOKEN in your environment variables.
 export HF_TOKEN=""
+When using the OpenAI-compatible interface to run an SGLang model server,
+the base model may fail to recognize the Hugging Face default
+chat template; switching to the Instruct model resolves the issue.
 """
 load_dotenv()
 sglang_model = ModelFactory.create(
     model_platform=ModelPlatformType.SGLANG,
-    model_type="meta-llama/Llama-3.2-1B",
+    model_type="meta-llama/Llama-3.2-1B-Instruct",
     model_config_dict={"temperature": 0.0},
     api_key="sglang",
 )
-
 assistant_sys_msg = "You are a helpful assistant."
 
 agent = ChatAgent(assistant_sys_msg, model=sglang_model, token_limit=4096)
