chore: sglang doc, logger setting, version release (#1317)
Co-authored-by: Wendong <[email protected]>
Co-authored-by: Wendong-Fan <[email protected]>
3 people authored Jan 2, 2025
1 parent fdc727b commit 0d5071b
Showing 12 changed files with 109 additions and 69 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.yml
@@ -26,7 +26,7 @@ body:
attributes:
label: What version of camel are you using?
description: Run command `python3 -c 'print(__import__("camel").__version__)'` in your shell and paste the output here.
-placeholder: E.g., 0.2.15a0
+placeholder: E.g., 0.2.15
validations:
required: true

2 changes: 1 addition & 1 deletion README.md
@@ -144,7 +144,7 @@ conda create --name camel python=3.10
conda activate camel
# Clone github repo
-git clone -b v0.2.15a0 https://github.com/camel-ai/camel.git
+git clone -b v0.2.15 https://github.com/camel-ai/camel.git
# Change directory into project directory
cd camel
2 changes: 1 addition & 1 deletion camel/__init__.py
@@ -14,7 +14,7 @@

from camel.logger import disable_logging, enable_logging, set_log_level

-__version__ = '0.2.15a0'
+__version__ = '0.2.15'

__all__ = [
'__version__',
16 changes: 11 additions & 5 deletions camel/logger.py
@@ -26,18 +26,24 @@ def _configure_library_logging():

    if not logging.root.handlers and not _logger.handlers:
        logging.basicConfig(
-            level=os.environ.get('LOGLEVEL', 'INFO').upper(),
+            level=os.environ.get('CAMEL_LOGGING_LEVEL', 'WARNING').upper(),
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            stream=sys.stdout,
        )
        logging.setLoggerClass(logging.Logger)
-        _logger.info("CAMEL library logging has been configured.")
+        _logger.info(
+            f"CAMEL library logging has been configured "
+            f"(level: {_logger.getEffectiveLevel()}). "
+            f"To change level, use set_log_level() or "
+            "set CAMEL_LOGGING_LEVEL env var. To disable logging, "
+            "set CAMEL_LOGGING_DISABLED=true or use disable_logging()"
+        )
    else:
        _logger.debug("Existing logger configuration found, using that.")

def disable_logging():
-    r"""Disable all logging for the Camel library.
+    r"""Disable all logging for the CAMEL library.

    This function sets the log level to a value higher than CRITICAL,
    effectively disabling all log messages, and adds a NullHandler to
@@ -55,7 +61,7 @@ def disable_logging():


def enable_logging():
-    r"""Enable logging for the Camel library.
+    r"""Enable logging for the CAMEL library.

    This function re-enables logging if it was previously disabled,
    and configures the library logging using the default settings.
@@ -67,7 +73,7 @@


def set_log_level(level):
-    r"""Set the logging level for the Camel library.
+    r"""Set the logging level for the CAMEL library.

    Args:
        level (Union[str, int]): The logging level to set. This can be a string
@@ -71,7 +71,7 @@
"outputs": [],
"source": [
"%%capture\n",
-"!pip install camel-ai==0.2.15a0"
+"!pip install camel-ai==0.2.15"
]
},
{
2 changes: 1 addition & 1 deletion docs/cookbooks/cot_data_gen_upload_to_huggingface.py.ipynb
@@ -71,7 +71,7 @@
"outputs": [],
"source": [
"%%capture\n",
-"!pip install camel-ai==0.2.15a0"
+"!pip install camel-ai==0.2.15"
]
},
{
@@ -51,7 +51,7 @@
"outputId": "72f5cfe5-60ea-48ba-e1e1-a0cdc4eb87ec"
},
"source": [
-"!pip install \"camel-ai[all]==0.2.15a0\"\n",
+"!pip install \"camel-ai[all]==0.2.15\"\n",
"!pip install starlette\n",
"!pip install nest_asyncio"
],
2 changes: 1 addition & 1 deletion docs/get_started/installation.md
@@ -60,7 +60,7 @@ conda create --name camel python=3.10
conda activate camel
# Clone github repo
-git clone -b v0.2.15a0 https://github.com/camel-ai/camel.git
+git clone -b v0.2.15 https://github.com/camel-ai/camel.git
# Change directory into project directory
cd camel
32 changes: 32 additions & 0 deletions docs/key_modules/models.md
@@ -81,6 +81,7 @@ The following table lists currently supported model platforms by CAMEL.
| vLLM | https://docs.vllm.ai/en/latest/models/supported_models.html | ----- |
| Together AI | https://docs.together.ai/docs/chat-models | ----- |
| LiteLLM | https://docs.litellm.ai/docs/providers | ----- |
| SGLang | https://sgl-project.github.io/references/supported_models.html | ----- |

## 3. Using Models by API calling

@@ -222,6 +223,35 @@ assistant_response = agent.step(user_msg)
print(assistant_response.msg.content)
```

### 4.3 Using SGLang to Set Up meta-llama/Llama Locally

Install [SGLang](https://sgl-project.github.io/start/install.html) first.

Create and run the following script (for more details, refer to this [example](https://github.com/camel-ai/camel/blob/master/examples/models/sglang_model_example.py)):

```python
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.types import ModelPlatformType

sglang_model = ModelFactory.create(
    model_platform=ModelPlatformType.SGLANG,
    model_type="meta-llama/Llama-3.2-1B-Instruct",
    model_config_dict={"temperature": 0.0},
    api_key="sglang",
)

agent_sys_msg = "You are a helpful assistant."

agent = ChatAgent(agent_sys_msg, model=sglang_model, token_limit=4096)

user_msg = "Say hi to CAMEL AI"

assistant_response = agent.step(user_msg)
print(assistant_response.msg.content)
```

## 5. About Model Speed
Model speed is a crucial factor in AI application performance. It affects both user experience and system efficiency, especially in real-time or interactive tasks. In [this notebook](../cookbooks/model_speed_comparison.ipynb), we compared several models, including OpenAI's GPT-4o Mini, GPT-4o, o1-preview, and SambaNova's Llama series, by measuring the number of tokens each model processes per second.

@@ -233,6 +263,8 @@ The chart below illustrates the tokens per second achieved by each model during

![Model Speed Comparison](https://i.postimg.cc/4xByytyZ/model-speed.png)

For local inference, we conducted a straightforward comparison between vLLM and SGLang. SGLang demonstrated superior performance, with `meta-llama/Llama-3.2-1B-Instruct` reaching a peak speed of 220.98 tokens per second, compared to vLLM, which capped at 107.2 tokens per second.
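
Throughput figures like these come down to simple arithmetic: count generated tokens and divide by wall-clock time. A minimal harness for collecting such numbers might look like this sketch, where the token stream is a stand-in for a real streamed completion from vLLM or SGLang:

```python
import time


def tokens_per_second(stream):
    """Consume an iterator of tokens and return throughput in tokens/s."""
    start = time.perf_counter()
    count = sum(1 for _token in stream)  # drain the stream, counting tokens
    elapsed = time.perf_counter() - start
    return count / elapsed if elapsed > 0 else float("inf")


# Stand-in stream; in a real benchmark this would be the server's
# streamed completion tokens.
fake_stream = (tok for tok in ["Hello", ",", " world", "!"] * 250)
print(f"{tokens_per_second(fake_stream):.1f} tokens/s")
```

For a fair comparison, the same prompt, model, and output length should be used on both servers, and timing should start when the request is issued so that time-to-first-token is included.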

## 6. Conclusion
In conclusion, CAMEL empowers developers to explore and integrate these diverse models, unlocking new possibilities for innovative AI applications. The world of large language models offers a rich tapestry of options beyond just the well-known proprietary solutions. By guiding users through model selection, environment setup, and integration, CAMEL bridges the gap between cutting-edge AI research and practical implementation. Its hybrid approach, combining in-house implementations with third-party integrations, offers unparalleled flexibility and comprehensive support for LLM-based development. Don't just watch this transformation from the sidelines.

6 changes: 4 additions & 2 deletions examples/models/sglang_model_example.py
@@ -31,15 +31,17 @@
Please load HF_TOKEN in your environment variable:
export HF_TOKEN=""
When using the OpenAI interface to run an SGLang model server,
the base model may fail to recognize the Hugging Face default
chat template; switching to the Instruct model resolves the issue.
"""
load_dotenv()
sglang_model = ModelFactory.create(
    model_platform=ModelPlatformType.SGLANG,
-    model_type="meta-llama/Llama-3.2-1B",
+    model_type="meta-llama/Llama-3.2-1B-Instruct",
    model_config_dict={"temperature": 0.0},
    api_key="sglang",
)

assistant_sys_msg = "You are a helpful assistant."

agent = ChatAgent(assistant_sys_msg, model=sglang_model, token_limit=4096)