Python logging missing console output (solved: workaround with Rich) #685

Open
petitpoua opened this issue Feb 17, 2025 · 0 comments
Description:

I ran into an issue while trying to capture console output into a log.log file for manual inspection. I initially assumed that configuring the standard Python logging module would be sufficient, but most of the console output, particularly the "thinking" process, was missing from my logs. Only a few API calls and minimal details were being logged.

After investigating, I realized that most of the console output is produced by Rich (see smolagents' monitoring.py) rather than routed through the standard logging module. As a result, simply configuring logging handlers doesn't capture the agent's step-by-step output.

Workaround: Dual Console Logging with Rich

To solve this, you can create a dual-output system using Rich's Console class, which logs to both log.log and the terminal while preserving formatting. I needed this because I'm still getting familiar with LLM/agentic frameworks and am not ready to adopt a full observability platform; I just needed a simple way to test things.

Here is a full working script for reference (simple web search agent writing to both console and log.log file):

from smolagents import HfApiModel, LiteLLMModel, TransformersModel, DuckDuckGoSearchTool
from smolagents.agents import CodeAgent
from smolagents.monitoring import LogLevel
from rich.console import Console

available_inferences = ["hf_api", "transformers", "ollama", "litellm"]
chosen_inference = "litellm"

print(f"Chosen inference: '{chosen_inference}'")

if chosen_inference == "hf_api":
    model = HfApiModel(model_id="meta-llama/Llama-3.3-70B-Instruct")
elif chosen_inference == "transformers":
    model = TransformersModel(
        model_id="HuggingFaceTB/SmolLM2-1.7B-Instruct",
        device_map="auto",
        max_new_tokens=1000,
    )
elif chosen_inference == "ollama":
    model = LiteLLMModel(
        model_id="ollama_chat/llama3.2",
        api_base="http://localhost:11434",
        api_key="your-api-key",
        num_ctx=8192,
    )
elif chosen_inference == "litellm":
    model = LiteLLMModel(model_id="gpt-4o-mini")


# Create dual console that writes to both file and stdout
class DualConsole:
    def __init__(self, filename):
        # Note: this file handle stays open for the lifetime of the process;
        # Rich flushes on each print, so the log is still written incrementally.
        self.file_console = Console(
            file=open(filename, "a", encoding="utf-8", errors="replace"),
            force_terminal=False,  # disable ANSI escape sequences for file output
            width=120,  # wider width for file output
        )
        # Keep terminal console with default settings
        self.std_console = Console()

    def print(self, *args, **kwargs):
        try:
            # Force highlight=False for the file copy without clashing with a
            # highlight value the caller may already have passed in kwargs.
            self.file_console.print(*args, **{**kwargs, "highlight": False})
            self.std_console.print(*args, **kwargs)
        except UnicodeEncodeError as e:
            error_msg = f"Unicode handling error: {e}"
            self.file_console.print(error_msg, style="bold red")
            self.std_console.print(error_msg, style="bold red")


# Initialize dual console before creating agent
dual_console = DualConsole("log.log")

# Initialize CodeAgent with DuckDuckGo search capability
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # Web search tool for research
    model=model,
    add_base_tools=True,
    verbosity_level=LogLevel.DEBUG,
)

# Override agent's console with our dual logger
agent.logger.console = dual_console
agent.monitor.logger.console = dual_console  # Also update monitor's console


print(
    "Research Results:",
    agent.run(
        "What are the latest developments in humanoid robot technology as of early 2025? "
        "Find and summarize three key advancements from reliable sources."
    ),
)