
add support for tools for the ollama provider #662

Draft: wants to merge 16 commits into base: main
Conversation

@humcqc (Contributor) commented Jun 10, 2024

Proposal for #305; tested on llama3, does not yet work with other models.
Draft to discuss the proposal.
Based on the experimental Python implementation and the discussion here.

It's a way to have tools working until the Ollama fix is available.

To discuss whether we want this in langchain4j, quarkus-langchain4j, or both.

@jmartisk @langchain4j @geoand WDYT?

@geoand (Collaborator) left a comment

This is really interesting.

Just so it's clear - does this work with the latest Ollama version or do we still need to wait for that feature to land?

import io.quarkus.test.QuarkusUnitTest;

@Disabled("Integration tests that need an ollama server running")
public class ToolsTest {
Collaborator

We generally don't write such tests, but instead use Wiremock (see the OpenAI module for tools related tests)
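For reference, a minimal sketch of what the WireMock approach could look like (the class name and stub body are hypothetical; the tools-related tests in the OpenAI module are the real reference):

import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

class OllamaToolsWireMockTest {

    static WireMockServer server = new WireMockServer(options().dynamicPort());

    @BeforeAll
    static void setUp() {
        server.start();
        // Canned /api/chat response containing a tool call,
        // so no live Ollama server is needed for the test
        server.stubFor(post(urlEqualTo("/api/chat"))
                .willReturn(okJson("{\"model\":\"llama3\",\"done\":true,"
                        + "\"message\":{\"role\":\"assistant\",\"content\":"
                        + "\"{ \\\"tool\\\": \\\"sendAnEmail\\\", \\\"tool_input\\\": { \\\"content\\\": \\\"...\\\" } }\"}}")));
    }

    @AfterAll
    static void tearDown() {
        server.stop();
    }
}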

Comment on lines 120 to 123
return toolSpecifications.stream()
.filter(ts -> ts.name().equals(toolResponse.tool))
.map(ts -> toToolExecutionRequest(toolResponse, ts))
.toList();
Collaborator

We generally try hard to avoid lambdas in Quarkus code
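For illustration, the snippet above without lambdas would look roughly like this (a sketch keeping the same names as the PR):

List<ToolExecutionRequest> requests = new ArrayList<>();
for (ToolSpecification ts : toolSpecifications) {
    // keep only the specification matching the tool the model selected
    if (ts.name().equals(toolResponse.tool)) {
        requests.add(toToolExecutionRequest(toolResponse, ts));
    }
}
return requests;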

Collaborator

why (just curious)?

Collaborator

When Quarkus started, the team found that the lambdas had a small (but not zero) impact on memory usage.

Mind you, this on Java 8, so things may have changed substantially since then, but we still try to avoid them unless the alternative is just plain terrible.

@humcqc (Contributor Author) commented Jun 10, 2024

Just so it's clear - does this work with the latest Ollama version or do we still need to wait for that feature to land?

Yes, it works with the latest Ollama version.

@geoand (Collaborator) commented Jun 10, 2024

Very nice, I'll give it a try tomorrow

@geoand (Collaborator) commented Jun 11, 2024

This is super interesting, but unfortunately it does not work properly :(.

The issue seems to be that Ollama does not understand that the tool has been executed and keeps telling us to re-execute it.
Here is a sample interaction using the email-me-a-poem sample:

1st request:

2024-06-11 09:20:26,952 INFO  [io.qua.lan.oll.OllamaRestApi$OllamaLogger] (vert.x-eventloop-thread-2) Request:
- method: POST
- url: http://localhost:11434/api/chat
- headers: [Accept: application/json], [Content-Type: application/json], [User-Agent: Resteasy Reactive Client], [content-length: 1706]
- body: {
  "model" : "llama3",
  "messages" : [ {
    "role" : "SYSTEM",
    "content" : "You are a professional poet\nYou have access to the following tools:\n\n[ {\n  \"name\" : \"sendAnEmail\",\n  \"description\" : \"send the given content by email\",\n  \"parameters\" : {\n    \"type\" : \"object\",\n    \"properties\" : {\n      \"content\" : {\n        \"type\" : \"string\"\n      }\n    },\n    \"required\" : [ \"content\" ]\n  }\n}, {\n  \"name\" : \"__conversational_response\",\n  \"description\" : \"Respond conversationally if no other tools should be called for a given query and history.\",\n  \"parameters\" : {\n    \"type\" : \"object\",\n    \"properties\" : {\n      \"reponse\" : {\n        \"type\" : \"string\",\n        \"description\" : \"Conversational response to the user.\"\n      }\n    },\n    \"required\" : [ \"response\" ]\n  }\n} ]\n\nYou must always select one of the above tools and respond with a JSON object matching the following schema,\nand only this json object:\n{\n  \"tool\": <name of the selected tool>,\n  \"tool_input\": <parameters for the selected tool, matching the tool's JSON schema>\n}\nDo not use other tools than the ones from the list above. Always provide the \"tool_input\" field.\nIf several tools are necessary, answer them sequentially.\n\nWhen the user provides sufficient information, answer with the __conversational_response tool.\n"
  }, {
    "role" : "USER",
    "content" : "Write a poem about Quarkus. The poem should be 4 lines long.\nThen send this poem by email. Your response should include the poem.\n"
  } ],
  "options" : {
    "temperature" : 0.8,
    "top_k" : 40,
    "top_p" : 0.9
  },
  "format" : "json",
  "stream" : false
}

1st response:

2024-06-11 09:20:27,939 INFO  [io.qua.lan.oll.OllamaRestApi$OllamaLogger] (vert.x-eventloop-thread-2) Response:
- status code: 200
- headers: [Content-Type: application/json; charset=utf-8], [Date: Tue, 11 Jun 2024 06:20:27 GMT], [Content-Length: 537]
- body: {"model":"llama3","created_at":"2024-06-11T06:20:27.933221683Z","message":{"role":"assistant","content":"{ \"tool\": \"sendAnEmail\", \"tool_input\": \n  { \"content\": \n    \"In Quarkus, where Java flows free,\nA stream of innovation, for you and me.\nWith microservices, it's a world to see,\nA new way to code, wild and carefree.\" } }\n\n\n\n  \n "},"done_reason":"stop","done":true,"total_duration":980876099,"load_duration":913527,"prompt_eval_count":224,"prompt_eval_duration":127446000,"eval_count":70,"eval_duration":718718000}

After this the extension properly executed the tool:

2024-06-11 09:20:27,993 INFO  [io.qua.lan.sam.EmailService] (executor-thread-1) Sending an email

Then the following is sent to Ollama:

2024-06-11 09:20:28,002 INFO  [io.qua.lan.oll.OllamaRestApi$OllamaLogger] (vert.x-eventloop-thread-2) Request:
- method: POST
- url: http://localhost:11434/api/chat
- headers: [Accept: application/json], [Content-Type: application/json], [User-Agent: Resteasy Reactive Client], [content-length: 1792]
- body: {
  "model" : "llama3",
  "messages" : [ {
    "role" : "SYSTEM",
    "content" : "You are a professional poet\nYou have access to the following tools:\n\n[ {\n  \"name\" : \"sendAnEmail\",\n  \"description\" : \"send the given content by email\",\n  \"parameters\" : {\n    \"type\" : \"object\",\n    \"properties\" : {\n      \"content\" : {\n        \"type\" : \"string\"\n      }\n    },\n    \"required\" : [ \"content\" ]\n  }\n}, {\n  \"name\" : \"__conversational_response\",\n  \"description\" : \"Respond conversationally if no other tools should be called for a given query and history.\",\n  \"parameters\" : {\n    \"type\" : \"object\",\n    \"properties\" : {\n      \"reponse\" : {\n        \"type\" : \"string\",\n        \"description\" : \"Conversational response to the user.\"\n      }\n    },\n    \"required\" : [ \"response\" ]\n  }\n} ]\n\nYou must always select one of the above tools and respond with a JSON object matching the following schema,\nand only this json object:\n{\n  \"tool\": <name of the selected tool>,\n  \"tool_input\": <parameters for the selected tool, matching the tool's JSON schema>\n}\nDo not use other tools than the ones from the list above. Always provide the \"tool_input\" field.\nIf several tools are necessary, answer them sequentially.\n\nWhen the user provides sufficient information, answer with the __conversational_response tool.\n"
  }, {
    "role" : "USER",
    "content" : "Write a poem about Quarkus. The poem should be 4 lines long.\nThen send this poem by email. Your response should include the poem.\n"
  }, {
    "role" : "ASSISTANT"
  }, {
    "role" : "USER",
    "content" : "Success"
  } ],
  "options" : {
    "temperature" : 0.8,
    "top_k" : 40,
    "top_p" : 0.9
  },
  "format" : "json",
  "stream" : false
}

The response however is now problematic:

2024-06-11 09:20:28,888 INFO  [io.qua.lan.oll.OllamaRestApi$OllamaLogger] (vert.x-eventloop-thread-2) Response:
- status code: 200
- headers: [Content-Type: application/json; charset=utf-8], [Date: Tue, 11 Jun 2024 06:20:28 GMT], [Content-Length: 548]
- body: {"model":"llama3","created_at":"2024-06-11T06:20:28.887595616Z","message":{"role":"assistant","content":"{ \"tool\": \"sendAnEmail\", \"tool_input\": { \"content\": \"Quarkus, a framework so fine,\nBuilt for Java, with Quarkus divine.\nIt brings us power, and speed to our code,\nAnd makes our apps shine like a star in the road.\n\nBest regards, [Your Name]\" } }"},"done_reason":"stop","done":true,"total_duration":885540061,"load_duration":1477410,"prompt_eval_count":12,"prompt_eval_duration":68617000,"eval_count":70,"eval_duration":677705000}

As you can see, it tells us to execute the tool again... This keeps happening, with the sequence never ending on the Ollama side.

* Whether to enable the experimental tools
*/
@WithDefault("false")
Optional<Boolean> experimentalTools();
Collaborator

Shouldn't we rather just name it tools and mark it as experimental in a comment? To avoid having to do a breaking change once we don't consider it experimental anymore...
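For context, with the current method name the property would presumably be set like this in application.properties (the quarkus.langchain4j.ollama prefix is an assumption based on the module's other options):

quarkus.langchain4j.ollama.experimental-tools=true
# after dropping the "experimental" part it would simply become:
# quarkus.langchain4j.ollama.tools=true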

Collaborator

I would name it enableTools

Contributor Author

I think this will stay experimental until Ollama implements the tools feature.

@humcqc (Contributor Author) commented Jun 11, 2024

As you can see, it tells us to execute the tool again... This keeps happening, with the sequence never ending on the Ollama side.

Yes, the issue with this approach is that the LLM needs to be aware that the tool has been executed. We could have a simplified approach where we just trigger one tool without recursion, OR the tools should always answer with a status for the LLM.

I will try to add an example with the sendPoem sample.
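To illustrate the status idea with the poem example (just a sketch; names mirror the sample but are illustrative):

@Tool("send the given content by email")
public String sendAnEmail(String content) {
    emailService.send(content);
    // Returning an explicit status instead of void leaves evidence in the
    // chat history that the tool already ran, so the LLM can stop re-requesting it
    return "Email sent successfully";
}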

@humcqc (Contributor Author) commented Jun 12, 2024

I've updated the prompt and added some tool history to the user messages, but I haven't found a good way to avoid selecting the same tool twice. Perhaps @langchain4j, @jmartisk or @geoand can help here?
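One possible direction (a sketch of the deduplication idea only, not code from the PR): inside the executor loop, remember which (tool, arguments) pairs already ran and skip exact repeats:

Set<String> alreadyExecuted = new HashSet<>();
for (ToolExecutionRequest request : toolExecutionRequests) {
    // Key on tool name + raw arguments: an identical repeat is skipped,
    // while the same tool with different arguments still runs
    String key = request.name() + "|" + request.arguments();
    if (alreadyExecuted.add(key)) {
        toolExecutor.execute(request, memoryId); // hypothetical executor call
    }
}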

@geoand (Collaborator) commented Jun 13, 2024

By selecting twice, do you mean the tool gets executed twice?

@humcqc (Contributor Author) commented Jun 13, 2024

By selecting twice, do you mean the tool gets executed twice?

yes

@geoand (Collaborator) commented Jun 13, 2024

We can't really do much here, the LLM is supposed to decide which tools need to get executed as complex workflows may need to have multiple tool invocations. OpenAI handles this seamlessly.

@humcqc (Contributor Author) commented Jun 13, 2024

We can't really do much here, the LLM is supposed to decide which tools need to get executed as complex workflows may need to have multiple tool invocations. OpenAI handles this seamlessly.

Yes, but it's weird that this one https://github.com/quarkiverse/quarkus-langchain4j/pull/662/files#diff-4cad3d1a7b72dca01c9cf8f6019dfdc9c8949b729fdafe2cbda381631db6f88bR34 seems to work correctly, even though it is more complex than the send-a-poem one.

I think I'm missing the correct inputs/prompt to tell the LLM that the action has been executed.

@geoand (Collaborator) commented Jun 13, 2024

In that case, I would turn on logging of requests and responses and compare the one that works with the one that does not.
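In quarkus-langchain4j that should just be configuration (property names as I recall them from the Ollama module; worth double-checking against the docs):

quarkus.langchain4j.ollama.log-requests=true
quarkus.langchain4j.ollama.log-responses=true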

@geoand (Collaborator) commented Jun 13, 2024

By the way, I want to clarify that if we can get this to work properly, it's a no-brainer for inclusion :)

@humcqc (Contributor Author) commented Jun 15, 2024

New approach: ask the LLM to create a list of tools to execute and then respond with the previous result.
Seems to work with llama3, not yet with other models.
Tests in https://github.com/quarkiverse/quarkus-langchain4j/pull/662/files#diff-d06a2b262b5211fac51ddebbe50152fd0ea4e93e0ee0ff5f6e764eb5d649827c

Needs some modification in core: https://github.com/quarkiverse/quarkus-langchain4j/pull/662/files#diff-2dd3bec40934ad6d175f6f14dad1af0e11c234cf5fec69739a89460d472ab55b
I added some logic to use previous tool results as input for the next tools and the response; see the sketch below.

I need to check the broken OpenAI tests, but they don't work on my side.

Still in progress, but the main part could be done in langchain4j and then used in the Ollama models from both LangChain4j and quarkus-langchain4j.
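The "previous results as input" part could look something like this (a sketch only; the placeholder syntax is hypothetical, not what the PR uses):

// replace placeholders like {{stringLength_result}} in the next tool's
// arguments with the result of an earlier tool execution
String arguments = request.arguments();
for (Map.Entry<String, String> previous : previousResults.entrySet()) {
    arguments = arguments.replace("{{" + previous.getKey() + "}}", previous.getValue());
}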

@humcqc (Contributor Author) commented Jun 16, 2024

https://github.com/quarkiverse/quarkus-langchain4j/pull/662/files#diff-2dd3bec40934ad6d175f6f14dad1af0e11c234cf5fec69739a89460d472ab55bR235

In order to replace AI responses containing variables with function results, I've changed the order of the chat memory.
With my changes we add the function result and then the AI response,
but in the tests you are expecting the AI response first and then the function results.

WDYT? Could I change the message order in the tests, or should I keep the existing order and adapt the tool executor part?
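For clarity, the two orderings being discussed, sketched with LangChain4j's message types (contents illustrative):

// order currently expected by the tests:
memory.add(aiMessage);                   // AI response requesting the tool
memory.add(toolExecutionResultMessage);  // then the function result

// order produced by this change:
memory.add(toolExecutionResultMessage);  // function result first
memory.add(aiMessage);                   // then the AI response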

@humcqc (Contributor Author) commented Jun 24, 2024

IMO yes, I don't see why this should be specific to Quarkus

OK here it is -> langchain4j/langchain4j#1353

That said, it's a little bit confusing; there is a lot of code that is not taken from LangChain4j.

@geoand (Collaborator) commented Jun 24, 2024

A lot of the tools and AI service related code needs to be different in Quarkus in order to account for doing stuff at build time

@humcqc (Contributor Author) commented Jun 24, 2024

A lot of the tools and AI service related code needs to be different in Quarkus in order to account for doing stuff at build time

Yes, that's why I've isolated the new tools logic in LangChain4j: to be able to use it in quarkus-langchain4j too.

@geoand (Collaborator) commented Jun 24, 2024

🙏

@geoand (Collaborator) commented Jun 25, 2024

I will take another look today

@humcqc (Contributor Author) commented Jun 25, 2024

I will take another look today

Thanks @geoand. For the moment the Quarkus part is on pause: I've implemented it in langchain4j and will integrate it back into Quarkus later.

@humcqc humcqc closed this Jun 25, 2024
@humcqc humcqc reopened this Jun 25, 2024
@geoand (Collaborator) commented Jun 25, 2024

Do you have a LangChain4j example I can try your change on?

@humcqc (Contributor Author) commented Jun 25, 2024

Do you have a LangChain4j example I can try your change on?

https://github.com/langchain4j/langchain4j/pull/1353/files#diff-424a65e68468de22a04a437a4868784144917f4c87a0b08f49712ce2dfbe2661R19

@geoand (Collaborator) commented Jun 25, 2024

I was not able to get that test to pass locally

@humcqc (Contributor Author) commented Jun 25, 2024

I was not able to get that test to pass locally

What was the issue?

@geoand (Collaborator) commented Jun 25, 2024

Assertions failed :)

@humcqc (Contributor Author) commented Jun 25, 2024

:) OK, but which tests? All tests from AiServicesWithOllamaToolsIT?
And what are the errors?

@geoand (Collaborator) commented Jun 25, 2024

[ERROR] Failures: 
[ERROR]   AiServicesWithOllamaToolsIT$Llama3Sequential>AiServicesWithOllamaToolsSequentialIT.should_execute_length_sum_square:47 
Expecting actual:
  "The square root of the sum of the numbers of letters in the words 'hello' and 'world' is 3.0."
to contain:
  "3.16" 
[ERROR]   AiServicesWithOllamaToolsIT$Llama3Sequential>AiServicesWithOllamaToolsBaseIT$BaseTests.should_execute_length_sum_square_no_chat_memory:421 
Expecting actual:
  "The square root of the sum of the numbers of letters in the words 'hello' and 'world' is 3.0."
to contain:
  "3.16" 
[ERROR]   AiServicesWithOllamaToolsIT$Llama3Sequential>AiServicesWithOllamaToolsBaseIT$BaseTests.should_give_simple_result:142 
Expecting actual:
  "The result of 1 + 1 is... (drumroll please)... 2!"
to contain:
  "The result of 1+1 is 2." 
[INFO] 
[ERROR] Tests run: 10, Failures: 3, Errors: 0, Skipped: 0

@geoand (Collaborator) commented Jun 25, 2024

I should add that this is running against my local Ollama server, not a Docker container.

@humcqc (Contributor Author) commented Jun 25, 2024

I should add that this is running against my local Ollama server, not a Docker container.

Your Ollama server seems funnier than mine: "The result of 1 + 1 is... (drumroll please)... 2!" :)
I've just updated my Ollama server to see if it was related to that, but it is still working on mine.

Did you tune your Ollama server to be funnier after Quarkus Insight 170?

I could adjust the checks to be more flexible.

But it would be better if I manage to reproduce this and adjust the sequential prompt.

Are all the parallel tests working?

@geoand (Collaborator) commented Jun 25, 2024

Did you tune your Ollama server to be funnier after Quarkus Insight 170?

Nope, just stock Ollama

Are all the parallel tests working?

[ERROR] Tests run: 10, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 15.39 s <<< FAILURE! -- in dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsIT$Llama3Parallel
[ERROR] dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsIT$Llama3Parallel.should_send_a_poem -- Time elapsed: 1.747 s <<< ERROR!
java.lang.NullPointerException: Cannot invoke "java.util.Map.containsKey(Object)" because "argumentsMap" is null
	at dev.langchain4j.agent.tool.DefaultToolExecutor.prepareArguments(DefaultToolExecutor.java:90)
	at dev.langchain4j.agent.tool.DefaultToolExecutor.execute(DefaultToolExecutor.java:35)
	at dev.langchain4j.service.DefaultAiServices$1.invoke(DefaultAiServices.java:168)
	at jdk.proxy2/jdk.proxy2.$Proxy60.writeAPoem(Unknown Source)
	at dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsBaseIT$BaseTests.should_send_a_poem(AiServicesWithOllamaToolsBaseIT.java:472)
	... (remaining reflection, JUnit Platform, and Maven Surefire frames elided)
[ERROR] dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsIT$Llama3Parallel.should_give_simple_result -- Time elapsed: 0.414 s <<< FAILURE!
java.lang.AssertionError: 
Expecting actual:
  "The result of 1 + 1 is... (drumroll please)... 2!"
to contain:
  "The result of 1+1 is 2." 

@humcqc (Contributor Author) commented Jun 25, 2024

Weird! Could you test with the Ollama container? It will take longer if you are on a Mac (about 30 minutes), but just to check whether it behaves the same.

Perhaps there is some context on your local Ollama server that produces those issues.

@humcqc (Contributor Author) commented Jun 25, 2024

I'm looking at using the new classes in quarkus-langchain4j, but it seems a lot of standard code has been duplicated: ChatRequest, Response, OllamaClient (this one should stay different, but could use a dedicated interface), Roles, ...

Do you see any issue if I put them back into LangChain4j, or if I use interfaces where needed, like for OllamaClient?

@geoand (Collaborator) commented Jun 25, 2024

Weird! Could you test with the Ollama container? It will take longer if you are on a Mac (about 30 minutes), but just to check whether it behaves the same.

I'll try tomorrow.

Do you see any issue if I put them back into LangChain4j, or if I use interfaces where needed, like for OllamaClient?

I think they were duplicated so we could move fast. The best solution would be an approach like the one we use for the Mistral client, where an SPI is introduced in LangChain4j and then used in Quarkus to supply the proper client.
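A rough sketch of what such an SPI could look like (all names hypothetical; the Mistral client SPI is the actual template to follow):

// in LangChain4j, discovered via java.util.ServiceLoader
public interface OllamaClientFactory {
    OllamaClient create(OllamaClientConfig config);
}

// default lookup: Quarkus would register its REST-client-backed factory,
// while plain LangChain4j falls back to the built-in client
static OllamaClient resolveClient(OllamaClientConfig config) {
    return ServiceLoader.load(OllamaClientFactory.class)
            .findFirst()
            .map(f -> f.create(config))
            .orElseGet(() -> new DefaultOllamaClient(config));
}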

@geoand (Collaborator) commented Jun 26, 2024

Using the container, I got this:

[ERROR] Tests run: 10, Failures: 2, Errors: 1, Skipped: 0, Time elapsed: 501.5 s <<< FAILURE! -- in dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsIT$Llama3Sequential
[ERROR] dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsIT$Llama3Sequential.should_send_a_poem -- Time elapsed: 18.05 s <<< ERROR!
java.lang.NullPointerException: Cannot invoke "java.util.Map.containsKey(Object)" because "argumentsMap" is null
	at dev.langchain4j.agent.tool.DefaultToolExecutor.prepareArguments(DefaultToolExecutor.java:90)
	at dev.langchain4j.agent.tool.DefaultToolExecutor.execute(DefaultToolExecutor.java:35)
	at dev.langchain4j.service.DefaultAiServices$1.invoke(DefaultAiServices.java:168)
	at jdk.proxy2/jdk.proxy2.$Proxy71.writeAPoem(Unknown Source)
	at dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsBaseIT$BaseTests.should_send_a_poem(AiServicesWithOllamaToolsBaseIT.java:473)
	... (remaining reflection, JUnit Platform, and Maven Surefire frames elided)

[ERROR] dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsIT$Llama3Sequential.should_execute_length_sum_square_no_chat_memory -- Time elapsed: 2.242 s <<< FAILURE!
Wanted but not invoked:
calculator.stringLength("hello");
-> at dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsBaseIT$BaseTests.should_execute_length_sum_square_no_chat_memory(AiServicesWithOllamaToolsBaseIT.java:433)

However, there were exactly 2 interactions with this mock:
calculator.add(5, 5);
-> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

calculator.sqrt(10.0d);
-> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)


	at dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsBaseIT$BaseTests.should_execute_length_sum_square_no_chat_memory(AiServicesWithOllamaToolsBaseIT.java:433)
	... (remaining reflection, JUnit Platform, and Maven Surefire frames elided)

[ERROR] dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsIT$Llama3Sequential.should_execute_length_sum_square -- Time elapsed: 2.276 s <<< FAILURE!
Wanted but not invoked:
calculator.stringLength("hello");
-> at dev.langchain4j.model.ollama.service.AiServicesWithOllamaToolsSequentialIT.should_execute_length_sum_square(AiServicesWithOllamaToolsSequentialIT.java:57)

However, there were exactly 2 interactions with this mock:
calculator.add(5, 5);
-> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

calculator.sqrt(10.0d);
-> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

@humcqc (Contributor Author) commented Jun 26, 2024

Woooo!! Could you add the Ollama request/response logs for those errors?
Do you have any specific environment variables for Java, like -parameters or others?

During the quarkus-langchain4j integration I've seen that the tool definition is not built the same way in LangChain4j and quarkus-langchain4j because of -parameters, and it impacts the LLM response: the same test that works in LangChain4j does not work in quarkus-langchain4j.
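For context: the -parameters javac flag keeps real method parameter names in the bytecode; without it, reflection only sees arg0, arg1, ... (which is exactly what shows up in the tool schemas in the logs above). Quarkus-generated projects typically enable it, while a plain Maven build needs something like:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <parameters>true</parameters>
    </configuration>
</plugin>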

@geoand (Collaborator) commented Jun 26, 2024

Here is the entire interaction:

- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"getTransactionAmount\",\n    \"description\": \"returns amount of a given transaction\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"ID of a transaction\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the amounts of transaction T001?"}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:20 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:20 GMT
Content-Length: 344
- body: {"model":"llama3","created_at":"2024-06-26T08:49:20.806168091Z","message":{"role":"assistant","content":"{\"name\": \"getTransactionAmount\", \"inputs\": {\"arg0\": \"T001\"}}"},"done":true,"total_duration":2546471532,"load_duration":2199806868,"prompt_eval_count":389,"prompt_eval_duration":145063000,"eval_count":20,"eval_duration":199170000}
2024-06-26 11:49:20 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "getTransactionAmount", arguments = "{
  "arg0": "T001"
}" } for memoryId default
called getTransactionAmount(T001)
2024-06-26 11:49:20 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: 11.1
2024-06-26 11:49:20 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"getTransactionAmount\",\n    \"description\": \"returns amount of a given transaction\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"ID of a transaction\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the amounts of transaction T001?"},{"role":"ASSISTANT","content":"Please provide result for getTransactionAmount ({  \"arg0\": \"T001\"})"},{"role":"USER","content":"Result  of getTransactionAmount ({  \"arg0\": \"T001\"}) is 11.1 ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:21 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:21 GMT
Content-Length: 381
- body: {"model":"llama3","created_at":"2024-06-26T08:49:21.264198209Z","message":{"role":"assistant","content":"{\"name\": \"__conversational_response\", \"inputs\": {\"response\": \"The amount of transaction T001 is 11.1.\"}}"},"done":true,"total_duration":445105190,"load_duration":271612,"prompt_eval_count":52,"prompt_eval_duration":65938000,"eval_count":30,"eval_duration":375168000}
2024-06-26 11:49:21 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"processStrings\",\n    \"description\": \"Processes array of strings\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"Array of strings to process\",\n        \"type\": \"array\",\n        \"items\": {\n          \"type\": \"string\"\n        }\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"Process strings \u0027cat\u0027 and \u0027dog\u0027 together in a list, do not separate them!"}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:21 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:21 GMT
Content-Length: 343
- body: {"model":"llama3","created_at":"2024-06-26T08:49:21.736642598Z","message":{"role":"assistant","content":"{\"name\": \"processStrings\", \"inputs\": {\"arg0\": [\"cat\", \"dog\"]}}"},"done":true,"total_duration":422249802,"load_duration":227532,"prompt_eval_count":374,"prompt_eval_duration":131943000,"eval_count":22,"eval_duration":286917000}
2024-06-26 11:49:21 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "processStrings", arguments = "{
  "arg0": [
    "cat",
    "dog"
  ]
}" } for memoryId default
called processStrings([cat, dog])
2024-06-26 11:49:21 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: Success
2024-06-26 11:49:21 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"processStrings\",\n    \"description\": \"Processes array of strings\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"Array of strings to process\",\n        \"type\": \"array\",\n        \"items\": {\n          \"type\": \"string\"\n        }\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"Process strings \u0027cat\u0027 and \u0027dog\u0027 together in a list, do not separate them!"},{"role":"ASSISTANT","content":"Please provide result for processStrings ({  \"arg0\": [    \"cat\",    \"dog\"  ]})"},{"role":"USER","content":"Result  of processStrings ({  \"arg0\": [    \"cat\",    \"dog\"  ]}) is Success ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:22 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:22 GMT
Content-Length: 383
- body: {"model":"llama3","created_at":"2024-06-26T08:49:22.149059112Z","message":{"role":"assistant","content":"{\"name\": \"__conversational_response\", \"inputs\": {\"response\": \"The processed strings are ['cat', 'dog'].\"}}"},"done":true,"total_duration":408868582,"load_duration":274493,"prompt_eval_count":63,"prompt_eval_duration":64698000,"eval_count":28,"eval_duration":340232000}
2024-06-26 11:49:22 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"processStrings\",\n    \"description\": \"Processes list of strings\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"List of strings to process\",\n        \"type\": \"array\",\n        \"items\": {\n          \"type\": \"string\"\n        }\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"Process strings \u0027cat\u0027 and \u0027dog\u0027 together, do not separate them!. Use format [\u0027cat\u0027, \u0027dog\u0027] for the list of strings."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:22 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:22 GMT
Content-Length: 343
- body: {"model":"llama3","created_at":"2024-06-26T08:49:22.581886292Z","message":{"role":"assistant","content":"{\"name\": \"processStrings\", \"inputs\": {\"arg0\": [\"cat\", \"dog\"]}}"},"done":true,"total_duration":417476540,"load_duration":287022,"prompt_eval_count":377,"prompt_eval_duration":139677000,"eval_count":22,"eval_duration":274253000}
2024-06-26 11:49:22 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "processStrings", arguments = "{
  "arg0": [
    "cat",
    "dog"
  ]
}" } for memoryId default
called processStrings([cat, dog])
2024-06-26 11:49:22 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: Success
2024-06-26 11:49:22 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"processStrings\",\n    \"description\": \"Processes list of strings\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"List of strings to process\",\n        \"type\": \"array\",\n        \"items\": {\n          \"type\": \"string\"\n        }\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"Process strings \u0027cat\u0027 and \u0027dog\u0027 together, do not separate them!. Use format [\u0027cat\u0027, \u0027dog\u0027] for the list of strings."},{"role":"ASSISTANT","content":"Please provide result for processStrings ({  \"arg0\": [    \"cat\",    \"dog\"  ]})"},{"role":"USER","content":"Result  of processStrings ({  \"arg0\": [    \"cat\",    \"dog\"  ]}) is Success ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:22 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:22 GMT
Content-Length: 384
- body: {"model":"llama3","created_at":"2024-06-26T08:49:22.968524631Z","message":{"role":"assistant","content":"{\"name\": \"__conversational_response\", \"inputs\": {\"response\": \"The processed strings are: ['cat', 'dog'].\"}}"},"done":true,"total_duration":383903404,"load_duration":271722,"prompt_eval_count":63,"prompt_eval_duration":65591000,"eval_count":29,"eval_duration":314332000}
2024-06-26 11:49:22 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\nYou are a professional poet\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"sendAnEmail\",\n    \"description\": \"send the given content by email\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"Content to send\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"Write a poem about Condominium Rives de marne. The poem should be 4 lines long. Then send this poem by email.\n"}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:41 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:41 GMT
Transfer-Encoding: chunked
- body: {"model":"llama3","created_at":"2024-06-26T08:49:41.017630599Z","message":{"role":"assistant","content":"{\"name\": \"sendAnEmail\", \"arg0\": \"Subject: A Poem about Condominium Rives de marne\\n\\nHere is the poem:\\nCondominium Rives de marne, a sight to behold\\nWith waters flowing, and sunsets of gold\\nA place where life meets art, in harmony so fine\\nA haven for those seeking serenity's shrine\"}\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n 
\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n 
\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n"},"done":true,"total_duration":18034601187,"load_duration":250493,"prompt_eval_count":399,"prompt_eval_duration":255480000,"eval_count":1635,"eval_duration":17775965000}
2024-06-26 11:49:41 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "sendAnEmail", arguments = "null" } for memoryId default
2024-06-26 11:49:41 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"currentTemperature\",\n    \"description\": \"Give the temperature for a given city and unit\",\n    \"properties\": {\n      \"arg1\": {\n        \"type\": \"string\",\n        \"enum\": [\n          \"CELSIUS\",\n          \"fahrenheit\",\n          \"Kelvin\"\n        ]\n      },\n      \"arg0\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\",\n      \"arg1\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the temperature in Munich now, in kelvin?"}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:41 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:41 GMT
Content-Length: 363
- body: {"model":"llama3","created_at":"2024-06-26T08:49:41.536027447Z","message":{"role":"assistant","content":"{\"name\": \"currentTemperature\", \n\"inputs\": {\"arg0\": \"Munich\", \"arg1\": \"Kelvin\"}}"},"done":true,"total_duration":500625565,"load_duration":424494,"prompt_eval_count":417,"prompt_eval_duration":142404000,"eval_count":29,"eval_duration":352422000}
2024-06-26 11:49:41 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "currentTemperature", arguments = "{
  "arg0": "Munich",
  "arg1": "Kelvin"
}" } for memoryId default
called currentTemperature(Munich, Kelvin)
2024-06-26 11:49:41 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: 42
2024-06-26 11:49:41 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"currentTemperature\",\n    \"description\": \"Give the temperature for a given city and unit\",\n    \"properties\": {\n      \"arg1\": {\n        \"type\": \"string\",\n        \"enum\": [\n          \"CELSIUS\",\n          \"fahrenheit\",\n          \"Kelvin\"\n        ]\n      },\n      \"arg0\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\",\n      \"arg1\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the temperature in Munich now, in kelvin?"},{"role":"ASSISTANT","content":"Please provide result for currentTemperature ({  \"arg0\": \"Munich\",  \"arg1\": \"Kelvin\"})"},{"role":"USER","content":"Result  of currentTemperature ({  \"arg0\": \"Munich\",  \"arg1\": \"Kelvin\"}) is 42 ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:41 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:41 GMT
Content-Length: 389
- body: {"model":"llama3","created_at":"2024-06-26T08:49:41.945917234Z","message":{"role":"assistant","content":"{\"name\": \"__conversational_response\", \"inputs\": {\"response\": \"The current temperature in Munich is 42 Kelvin.\"}}"},"done":true,"total_duration":407407195,"load_duration":227003,"prompt_eval_count":68,"prompt_eval_duration":66640000,"eval_count":28,"eval_duration":337218000}
2024-06-26 11:49:41 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"add\",\n    \"description\": \"Calculates the sum of two numbers\",\n    \"properties\": {\n      \"arg1\": {\n        \"type\": \"integer\"\n      },\n      \"arg0\": {\n        \"type\": \"integer\"\n      }\n    },\n    \"required\": [\n      \"arg0\",\n      \"arg1\"\n    ]\n  },\n  {\n    \"name\": \"sqrt\",\n    \"description\": \"Calculates the square root of a number\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"number\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"stringLength\",\n    \"description\": \"Calculates the length of a string\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the square root of the sum of the numbers of letters in the words hello and world. "}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:43 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:43 GMT
Content-Length: 360
- body: {"model":"llama3","created_at":"2024-06-26T08:49:43.204632571Z","message":{"role":"assistant","content":"{ \"name\": \"add\", \n  \"inputs\": { \"arg0\": 5, \"arg1\": 5 } }\n\n\n\n  \n\n\n\n\n\n "},"done":true,"total_duration":1244537668,"load_duration":261382,"prompt_eval_count":496,"prompt_eval_duration":274522000,"eval_count":34,"eval_duration":966474000}
2024-06-26 11:49:43 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "add", arguments = "{
  "arg0": 5,
  "arg1": 5
}" } for memoryId default
2024-06-26 11:49:43 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: 10
2024-06-26 11:49:43 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"add\",\n    \"description\": \"Calculates the sum of two numbers\",\n    \"properties\": {\n      \"arg1\": {\n        \"type\": \"integer\"\n      },\n      \"arg0\": {\n        \"type\": \"integer\"\n      }\n    },\n    \"required\": [\n      \"arg0\",\n      \"arg1\"\n    ]\n  },\n  {\n    \"name\": \"sqrt\",\n    \"description\": \"Calculates the square root of a number\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"number\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"stringLength\",\n    \"description\": \"Calculates the length of a string\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the square root of the sum of the numbers of letters in the words hello and world. "},{"role":"ASSISTANT","content":"Please provide result for add ({  \"arg0\": 5,  \"arg1\": 5})"},{"role":"USER","content":"Result  of add ({  \"arg0\": 5,  \"arg1\": 5}) is 10 ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:43 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:43 GMT
Content-Length: 316
- body: {"model":"llama3","created_at":"2024-06-26T08:49:43.545061284Z","message":{"role":"assistant","content":"{\"name\": \"sqrt\", \"inputs\": {\"arg0\": 10}}"},"done":true,"total_duration":337950350,"load_duration":249052,"prompt_eval_count":60,"prompt_eval_duration":153566000,"eval_count":17,"eval_duration":180200000}
2024-06-26 11:49:43 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "sqrt", arguments = "{
  "arg0": 10
}" } for memoryId default
2024-06-26 11:49:43 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: 3.1622776601683795
2024-06-26 11:49:43 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"add\",\n    \"description\": \"Calculates the sum of two numbers\",\n    \"properties\": {\n      \"arg1\": {\n        \"type\": \"integer\"\n      },\n      \"arg0\": {\n        \"type\": \"integer\"\n      }\n    },\n    \"required\": [\n      \"arg0\",\n      \"arg1\"\n    ]\n  },\n  {\n    \"name\": \"sqrt\",\n    \"description\": \"Calculates the square root of a number\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"number\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"stringLength\",\n    \"description\": \"Calculates the length of a string\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the square root of the sum of the numbers of letters in the words hello and world. "},{"role":"ASSISTANT","content":"Please provide result for add ({  \"arg0\": 5,  \"arg1\": 5})"},{"role":"USER","content":"Result  of add ({  \"arg0\": 5,  \"arg1\": 5}) is 10 ."},{"role":"ASSISTANT","content":"Please provide result for sqrt ({  \"arg0\": 10})"},{"role":"USER","content":"Result  of sqrt ({  \"arg0\": 10}) is 3.1622776601683795 ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:44 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:44 GMT
Content-Length: 450
- body: {"model":"llama3","created_at":"2024-06-26T08:49:44.183748254Z","message":{"role":"assistant","content":"{\"name\": \"__conversational_response\", \"inputs\": {\"response\": \"The square root of the sum of the numbers of letters in the words 'hello' and 'world' is approximately 3.16.\"}}"},"done":true,"total_duration":636154087,"load_duration":294882,"prompt_eval_count":52,"prompt_eval_duration":97366000,"eval_count":46,"eval_duration":534399000}
2024-06-26 11:49:44 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"processIntegers\",\n    \"description\": \"Processes list of integers\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"List of integers to process\",\n        \"type\": \"array\",\n        \"items\": {\n          \"type\": \"integer\"\n        }\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"Process integers 1 and 2 together, do not separate them!"}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:44 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:44 GMT
Content-Length: 334
- body: {"model":"llama3","created_at":"2024-06-26T08:49:44.633326136Z","message":{"role":"assistant","content":"{\"name\": \"processIntegers\", \n\"inputs\": {\"arg0\": [1, 2]}}"},"done":true,"total_duration":433511415,"load_duration":237952,"prompt_eval_count":369,"prompt_eval_duration":133164000,"eval_count":24,"eval_duration":297171000}
2024-06-26 11:49:44 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "processIntegers", arguments = "{
  "arg0": [
    1,
    2
  ]
}" } for memoryId default
called processIntegers([1, 2])
2024-06-26 11:49:44 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: Success
2024-06-26 11:49:44 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"processIntegers\",\n    \"description\": \"Processes list of integers\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"List of integers to process\",\n        \"type\": \"array\",\n        \"items\": {\n          \"type\": \"integer\"\n        }\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"Process integers 1 and 2 together, do not separate them!"},{"role":"ASSISTANT","content":"Please provide result for processIntegers ({  \"arg0\": [    1,    2  ]})"},{"role":"USER","content":"Result  of processIntegers ({  \"arg0\": [    1,    2  ]}) is Success ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:49:45 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:45 GMT
Content-Length: 394
- body: {"model":"llama3","created_at":"2024-06-26T08:49:45.02301962Z","message":{"role":"assistant","content":"{\"name\": \"__conversational_response\", \"inputs\": {\"response\": \"The two integers processed successfully. What's next?\"}}"},"done":true,"total_duration":387290952,"load_duration":242742,"prompt_eval_count":63,"prompt_eval_duration":62477000,"eval_count":28,"eval_duration":321300000}
2024-06-26 11:49:45 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"USER","content":"What is the result of 1+1 ?"}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"stream":false}
2024-06-26 11:49:45 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:49:45 GMT
Content-Length: 293
- body: {"model":"llama3","created_at":"2024-06-26T08:49:45.214975336Z","message":{"role":"assistant","content":"The result of 1+1 is... 2!"},"done":true,"total_duration":186998971,"load_duration":227302,"prompt_eval_count":19,"prompt_eval_duration":69490000,"eval_count":13,"eval_duration":116710000}
2024-06-26 11:49:45 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"getTransactionAmount\",\n    \"description\": \"returns amount of a given transaction\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"ID of a transaction\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What are the amounts of transactions T001 and T002? First call getTransactionAmount for T001, then for T002. Do not answer before you know all amounts!"}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:50:03 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:50:03 GMT
Transfer-Encoding: chunked
- body: {"model":"llama3","created_at":"2024-06-26T08:50:03.358354163Z","message":{"role":"assistant","content":"{\"name\": \"getTransactionAmount\", \"inputs\": {\"arg0\": \"T001\"}}\n \n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n 
\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n 
\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n"},"done":true,"total_duration":18138509882,"load_duration":293362,"prompt_eval_count":414,"prompt_eval_duration":207893000,"eval_count":1634,"eval_duration":17927104000}
2024-06-26 11:50:03 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "getTransactionAmount", arguments = "{
  "arg0": "T001"
}" } for memoryId default
called getTransactionAmount(T001)
2024-06-26 11:50:03 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: 11.1
2024-06-26 11:50:03 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"getTransactionAmount\",\n    \"description\": \"returns amount of a given transaction\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"ID of a transaction\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What are the amounts of transactions T001 and T002? First call getTransactionAmount for T001, then for T002. Do not answer before you know all amounts!"},{"role":"ASSISTANT","content":"Please provide result for getTransactionAmount ({  \"arg0\": \"T001\"})"},{"role":"USER","content":"Result  of getTransactionAmount ({  \"arg0\": \"T001\"}) is 11.1 ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:50:03 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:50:03 GMT
Content-Length: 339
- body: {"model":"llama3","created_at":"2024-06-26T08:50:03.873378822Z","message":{"role":"assistant","content":"{\"name\": \"getTransactionAmount\", \"inputs\": {\"arg0\": \"T002\"}}"},"done":true,"total_duration":511934111,"load_duration":290423,"prompt_eval_count":442,"prompt_eval_duration":240238000,"eval_count":20,"eval_duration":267609000}
2024-06-26 11:50:03 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "getTransactionAmount", arguments = "{
  "arg0": "T002"
}" } for memoryId default
called getTransactionAmount(T002)
2024-06-26 11:50:03 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: 22.2
2024-06-26 11:50:03 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"getTransactionAmount\",\n    \"description\": \"returns amount of a given transaction\",\n    \"properties\": {\n      \"arg0\": {\n        \"description\": \"ID of a transaction\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What are the amounts of transactions T001 and T002? First call getTransactionAmount for T001, then for T002. Do not answer before you know all amounts!"},{"role":"ASSISTANT","content":"Please provide result for getTransactionAmount ({  \"arg0\": \"T001\"})"},{"role":"USER","content":"Result  of getTransactionAmount ({  \"arg0\": \"T001\"}) is 11.1 ."},{"role":"ASSISTANT","content":"Please provide result for getTransactionAmount ({  \"arg0\": \"T002\"})"},{"role":"USER","content":"Result  of getTransactionAmount ({  \"arg0\": \"T002\"}) is 22.2 ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:50:04 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:50:04 GMT
Content-Length: 417
- body: {"model":"llama3","created_at":"2024-06-26T08:50:04.433065152Z","message":{"role":"assistant","content":"{\"name\": \"__conversational_response\", \n\"inputs\": {\"response\": \"The amounts of transactions T001 and T002 are 11.1 and 22.2 respectively.\"}}"},"done":true,"total_duration":556991666,"load_duration":310873,"prompt_eval_count":52,"prompt_eval_duration":64378000,"eval_count":40,"eval_duration":488261000}
2024-06-26 11:50:04 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"add\",\n    \"description\": \"Calculates the sum of two numbers\",\n    \"properties\": {\n      \"arg1\": {\n        \"type\": \"integer\"\n      },\n      \"arg0\": {\n        \"type\": \"integer\"\n      }\n    },\n    \"required\": [\n      \"arg0\",\n      \"arg1\"\n    ]\n  },\n  {\n    \"name\": \"sqrt\",\n    \"description\": \"Calculates the square root of a number\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"number\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"stringLength\",\n    \"description\": \"Calculates the length of a string\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the square root of the sum of the numbers of letters in the words hello and world. "}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:50:05 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:50:05 GMT
Content-Length: 360
- body: {"model":"llama3","created_at":"2024-06-26T08:50:05.705897342Z","message":{"role":"assistant","content":"{ \"name\": \"add\", \n  \"inputs\": { \"arg0\": 5, \"arg1\": 5 } }\n\n\n\n  \n\n\n\n\n\n "},"done":true,"total_duration":1267361720,"load_duration":249152,"prompt_eval_count":496,"prompt_eval_duration":274066000,"eval_count":34,"eval_duration":990095000}
2024-06-26 11:50:05 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "add", arguments = "{
  "arg0": 5,
  "arg1": 5
}" } for memoryId default
2024-06-26 11:50:05 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: 10
2024-06-26 11:50:05 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"add\",\n    \"description\": \"Calculates the sum of two numbers\",\n    \"properties\": {\n      \"arg1\": {\n        \"type\": \"integer\"\n      },\n      \"arg0\": {\n        \"type\": \"integer\"\n      }\n    },\n    \"required\": [\n      \"arg0\",\n      \"arg1\"\n    ]\n  },\n  {\n    \"name\": \"sqrt\",\n    \"description\": \"Calculates the square root of a number\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"number\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"stringLength\",\n    \"description\": \"Calculates the length of a string\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the square root of the sum of the numbers of letters in the words hello and world. "},{"role":"ASSISTANT","content":"Please provide result for add ({  \"arg0\": 5,  \"arg1\": 5})"},{"role":"USER","content":"Result  of add ({  \"arg0\": 5,  \"arg1\": 5}) is 10 ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:50:06 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:50:06 GMT
Content-Length: 316
- body: {"model":"llama3","created_at":"2024-06-26T08:50:06.100965052Z","message":{"role":"assistant","content":"{\"name\": \"sqrt\", \"inputs\": {\"arg0\": 10}}"},"done":true,"total_duration":392619167,"load_duration":429484,"prompt_eval_count":60,"prompt_eval_duration":152737000,"eval_count":17,"eval_duration":234865000}
2024-06-26 11:50:06 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: About to execute ToolExecutionRequest { id = null, name = "sqrt", arguments = "{
  "arg0": 10
}" } for memoryId default
2024-06-26 11:50:06 [main] dev.langchain4j.agent.tool.DefaultToolExecutor.execute()
DEBUG: Tool execution result: 3.1622776601683795
2024-06-26 11:50:06 [main] dev.langchain4j.model.ollama.OllamaRequestLoggingInterceptor.log()
DEBUG: Request:
- method: POST
- url: http://localhost:32769/api/chat
- headers: 
- body: {"model":"llama3","messages":[{"role":"SYSTEM","content":"You are a helpful AI assistant responding to user requests.\n\nYou have access to the following tools, and only those tools:\n[\n  {\n    \"name\": \"add\",\n    \"description\": \"Calculates the sum of two numbers\",\n    \"properties\": {\n      \"arg1\": {\n        \"type\": \"integer\"\n      },\n      \"arg0\": {\n        \"type\": \"integer\"\n      }\n    },\n    \"required\": [\n      \"arg0\",\n      \"arg1\"\n    ]\n  },\n  {\n    \"name\": \"sqrt\",\n    \"description\": \"Calculates the square root of a number\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"number\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"stringLength\",\n    \"description\": \"Calculates the length of a string\",\n    \"properties\": {\n      \"arg0\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"arg0\"\n    ]\n  },\n  {\n    \"name\": \"__conversational_response\",\n    \"description\": \"Respond conversationally if no other tools should be called for a given query.\",\n    \"properties\": {\n      \"response\": {\n        \"description\": \"Conversational response to the user.\",\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\n      \"response\"\n    ]\n  }\n]\n\nBreak down complex request into sequential unitary tool calls.\nUse previous messages to avoid asking twice the same tool and select the most appropriate tool.\n\nYou can not select the same tool with same properties twice.\n, use \"__conversational_response\" tool.\nWhen you have gathered enough information or if tool have succeed, use \"__conversational_response\" tool.\n    Ex: {\"name\": \"__conversational_response\", inputs: {\"response\": \"The amount of transaction ...\"} }\n\nRespond only with a JSON object containing required fields:\n    - \"name\": \u003crequired selected tool name\u003e\n    - \"inputs\": \u003crequired selected tool properties, matching the tool\u0027s JSON schema.\n        Do not use tool definition in inputs. Ex: { \"arg0\": 5} \u003e\n\nIf the user request does not imply a response, respond with what have been done.\n"},{"role":"USER","content":"What is the square root of the sum of the numbers of letters in the words hello and world. "},{"role":"ASSISTANT","content":"Please provide result for add ({  \"arg0\": 5,  \"arg1\": 5})"},{"role":"USER","content":"Result  of add ({  \"arg0\": 5,  \"arg1\": 5}) is 10 ."},{"role":"ASSISTANT","content":"Please provide result for sqrt ({  \"arg0\": 10})"},{"role":"USER","content":"Result  of sqrt ({  \"arg0\": 10}) is 3.1622776601683795 ."}],"options":{"temperature":0.0,"num_predict":2048,"num_ctx":2048},"format":"json","stream":false}
2024-06-26 11:50:06 [main] dev.langchain4j.model.ollama.OllamaResponseLoggingInterceptor.log()
DEBUG: Response:
- status code: 200
- headers: Content-Type: application/json; charset=utf-8
Date: Wed, 26 Jun 2024 08:50:06 GMT
Content-Length: 450
- body: {"model":"llama3","created_at":"2024-06-26T08:50:06.710629033Z","message":{"role":"assistant","content":"{\"name\": \"__conversational_response\", \"inputs\": {\"response\": \"The square root of the sum of the numbers of letters in the words 'hello' and 'world' is approximately 3.16.\"}}"},"done":true,"total_duration":607463331,"load_duration":279073,"prompt_eval_count":52,"prompt_eval_duration":68096000,"eval_count":46,"eval_duration":534802000}

Do you have specific environment variables or flags for Java, like -parameters or others?

Nope
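
(Side note for anyone reproducing these logs: the arg0/arg1 names in the tool schemas are what reflection reports when code is compiled without -parameters, which is why they appear in the prompts above. A quick illustrative check, not from the PR:)

import java.lang.reflect.Method;

// When compiled without -parameters, reflection reports synthetic names like
// arg0; with the flag, the real parameter name is preserved.
public class ParamNameCheck {

    double sqrt(double value) {
        return Math.sqrt(value);
    }

    public static void main(String[] args) throws Exception {
        Method m = ParamNameCheck.class.getDeclaredMethod("sqrt", double.class);
        System.out.println(m.getParameters()[0].getName()); // "arg0" without -parameters, "value" with it
    }
}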

@geoand
Collaborator

geoand commented Jun 26, 2024

In any case, I would really really like to have a real standalone example so I can run it against an Ollama server

@humcqc
Contributor Author

humcqc commented Jun 26, 2024

In any case, I would really really like to have a real standalone example so I can run it against an Ollama server

I've added a simple test:
langchain4j/langchain4j@638871b#diff-fb50d38a48b1c6cc43ff4d358207e38b2b1164b59868bcc5f280f47a9a041586

I will update the Quarkus PR to use langchain4j 0.32.0-SNAPSHOT, where there is a simple Quarkus test doing the same.

@geoand
Collaborator

geoand commented Jun 27, 2024

🙏🏼

@geoand
Collaborator

geoand commented Jun 27, 2024

I've added a simple test:
langchain4j/langchain4j@638871b#diff-fb50d38a48b1c6cc43ff4d358207e38b2b1164b59868bcc5f280f47a9a041586

I actually want some code that I can use, not a test environment - just a sample Ollama application involving tools that I can run against an Ollama server manually.
The reason I want this is so I can go through the calls and see what's going on.

@humcqc
Contributor Author

humcqc commented Jun 27, 2024 via email
