
Parameters on raw HTTP Target #824

Open
herick-santos-ey opened this issue Mar 24, 2025 · 5 comments
Labels: bug (Something isn't working), question (Further information is requested)

Comments


herick-santos-ey commented Mar 24, 2025

Hi,

I was trying to connect to the HTTP prompt target but was unsuccessful: the raw_http_request did not pick up my header field (user-uuid), which differs from what the documentation shows. In this case I am not using an api-key to access my LLM.

As a tip for the future, it would help to allow arbitrary fields to be passed as request headers.

I also tried passing the parameter directly in the OpenAIChatTarget call, without success.

The API behind the URL is working; I have tested it in other applications (Swagger and Postman) and with the requests library as well.

In the end, the most I could get was a 404 Not Found error, presumably because this parameter is not being passed.

Error:

HTTPStatusError: Client error '404 Not Found' for url 'http://acs-assist-benchmark-llm.nia.hm.bb.com.br/chatCompletion'

raw request:

raw_http_request = f"""
POST {url}
Content-Type: application/json
user-uuid: {user_uuid}

{{
    "data": {{
        "solicitacao": {{
            "messages": [
                {{
                        "role": "user",
                    "content": "{{prompt}}"
                }}
            ]
        }},
        "config": {{
            "temperature": 0.2,
            "max_tokens": 256,
            "top_p": 0.95
        }}
    }}
}}

"""

Passing the headers directly:

red_teaming_chat = OpenAIChatTarget(api_version=None, headers={"user-uuid": "some_number"})

scorer = SelfAskTrueFalseScorer(
    chat_target=OpenAIChatTarget(api_version=None, headers={"user-uuid": "some_number"}),
    true_false_question_path=Path("classifiers/check_fraud_classifier.yaml"),
)

@romanlutz
Contributor

@herick-santos-ey I'm a little confused. Are you using OpenAIChatTarget or HTTPTarget?

OpenAIChatTarget does allow for headers, so I'm not entirely sure what you're asking. Perhaps if you have a comparison of what you send manually (without PyRIT) vs. what PyRIT sends then we can compare and see if it can be addressed. Thanks!
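
If it helps, one way to produce the "manual" side of that comparison is to point the same request at an echo endpoint and record exactly which headers arrive. This is just a sketch (it assumes an echo service such as https://httpbin.org/anything, which returns the headers it received):

import json

import requests

# Send the same headers/payload to an echo service and record what actually arrives
echo_response = requests.post(
    "https://httpbin.org/anything",
    headers={"user-uuid": "some_number", "Content-Type": "application/json"},
    data=json.dumps({"data": {"solicitacao": {"messages": [{"role": "user", "content": "test"}]}}}),
)
print(echo_response.json()["headers"])  # the headers the server actually received

Pointing the raw_http_request at the same endpoint would then show the PyRIT side, and the two header sets can be diffed.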

romanlutz added the question (Further information is requested) label Mar 25, 2025

herick-santos-ey commented Mar 25, 2025

@romanlutz thanks for the quick response. As in the documentation link below, I am using the HTTPTarget to reach my LLM target.

OpenAIChatTarget, as per the documentation, is only used to define the red_teaming_chat and the scorer.

Here is the link to the documentation.

https://azure.github.io/PyRIT/code/targets/7_http_target.html

Below is my code without PyRIT:

import requests
import json
import logging
from pathlib import Path

# Logging set to lower levels will print a lot more diagnostic information about what's happening.
logging.basicConfig(level=logging.DEBUG)

url = "http://acs-assist-benchmark-llm.nia.servicos.bb.com.br/chatCompletion"

payload = json.dumps({
  "data": {
    "solicitacao": {
      "messages": [
        {
          "role": "user",
          "content": "Olá"
        }
      ]
    },
    "config": {
      "temperature": 0.2,
      "max_tokens": 256,
      "top_p": 0.95
    }
  }
})
headers = {
  'user-uuid': 'some-number',
  'Content-Type': 'application/json',
}

response = requests.request("POST", url, headers=headers, data=payload)

response_data = response.json()

response_data = response_data['data']

print(response_data)

Code with PyRIT:

import os

os.environ["OPENAI_CHAT_ENDPOINT"] = "http://acs-assist-benchmark-llm.nia.hm.bb.com.br/chatCompletion"

url = os.environ["OPENAI_CHAT_ENDPOINT"]
user_uuid = "some-number"

from pyrit.common import IN_MEMORY, initialize_pyrit
from pyrit.orchestrator import PromptSendingOrchestrator, RedTeamingOrchestrator
from pyrit.prompt_converter import SearchReplaceConverter
from pyrit.prompt_target import (
    HTTPTarget,
    OpenAIChatTarget,
    get_http_target_json_response_callback_function,
    get_http_target_regex_matching_callback_function,
)
from pyrit.score import SelfAskTrueFalseScorer

initialize_pyrit(memory_db_type=IN_MEMORY)

prompt = "Olá"  # Replace with the actual prompt message

raw_http_request = f"""
    POST {url}
    Content-Type: application/json
    user-uuid: {user_uuid}
    
    {{
        "data": {{
            "solicitacao": {{
                "messages": [
                    {{
                            "role": "user",
                        "content": "{{PROMPT}}"
                    }}
                ]
            }},
            "config": {{
                "temperature": 0.2,
                "max_tokens": 256,
                "top_p": 0.95
            }}
        }}
    }}
"""

print(raw_http_request)

import logging
from pathlib import Path

# Logging set to lower levels will print a lot more diagnostic information about what's happening.
logging.basicConfig(level=logging.DEBUG)

conversation_objective = "Olá"

# The red_teaming_chat is used to generate prompts that are sent to the target.
red_teaming_chat = OpenAIChatTarget(api_version=None, headers={"user-uuid": "some-number"})

scorer = SelfAskTrueFalseScorer(
    chat_target=OpenAIChatTarget(api_version=None, headers={"user-uuid": "some-number"}),
    true_false_question_path=Path("classifiers/check_fraud_classifier.yaml"),
)

# Build the response parser from the 'data' key, mirroring response_data['data'] in the requests example above
parsing_function = get_http_target_json_response_callback_function(key="data")

http_prompt_target = HTTPTarget(
    http_request=raw_http_request, prompt_regex_string="{PROMPT}", callback_function=parsing_function
)

# Note, like above, a converter is used to format the prompt to be json safe without new lines/carriage returns, etc
red_teaming_orchestrator = RedTeamingOrchestrator(
    adversarial_chat=red_teaming_chat,
    objective_target=http_prompt_target,
    objective_scorer=scorer,
    verbose=True,
    prompt_converters=[SearchReplaceConverter(pattern=r"(?! )\s", replace="")],
)

result = await red_teaming_orchestrator.run_attack_async(objective=conversation_objective)  # type: ignore
await result.print_conversation_async()  # type: ignore

What I'm trying to say is that it is probably not reading the user-uuid parameter passed in the raw request, so my request does not work. Is that what is happening?

@romanlutz
Contributor

OK, so to rephrase, you're saying the HTTPTarget isn't forwarding the headers on your request, correct?

Have you tried checking whether the following line (copied from HTTPTarget) has the correct header dict?

        header_dict, http_body, url, http_method, http_version = self.parse_raw_http_request(http_request_w_prompt)

That's the first thing I'd try. If you don't have it set up to debug this I can give it a look sometime this week.
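
For example, something along these lines from the notebook (an untested sketch; it only uses parse_raw_http_request as quoted above on the HTTPTarget you already construct):

header_dict, http_body, parsed_url, http_method, http_version = http_prompt_target.parse_raw_http_request(raw_http_request)
print(header_dict)  # should include 'user-uuid' if the header survived parsing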


herick-santos-ey commented Mar 25, 2025

Q: OK, so to rephrase, you're saying the HTTPTarget isn't forwarding the headers on your request, correct?

A: Yes, that's it.

Q: Have you tried checking whether the following line (copied from HTTPTarget) has the correct header dict?

A: I'm using PyRIT in a Jupyter Notebook and installed the library from pip (current version 0.70), so this line is in the file.

@romanlutz
Contributor

OK, yes, that's what I thought. Someone (probably me) will have to run this with an editable installation and print out the headers to debug.
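
Concretely, the kind of temporary debug print I mean (a sketch, placed right after the line quoted above inside HTTPTarget):

header_dict, http_body, url, http_method, http_version = self.parse_raw_http_request(http_request_w_prompt)
print(f"parsed headers: {header_dict}")  # temporary, just to confirm user-uuid survives parsing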

romanlutz added the bug (Something isn't working) label Mar 25, 2025