Commit 6e597c7 (2 parents: 8375109 + 21268f0)
Author: kqlio67

Add DuckDuckGo provider, refine Copilot, and update docs and Docker usage.

- Added DuckDuckGo provider
- Refined Copilot logic
- Updated Docker usage docs
- Introduced new hidden responses
- Minor fixes and improvements

15 files changed: +187 −83 lines


README.md

Lines changed: 20 additions & 19 deletions

@@ -97,31 +97,31 @@ Is your site on this repository and you want to take it down? Send an email to t
 1. **Install Docker:** [Download and install Docker](https://docs.docker.com/get-docker/).
 2. **Set Up Directories:** Before running the container, make sure the necessary data directories exist or can be created. For example, you can create and set ownership on these directories by running:
 ```bash
-mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_images
-sudo chown -R 1200:1201 ${PWD}/har_and_cookies ${PWD}/generated_images
+mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_images
+sudo chown -R 1200:1201 ${PWD}/har_and_cookies ${PWD}/generated_images
 ```
 3. **Run the Docker Container:** Use the following commands to pull the latest image and start the container (Only x64):
 ```bash
-docker pull hlohaus789/g4f
-docker run -p 8080:8080 -p 7900:7900 \
-  --shm-size="2g" \
-  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
-  -v ${PWD}/generated_images:/app/generated_images \
-  hlohaus789/g4f:latest
+docker pull hlohaus789/g4f
+docker run -p 8080:8080 -p 7900:7900 \
+  --shm-size="2g" \
+  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
+  -v ${PWD}/generated_images:/app/generated_images \
+  hlohaus789/g4f:latest
 ```
 
 4. **Running the Slim Docker Image:** And use the following commands to run the Slim Docker image. This command also updates the `g4f` package at startup and installs any additional dependencies: (x64 and arm64)
 ```bash
-mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_images
-chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_images
-docker run \
-  -p 1337:1337 \
-  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
-  -v ${PWD}/generated_images:/app/generated_images \
-  hlohaus789/g4f:latest-slim \
-  rm -r -f /app/g4f/ \
-  && pip install -U g4f[slim] \
-  && python -m g4f --debug
+mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_images
+chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_images
+docker run \
+  -p 1337:1337 \
+  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
+  -v ${PWD}/generated_images:/app/generated_images \
+  hlohaus789/g4f:latest-slim \
+  rm -r -f /app/g4f/ \
+  && pip install -U g4f[slim] \
+  && python -m g4f --debug
 ```
 
 5. **Access the Client Interface:**

@@ -248,7 +248,8 @@ Run the Web UI on your smartphone for easy access on the go. Check out the dedic
 - **File API from G4F:** [/docs/file](docs/file.md)
 - **PydanticAI and LangChain Integration for G4F:** [/docs/pydantic_ai](docs/pydantic_ai.md)
 - **Legacy API with python modules:** [/docs/legacy](docs/legacy.md)
-
+- **G4F - Media Documentation** [/docs/media](/docs/media.md) *(New)*
+
 ---
 
 ## 🔗 Powered by gpt4free
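The directory-setup step in the README diff above can be mirrored in Python. The `prepare_dirs` helper below is hypothetical (not part of g4f); it is a sketch of what the `mkdir -p` commands do for the two data directories the container mounts:

```python
import os
import tempfile

def prepare_dirs(base, names=("har_and_cookies", "generated_images")):
    """Hypothetical helper mirroring `mkdir -p` for the mounted data dirs."""
    paths = []
    for name in names:
        path = os.path.join(base, name)
        os.makedirs(path, exist_ok=True)  # idempotent, like mkdir -p
        paths.append(path)
    return paths

base = tempfile.mkdtemp()
created = prepare_dirs(base)
print(all(os.path.isdir(p) for p in created))  # True
```

Ownership (`chown -R 1200:1201` or `1000:1000`) still has to match the UID/GID the container runs as, which Python alone cannot decide for you.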

docs/authentication.md

Lines changed: 11 additions & 4 deletions

@@ -117,6 +117,7 @@ asyncio.run(main())
 ### **Multiple Providers with API Keys**
 ```python
 import os
+import g4f.Provider
 from g4f.client import Client
 
 # Using environment variables

@@ -126,10 +127,10 @@ providers = {
 }
 
 for provider_name, api_key in providers.items():
-    client = Client(provider=f"g4f.Provider.{provider_name}", api_key=api_key)
+    client = Client(provider=getattr(g4f.Provider, provider_name), api_key=api_key)
     response = client.chat.completions.create(
         model="claude-3.5-sonnet",
-        messages=[{"role": "user", "content": f"Hello from {provider_name}!"}]
+        messages=[{"role": "user", "content": f"Hello to {provider_name}!"}]
     )
     print(f"{provider_name}: {response.choices[0].message.content}")
 ```

@@ -144,16 +145,22 @@ for provider_name, api_key in providers.items():
 - Firefox: **Storage** → **Cookies**
 
 ```python
+from g4f.client import Client
 from g4f.Provider import Gemini
 
-# Initialize with cookies
+# Using with cookies
 client = Client(
     provider=Gemini,
+)
+response = client.chat.completions.create(
+    model="",  # Default model
+    messages="Hello Google",
     cookies={
         "__Secure-1PSID": "your_cookie_value_here",
-        "__Secure-1PSIDTS": "timestamp_value_here"
+        "__Secure-1PSIDTS": "your_cookie_value_here"
     }
 )
+print(f"Gemini: {response.choices[0].message.content}")
 ```
 
 ---
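The switch from `f"g4f.Provider.{provider_name}"` to `getattr(g4f.Provider, provider_name)` matters because the client needs the provider class object itself, not a string naming it. A minimal sketch of the difference, using a stand-in namespace instead of the real `g4f.Provider` module:

```python
from types import SimpleNamespace

# Stand-in for the g4f.Provider module, with two dummy provider classes.
Provider = SimpleNamespace(
    OpenaiChat=type("OpenaiChat", (), {}),
    Gemini=type("Gemini", (), {}),
)

name = "Gemini"
as_string = f"Provider.{name}"      # just a str; unusable as a provider class
as_class = getattr(Provider, name)  # the actual class object

print(type(as_string).__name__)  # str
print(as_class.__name__)         # Gemini
```

`getattr` also fails loudly (`AttributeError`) on a typo in the provider name, instead of producing a plausible-looking but meaningless string.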

docs/gui.md

Lines changed: 3 additions & 3 deletions

@@ -69,9 +69,9 @@ The G4F GUI is a self-contained, user-friendly interface designed for interactin
 - **Basic Authentication**
   You can set a password for Basic Authentication using the `--g4f-api-key` argument when starting the web server.
 
-### 9. **Continue Button (ChatGPT & HuggingChat)**
+### 9. **Continue Button**
 - **Automatic Detection of Truncated Responses**
-  When using **ChatGPT** or **HuggingChat** providers, responses may occasionally be cut off or truncated.
+  When using providers, responses may occasionally be cut off or truncated.
 - **Continue Button**
   If the GUI detects that the response ended abruptly, a **Continue** button appears directly below the truncated message. Clicking this button sends a follow-up request to the same provider and model, retrieving the rest of the message.
 - **Seamless Conversation Flow**

@@ -154,7 +154,7 @@ http://localhost:8080/chat/
 - **Text/Code:** The generated response appears in the conversation window.
 - **Images:** Generated images are displayed as thumbnails. Click on any thumbnail to view it in full size within the lightbox.
 
-5. **Continue Button (ChatGPT & HuggingChat)**
+5. **Continue Button**
 
 - If a response is truncated, a **Continue** button will appear under the last message. Clicking it asks the same provider to continue the response from where it ended.
 
 6. **Manage Conversations**
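The GUI's actual truncation detection is not part of this diff; as a rough illustration of the idea behind the Continue button, a heuristic like the following could flag responses that stop inside a code fence or mid-sentence (an assumption for illustration, not the g4f implementation):

```python
def looks_truncated(text: str) -> bool:
    # An odd number of ``` fences means the model stopped inside a code block.
    if text.count("```") % 2 == 1:
        return True
    # Otherwise treat text not ending in sentence punctuation as cut off.
    return not text.rstrip().endswith((".", "!", "?"))

print(looks_truncated("Here is the code:\n```python\nprint("))  # True
print(looks_truncated("All done."))                             # False
```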

docs/media.md

Lines changed: 2 additions & 2 deletions

@@ -175,7 +175,7 @@ from g4f.Provider import HuggingFaceMedia
 async def main():
     client = AsyncClient(
         provider=HuggingFaceMedia,
-        api_key="hf_***"  # Your API key here
+        api_key=os.getenv("HF_TOKEN")  # Your API key here
     )
 
     video_models = client.models.get_video()

@@ -214,7 +214,7 @@ from g4f.Provider import HuggingFaceMedia
 async def main():
     client = AsyncClient(
         provider=HuggingFaceMedia,
-        api_key=os.getenv("HUGGINGFACE_API_KEY")  # Your API key here
+        api_key=os.getenv("HF_TOKEN")  # Your API key here
     )
 
     video_models = client.models.get_video()
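Both hunks converge on reading the key from a single `HF_TOKEN` environment variable instead of a hard-coded placeholder or a second variable name. The pattern in isolation (the `get_hf_token` helper is illustrative, not g4f API):

```python
import os

def get_hf_token(default=None):
    # Reading the key from the environment keeps secrets out of docs and code.
    return os.getenv("HF_TOKEN", default)

os.environ["HF_TOKEN"] = "hf_dummy_value"  # illustration only
print(get_hf_token())  # hf_dummy_value
```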

g4f/Provider/Cloudflare.py

Lines changed: 2 additions & 2 deletions

@@ -83,9 +83,9 @@ async def create_async_generator(
                 pass
         data = {
             "messages": [{
-                "role":"user",
+                **message,
                 "content": message["content"] if isinstance(message["content"], str) else "",
-                "parts": [{"type":"text", "text":message["content"]}] if isinstance(message["content"], str) else message["content"]} for message in messages],
+                "parts": [{"type":"text", "text":message["content"]}] if isinstance(message["content"], str) else message} for message in messages],
             "lora": None,
             "model": model,
             "max_tokens": max_tokens,
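The revised payload construction spreads each original message into the outgoing dict (preserving its real `role` instead of forcing `"user"`) and derives a `parts` list. The comprehension can be read in isolation; this is a standalone sketch, not the provider's code:

```python
def build_messages(messages):
    # String content becomes one text part; non-string (structured) content
    # keeps the whole message as the parts payload, mirroring the new diff.
    return [{
        **message,
        "content": message["content"] if isinstance(message["content"], str) else "",
        "parts": [{"type": "text", "text": message["content"]}]
            if isinstance(message["content"], str) else message,
    } for message in messages]

out = build_messages([{"role": "assistant", "content": "Hi"}])
print(out[0]["role"], out[0]["parts"])  # assistant [{'type': 'text', 'text': 'Hi'}]
```

Spreading `**message` first means explicitly listed keys (`content`, `parts`) override any same-named keys from the original dict.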

g4f/Provider/Copilot.py

Lines changed: 14 additions & 6 deletions

@@ -24,7 +24,7 @@
 from ..typing import CreateResult, Messages, MediaListType
 from ..errors import MissingRequirementsError, NoValidHarFileError, MissingAuthError
 from ..requests.raise_for_status import raise_for_status
-from ..providers.response import BaseConversation, JsonConversation, RequestLogin, ImageResponse
+from ..providers.response import BaseConversation, JsonConversation, RequestLogin, ImageResponse, FinishReason, SuggestedFollowups
 from ..providers.asyncio import get_running_loop
 from ..tools.media import merge_media
 from ..requests import get_nodriver

@@ -46,10 +46,13 @@ class Copilot(AbstractProvider, ProviderModelMixin):
     supports_stream = True
 
     default_model = "Copilot"
-    models = [default_model]
+    models = [default_model, "Think Deeper"]
     model_aliases = {
         "gpt-4": default_model,
-        "o1": default_model,
+        "gpt-4o": default_model,
+        "o1": "Think Deeper",
+        "reasoning": "Think Deeper",
+        "dall-e-3": default_model
     }
 
     websocket_url = "wss://copilot.microsoft.com/c/api/chat?api-version=2"

@@ -75,10 +78,10 @@ def create_completion(
     ) -> CreateResult:
         if not has_curl_cffi:
             raise MissingRequirementsError('Install or update "curl_cffi" package | pip install -U curl_cffi')
-
+        model = cls.get_model(model)
         websocket_url = cls.websocket_url
         headers = None
-        if cls.needs_auth or media is not None:
+        if cls._access_token:
            if api_key is not None:
                cls._access_token = api_key
            if cls._access_token is None:

@@ -163,14 +166,15 @@ def create_completion(
             # "token": clarity_token,
             # "method":"clarity"
             # }).encode(), CurlWsFlag.TEXT)
+            wss.send(json.dumps({"event":"setOptions","supportedCards":["weather","local","image","sports","video","ads","finance"],"ads":{"supportedTypes":["multimedia","product","tourActivity","propertyPromotion","text"]}}));
             wss.send(json.dumps({
                 "event": "send",
                 "conversationId": conversation_id,
                 "content": [*uploaded_images, {
                     "type": "text",
                     "text": prompt,
                 }],
-                "mode": "chat"
+                "mode": "reasoning" if "Think" in model else "chat",
             }).encode(), CurlWsFlag.TEXT)
 
             is_started = False

@@ -193,6 +197,10 @@ def create_completion(
                 elif msg.get("event") == "imageGenerated":
                     yield ImageResponse(msg.get("url"), image_prompt, {"preview": msg.get("thumbnailUrl")})
                 elif msg.get("event") == "done":
+                    yield FinishReason("stop")
+                    break
+                elif msg.get("event") == "suggestedFollowups":
+                    yield SuggestedFollowups(msg.get("suggestions"))
                     break
                 elif msg.get("event") == "replaceText":
                     yield msg.get("text")
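The expanded alias table routes reasoning-oriented names to the new "Think Deeper" model, and the websocket payload derives its `mode` from the resolved name. A condensed sketch of that resolution logic (the `resolve` helper is illustrative; the real lookup lives in `ProviderModelMixin.get_model`):

```python
default_model = "Copilot"
models = [default_model, "Think Deeper"]
model_aliases = {
    "gpt-4": default_model,
    "gpt-4o": default_model,
    "o1": "Think Deeper",
    "reasoning": "Think Deeper",
    "dall-e-3": default_model,
}

def resolve(name):
    # Map an alias to a concrete model, then derive the chat mode the
    # provider sends: "reasoning" for Think Deeper, "chat" otherwise.
    model = name if name in models else model_aliases.get(name, default_model)
    mode = "reasoning" if "Think" in model else "chat"
    return model, mode

print(resolve("o1"))     # ('Think Deeper', 'reasoning')
print(resolve("gpt-4"))  # ('Copilot', 'chat')
```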

g4f/Provider/DuckDuckGo.py

Lines changed: 85 additions & 0 deletions

@@ -0,0 +1,85 @@
+from __future__ import annotations
+
+import asyncio
+
+try:
+    from duckduckgo_search import DDGS
+    from duckduckgo_search.exceptions import DuckDuckGoSearchException, RatelimitException, ConversationLimitException
+    has_requirements = True
+except ImportError:
+    has_requirements = False
+try:
+    import nodriver
+    has_nodriver = True
+except ImportError:
+    has_nodriver = False
+
+from ..typing import AsyncResult, Messages
+from ..requests import get_nodriver
+from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
+from .helper import get_last_user_message
+
+class DuckDuckGo(AsyncGeneratorProvider, ProviderModelMixin):
+    label = "Duck.ai (duckduckgo_search)"
+    url = "https://duckduckgo.com/aichat"
+    api_base = "https://duckduckgo.com/duckchat/v1/"
+
+    working = False
+    supports_stream = True
+    supports_system_message = True
+    supports_message_history = True
+
+    default_model = "gpt-4o-mini"
+    models = [default_model, "meta-llama/Llama-3.3-70B-Instruct-Turbo", "claude-3-haiku-20240307", "o3-mini", "mistralai/Mistral-Small-24B-Instruct-2501"]
+
+    ddgs: DDGS = None
+
+    model_aliases = {
+        "gpt-4": "gpt-4o-mini",
+        "llama-3.3-70b": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
+        "claude-3-haiku": "claude-3-haiku-20240307",
+        "mixtral-small-24b": "mistralai/Mistral-Small-24B-Instruct-2501",
+    }
+
+    @classmethod
+    async def create_async_generator(
+        cls,
+        model: str,
+        messages: Messages,
+        proxy: str = None,
+        timeout: int = 60,
+        **kwargs
+    ) -> AsyncResult:
+        if not has_requirements:
+            raise ImportError("duckduckgo_search is not installed. Install it with `pip install duckduckgo-search`.")
+        if cls.ddgs is None:
+            cls.ddgs = DDGS(proxy=proxy, timeout=timeout)
+            if has_nodriver:
+                await cls.nodriver_auth(proxy=proxy)
+        model = cls.get_model(model)
+        for chunk in cls.ddgs.chat_yield(get_last_user_message(messages), model, timeout):
+            yield chunk
+
+    @classmethod
+    async def nodriver_auth(cls, proxy: str = None):
+        browser, stop_browser = await get_nodriver(proxy=proxy)
+        try:
+            page = browser.main_tab
+            def on_request(event: nodriver.cdp.network.RequestWillBeSent, page=None):
+                if cls.api_base in event.request.url:
+                    if "X-Vqd-4" in event.request.headers:
+                        cls.ddgs._chat_vqd = event.request.headers["X-Vqd-4"]
+                    if "X-Vqd-Hash-1" in event.request.headers:
+                        cls.ddgs._chat_vqd_hash = event.request.headers["X-Vqd-Hash-1"]
+                    if "F-Fe-Version" in event.request.headers:
+                        cls.ddgs._chat_xfe = event.request.headers["F-Fe-Version"]
+            await page.send(nodriver.cdp.network.enable())
+            page.add_handler(nodriver.cdp.network.RequestWillBeSent, on_request)
+            page = await browser.get(cls.url)
+            while True:
+                if cls.ddgs._chat_vqd:
+                    break
+                await asyncio.sleep(1)
+            await page.close()
+        finally:
+            stop_browser()
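The `nodriver_auth` method above blocks until the CDP request handler has captured the `X-Vqd-4` header from a chat request. That wait can be sketched as a small asyncio polling loop with a bound on attempts (a standalone mock; `FakeSession` stands in for the `DDGS` instance):

```python
import asyncio

class FakeSession:
    # Stand-in for DDGS: the browser-side handler fills in _chat_vqd later.
    _chat_vqd = None

async def wait_for_vqd(session, interval=0.01, attempts=100):
    # Poll until the header has been captured, like the loop in nodriver_auth,
    # but bounded so a failed login cannot hang forever.
    for _ in range(attempts):
        if session._chat_vqd:
            return session._chat_vqd
        await asyncio.sleep(interval)
    raise TimeoutError("X-Vqd-4 was never captured")

async def main():
    session = FakeSession()

    async def capture():
        await asyncio.sleep(0.03)  # simulate the intercepted request arriving
        session._chat_vqd = "4-dummy-token"

    asyncio.get_running_loop().create_task(capture())
    return await wait_for_vqd(session)

print(asyncio.run(main()))  # 4-dummy-token
```

The provider's own loop polls every second with no upper bound; adding a cap like `attempts` is one way to surface a stuck login instead of waiting indefinitely.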

g4f/Provider/__init__.py

Lines changed: 1 addition & 0 deletions

@@ -40,6 +40,7 @@
     from .Copilot import Copilot
     from .DDG import DDG
     from .DeepInfraChat import DeepInfraChat
+    from .DuckDuckGo import DuckDuckGo
     from .Dynaspark import Dynaspark
 except ImportError as e:
     debug.error("Providers not loaded (A-D):", e)
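The new import sits inside the package's existing try/except block, so an environment without `duckduckgo_search` degrades gracefully instead of breaking every A-D provider. The pattern in isolation (the `optional_import` helper is illustrative, not g4f code):

```python
import importlib

def optional_import(module, attr):
    # Mirror of the guarded provider imports: a failed import is reported
    # and yields None instead of propagating and killing the package.
    try:
        return getattr(importlib.import_module(module), attr)
    except ImportError as e:
        print("Providers not loaded (A-D):", e)
        return None

print(optional_import("json", "dumps") is not None)    # True
print(optional_import("definitely_missing_mod", "X"))  # None
```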

g4f/Provider/needs_auth/CopilotAccount.py

Lines changed: 0 additions & 5 deletions

@@ -19,11 +19,6 @@ class CopilotAccount(AsyncAuthedProvider, Copilot):
     parent = "Copilot"
     default_model = "Copilot"
     default_vision_model = default_model
-    models = [default_model]
-    image_models = models
-    model_aliases = {
-        "dall-e-3": default_model
-    }
 
     @classmethod
     async def on_auth_async(cls, proxy: str = None, **kwargs) -> AsyncIterator:
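Deleting the re-declared `models`, `image_models`, and `model_aliases` works because `CopilotAccount` already inherits from `Copilot`: Python's attribute lookup falls through to the base class, so the lists stay in sync with a single definition. A reduced sketch (class bodies trimmed to the relevant attributes):

```python
class Copilot:
    default_model = "Copilot"
    models = [default_model, "Think Deeper"]
    model_aliases = {"dall-e-3": default_model}

class CopilotAccount(Copilot):
    # No models/model_aliases here: lookups resolve on the base class,
    # so updates to Copilot's lists are picked up automatically.
    parent = "Copilot"

print(CopilotAccount.models)                     # ['Copilot', 'Think Deeper']
print(CopilotAccount.model_aliases["dall-e-3"])  # Copilot
```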

g4f/api/stubs.py

Lines changed: 1 addition & 0 deletions

@@ -68,6 +68,7 @@ class ImageGenerationConfig(BaseModel):
     aspect_ratio: Optional[str] = None
     n: Optional[int] = None
     negative_prompt: Optional[str] = None
+    resolution: Optional[str] = None
 
 class ProviderResponseModel(BaseModel):
     id: str
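The new `resolution` field follows the same optional-with-`None`-default shape as its neighbours, so omitting it from a request remains valid. A reduced stand-in using a dataclass (the real model is a Pydantic `BaseModel`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageGenerationConfig:
    # Trimmed to the fields around the change; all optional, defaulting to None.
    aspect_ratio: Optional[str] = None
    n: Optional[int] = None
    negative_prompt: Optional[str] = None
    resolution: Optional[str] = None  # new in this commit

cfg = ImageGenerationConfig(resolution="1024x1024")
print(cfg.resolution, cfg.n)  # 1024x1024 None
```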
