add workflow png and json #1127
base: main
Conversation
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (1)
onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md (1)
77-77: Consider enhancing the documentation with example outputs.
To help users better understand the impact of quantization on image quality, consider:
- Adding example output images for each optimization level (baseline, OneDiff, OneDiff Quant)
- Consistently formatting performance improvement percentages (some use parentheses, others use %)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md (4 hunks)
🔇 Additional comments (1)
onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md (1)
91-93: Verify the existence of referenced SD1.5 workflow files.
Let's ensure the referenced workflow files exist at the specified paths.
#!/bin/bash
# Check if the workflow files exist
echo "Checking SD1.5 workflow files..."
for file in workflows/sd15-{baseline,onediff,quant}.json comfyui_workflow_images/sd15-{baseline,onediff,quant}.png; do
if [ -f "../$file" ]; then
echo "✓ Found: $file"
else
echo "✗ Missing: $file"
fi
done
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (11)
onediff_comfy_nodes/workflows/sdxl-baseline.json (4)
2-31: Consider using a more generic seed value and adding documentation.
The KSampler configuration is functionally correct, but consider:
- Using a simpler seed value (e.g., 42) for better reproducibility in examples
- Adding comments to document the chosen sampler settings (euler/normal) and their implications
41-51: Review resolution and batch size settings for SDXL.
The current configuration might not be optimal for SDXL:
- SDXL is designed for 1024x1024 resolution; the current 512x512 setting might not showcase its full capability
- Batch size of 4 might be too demanding for some GPUs
Consider adjusting the configuration based on the target hardware requirements and SDXL's optimal settings.
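As a rough illustration (not part of this PR; the node is located by its class_type because the numeric node ID is an assumption here), the latent settings could be switched to SDXL's native resolution with a one-line jq patch:
#!/bin/bash
# Sketch: rewrite the EmptyLatentImage inputs in the SDXL baseline workflow
# to 1024x1024 with a single-image batch; the output filename is illustrative.
jq '(.[] | select(.class_type == "EmptyLatentImage") | .inputs)
      |= (.width = 1024 | .height = 1024 | .batch_size = 1)' \
  onediff_comfy_nodes/workflows/sdxl-baseline.json > sdxl-baseline-1024.json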
52-77: Clean up prompt formatting and consider using generic examples.
The CLIP text encode configuration has minor issues:
- The positive prompt contains consecutive commas which might affect consistency
- Consider using simpler, more generic prompts for a baseline example
- "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
+ "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle"
94-106: Consider more specific filename prefix and output path configuration.
The current save configuration might lead to file conflicts:
- Generic "ComfyUI" prefix might cause overwrites in repeated runs
- Consider adding model name (SDXL) to the prefix for better organization
- "filename_prefix": "ComfyUI",
+ "filename_prefix": "ComfyUI_SDXL",
onediff_comfy_nodes/workflows/sd15-baseline.json (2)
2-31: Document sampling configuration rationale.
The KSampler configuration uses:
- Euler sampler with normal scheduler
- 20 steps, CFG=8
- Fixed seed: 1004586347945964
Consider:
- Documenting why these specific parameters were chosen
- Adding performance benchmarks for this configuration
- Making the seed configurable or random for production use
1-107: Workflow structure is sound but needs more flexibility.
The workflow successfully implements a basic SD1.5 pipeline with all necessary components. However, it could benefit from:
- Configuration externalization
- Documentation of node relationships and data flow
- Error handling for missing models or invalid inputs
Consider creating a companion configuration file to make the workflow more maintainable and reusable across different environments.
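As a purely hypothetical sketch of that externalization (the environment variable names and output path are invented, and the lookup assumes the baseline workflow uses ComfyUI's stock CheckpointLoaderSimple and KSampler class types), the checkpoint name and seed could be injected at run time instead of being hard-coded:
#!/bin/bash
# Hypothetical sketch: produce a run-specific copy of the workflow with the
# checkpoint name and seed taken from the environment (random seed by default).
CKPT="${SD15_CKPT:-v1-5-pruned-emaonly.safetensors}"
SEED="${SD15_SEED:-$RANDOM}"
jq --arg ckpt "$CKPT" --argjson seed "$SEED" '
  (.[] | select(.class_type == "CheckpointLoaderSimple") | .inputs.ckpt_name) = $ckpt
  | (.[] | select(.class_type == "KSampler") | .inputs.seed) = $seed
' onediff_comfy_nodes/workflows/sd15-baseline.json > sd15-baseline.run.json
This would also cover the seed-configurability point raised in the sampling comment above.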
onediff_comfy_nodes/workflows/sd15-onediff.json (2)
43-68: Consider using more generic example prompts.
The current positive prompt is very specific to a particular scene. For a workflow template, consider using a more generic example that demonstrates the prompt structure.
- "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
+ "text": "a beautiful landscape, high quality, detailed",
85-97: Consider a more specific filename prefix.
Using "ComfyUI" as a prefix might lead to file conflicts when running multiple workflows. Consider using a more specific prefix like "sd15_onediff".
- "filename_prefix": "ComfyUI",
+ "filename_prefix": "sd15_onediff",
onediff_comfy_nodes/workflows/sdxl-quant.json (2)
43-68: Clean up prompt formatting.
The positive prompt contains redundant commas which might affect prompt parsing:
- "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
+ "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle"
85-97: Use a more descriptive filename prefix.
The current generic prefix "ComfyUI" might lead to file overwrites. Consider including model and configuration details:
- "filename_prefix": "ComfyUI",
+ "filename_prefix": "sdxl_quant",
onediff_comfy_nodes/workflows/sd15-quant.json (1)
69-97: Consider a more descriptive filename prefix.
The image processing pipeline is correctly configured, but the generic "ComfyUI" prefix might make it difficult to distinguish between different workflow outputs.
Consider using a more descriptive prefix like "sd15_quant" to indicate the model and quantization status.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (6)
- onediff_comfy_nodes/comfyui_workflow_images/sd15-baseline.png is excluded by !**/*.png
- onediff_comfy_nodes/comfyui_workflow_images/sd15-onediff.png is excluded by !**/*.png
- onediff_comfy_nodes/comfyui_workflow_images/sd15-quant.png is excluded by !**/*.png
- onediff_comfy_nodes/comfyui_workflow_images/sdxl-baseline.png is excluded by !**/*.png
- onediff_comfy_nodes/comfyui_workflow_images/sdxl-onediff.png is excluded by !**/*.png
- onediff_comfy_nodes/comfyui_workflow_images/sdxl-quant.png is excluded by !**/*.png
📒 Files selected for processing (6)
- onediff_comfy_nodes/workflows/sd15-baseline.json (1 hunks)
- onediff_comfy_nodes/workflows/sd15-onediff.json (1 hunks)
- onediff_comfy_nodes/workflows/sd15-quant.json (1 hunks)
- onediff_comfy_nodes/workflows/sdxl-baseline.json (1 hunks)
- onediff_comfy_nodes/workflows/sdxl-onediff.json (1 hunks)
- onediff_comfy_nodes/workflows/sdxl-quant.json (1 hunks)
🔇 Additional comments (14)
onediff_comfy_nodes/workflows/sdxl-baseline.json (2)
78-93: LGTM: VAE decoder configuration is properly structured.
The VAE decode node is correctly configured with appropriate connections to the sampler output and model VAE.
32-40: Verify checkpoint file availability and consider environment variations.
The checkpoint loader assumes the SDXL base model is available at a specific path. Consider:
- Adding error handling for missing model files
- Documenting the expected model location or making it configurable
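A minimal pre-flight check along those lines might look like the following (the models/checkpoints path assumes a default ComfyUI layout and is not taken from this PR):
#!/bin/bash
# Sketch: fail fast with a clear message if the checkpoint the workflow expects
# has not been downloaded yet.
CKPT="models/checkpoints/sd_xl_base_1.0.safetensors"
if [ ! -f "$CKPT" ]; then
  echo "Missing checkpoint: $CKPT" >&2
  echo "Download sd_xl_base_1.0.safetensors into models/checkpoints/ before loading the workflow." >&2
  exit 1
fi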
onediff_comfy_nodes/workflows/sd15-baseline.json (2)
52-77: Review prompt engineering for optimal results.
The current prompt configuration:
- Positive prompt combines multiple concepts: "beautiful scenery nature glass bottle landscape, purple galaxy bottle"
- Negative prompt is minimal: "text, watermark"
Consider:
- Separating the concepts for clearer intent
- Expanding the negative prompt to include common artifacts
- Testing different prompt combinations for optimal results
32-40: Verify checkpoint availability and document requirements.
The workflow depends on v1-5-pruned-emaonly.safetensors. Users need to ensure this specific model is available in their environment.
Consider:
- Documenting the model download instructions in the README
- Adding model file validation checks
- Making the checkpoint name configurable for flexibility
✅ Verification successful
Model download instructions are documented
The workflow's dependency on v1-5-pruned-emaonly.safetensors is properly documented in onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md, which includes the direct download command:
wget -O models/checkpoints/v1-5-pruned-emaonly.safetensors https://huggingface.co/sd-legacy/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify if documentation exists for model setup
rg -l "v1-5-pruned-emaonly.safetensors" "README*"
Length of output: 97
Script:
#!/bin/bash
# Let's check for any documentation files in the repository
fd -t f "README|INSTALL|SETUP|GUIDE" -i
# Also check for any model-related documentation or configuration
rg -l "checkpoint|model|safetensors" -g "!*.json"
# Check if there's any model validation or configuration logic in Python files
ast-grep --pattern 'def $_(self, $$$, checkpoint$_: $_)'
Length of output: 13495
Script:
#!/bin/bash
# Let's check specifically for SD1.5 model documentation and setup instructions
rg -l "v1-5-pruned-emaonly" onediff_comfy_nodes/docs
rg -l "v1-5-pruned-emaonly" onediff_comfy_nodes/README.md
# Check for model setup or validation logic in the nodes implementation
rg -l "CheckpointLoaderSimple" onediff_comfy_nodes/
# Look for any model validation or setup code
ast-grep --pattern 'class CheckpointLoaderSimple:
$$$'
Length of output: 565
Script:
#!/bin/bash
# Let's check the content of relevant documentation files that mention the model
rg -A 5 "v1-5-pruned-emaonly" onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md onediff_comfy_nodes/docs/ControlNet/README.md
# Check how CheckpointLoaderSimple is implemented
rg -A 10 "class CheckpointLoaderSimple" onediff_comfy_nodes/_nodes.py onediff_comfy_nodes/modules/oneflow/utils/loader_sample_tools.py
# Look for model setup instructions
rg -A 5 "model.*setup|checkpoint.*setup|model.*download" onediff_comfy_nodes/README.md onediff_comfy_nodes/docs/
Length of output: 2625
onediff_comfy_nodes/workflows/sdxl-onediff.json (2)
36-36: Consider memory usage with batch size 4.
A batch size of 4 with SDXL model might require significant VRAM. Consider reducing it if users experience memory issues.
1-108: Verify workflow node connections.
The workflow structure appears correct with proper node connections. Let's verify if this matches other SDXL workflows in the repository.
✅ Verification successful
Let me try a different approach to verify the workflow connections.
This script will help us verify:
- Which nodes are connected to the checkpoint loader
- What type of connections exist
- If the connections follow the expected pattern for SDXL workflows
Let me try one final verification approach focusing on the specific workflow file.
This script will show us all node connections in the specific workflow file, helping us verify if the connections are correct for an SDXL pipeline.
Workflow connections are correctly structured
The node connections follow the expected SDXL pipeline pattern:
- Checkpoint loader (10) provides model, clip, and vae to appropriate nodes
- KSampler (3) correctly receives all required inputs: model, positive/negative prompts, and latent image
- VAEDecode (8) properly connects to sampler output and VAE
- SaveImage (9) is correctly connected to the decoded output
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Find and compare with other SDXL workflow files
echo "Searching for other SDXL workflow files..."
fd -e json | rg "sdxl" | xargs jq -r 'keys[] as $k | select(.[$k].class_type == "OneDiffCheckpointLoaderSimple") | "File: \(.[$k].inputs.ckpt_name)"'
Length of output: 244
Script:
#!/bin/bash
# Find all workflow files and analyze node connections
echo "Analyzing workflow node connections..."
fd -e json | xargs jq -r '
. as $workflow |
to_entries[] |
select(.value.class_type == "OneDiffCheckpointLoaderSimple") |
.key as $loader_key |
$workflow |
to_entries[] |
select(.value.inputs |
to_entries[] |
select(.value | type == "array" and .[0] == $loader_key)
) |
"\(.key): \(.value.class_type) -> \(.value.inputs | to_entries[] | select(.value | type == "array" and .[0] == $loader_key) | .key)"
'
Length of output: 523
Script:
#!/bin/bash
# Analyze the specific SDXL workflow file
echo "Analyzing SDXL workflow connections..."
jq -r '
. as $workflow |
to_entries[] |
.key as $node_key |
.value.inputs |
to_entries[] |
select(.value | type == "array") |
"\($node_key) [\($workflow[$node_key].class_type)] -> \(.value[0]) via \(.key)"
' onediff_comfy_nodes/workflows/sdxl-onediff.json
Length of output: 642
onediff_comfy_nodes/workflows/sd15-onediff.json (3)
69-84: LGTM! VAE configuration is properly structured.
The node references and configuration are correct for VAE decoding.
98-107: Verify model file availability and VAE speedup settings.
Please ensure:
- The checkpoint file "v1-5-pruned-emaonly.safetensors" is readily available or documented in installation instructions
- Document why VAE speedup is disabled by default
#!/bin/bash
# Check for model documentation
rg -g "*.md" "v1-5-pruned-emaonly.safetensors"
# Check for VAE speedup documentation
rg -g "*.md" "vae_speedup"
32-42: Verify memory requirements for batch size.
A batch size of 4 with 512x512 images might require significant VRAM. Consider adding a comment or documentation note about the memory requirements.
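The documentation note could even point users at a quick VRAM check (this assumes an NVIDIA GPU with nvidia-smi on the PATH):
#!/bin/bash
# Print total and free GPU memory so users can judge whether batch_size=4
# at 512x512 fits on their card before queuing the workflow.
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv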
onediff_comfy_nodes/workflows/sdxl-quant.json (3)
2-31: LGTM! Generation parameters are well-configured.
The sampling configuration uses standard values that should produce good results:
- 20 steps is a good balance between quality and speed
- CFG of 8 provides reasonable guidance without being too restrictive
- Euler sampler is a reliable choice
32-42: Consider adjusting image resolution for SDXL.
SDXL is optimized for higher resolutions:
- The current 512x512 resolution is below SDXL's sweet spot (1024x1024)
- A batch size of 4 at higher resolutions might exceed memory limits
Would you like me to help calculate the optimal resolution and batch size based on typical GPU memory constraints?
124-135: Verify the quantization parameters.
The current configuration quantizes 100% of both convolution and linear layers, which might be too aggressive:
- Consider starting with a lower percentage (e.g., 80-90%) to balance performance and quality
- The compute density thresholds (100 for conv, 300 for linear) seem arbitrary. Have these values been validated?
✅ Verification successful
The quantization parameters are appropriate for optimal performance
The current configuration in the workflow aligns with the documented defaults and recommended settings:
- 100% quantization for both conv and linear layers is the default setting and has been validated through performance benchmarks
- The compute density thresholds (100 for conv, 300 for linear) match the documented default values that have been tested and shown to provide significant speedups:
- ~58.4% speedup for SDXL
- ~47.7% speedup for SD1.5
- ~37.25% speedup for SVD
The review comment's concern about these settings being "too aggressive" is incorrect based on the documentation and performance data.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other workflow files to compare quantization settings
rg -l "OneDiffOnlineQuantizationBooster" | while read -r file; do
echo "=== $file ==="
jq '.[] | select(.class_type == "OneDiffOnlineQuantizationBooster")' "$file"
done
Length of output: 397
Script:
#!/bin/bash
# Let's check the implementation of the quantization booster to understand the default/recommended values
cat onediff_comfy_nodes/extras_nodes/nodes_oneflow_booster.py
# Also search for any documentation or examples
fd -e md -e txt -e rst | xargs rg -l "quantiz" || true
Length of output: 26987
Script:
#!/bin/bash
# Let's check the documentation for recommended quantization settings
cat onediff_comfy_nodes/docs/OnlineQuantization.md onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md
Length of output: 11124
onediff_comfy_nodes/workflows/sd15-quant.json (2)
2-42: Consider parameterizing the seed and adding memory requirements.
The sampling configuration looks good, but consider:
- The hardcoded seed might limit workflow flexibility
- A batch size of 4 at 512x512 resolution might require significant VRAM
Consider:
- Making the seed configurable through workflow inputs
- Adding a comment about minimum VRAM requirements for these parameters
124-135: Review aggressive quantization settings.
The current configuration quantizes 100% of both convolutional and linear layers, which:
- Provides maximum memory savings
- Might impact model quality more than necessary
Consider:
- Starting with lower percentages (e.g., 80%) and gradually increasing based on quality/memory tradeoffs
- Documenting the expected memory savings and quality impact
Would you like help generating a version with more conservative quantization settings?
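For reference, a conservative variant could be generated roughly as follows (hedged sketch: quantized_conv_percentage appears elsewhere in this review thread, while quantized_linear_percentage is an assumed mirror of it and should be checked against the node definition):
#!/bin/bash
# Sketch: clone the quant workflow with 80% quantization for conv and linear
# layers instead of 100%; the linear input name is an assumption.
jq '(.[] | select(.class_type == "OneDiffOnlineQuantizationBooster") | .inputs)
      |= (.quantized_conv_percentage = 80 | .quantized_linear_percentage = 80)' \
  onediff_comfy_nodes/workflows/sd15-quant.json > sd15-quant-conservative.json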
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (5)
onediff_comfy_nodes/workflows/sdxl-baseline.json (4)
2-31: Consider using a simpler seed value for the baseline workflow.
The current seed value (911705451631265) appears to be from a specific test run. For a baseline workflow template, consider using a simpler seed value (e.g., 1234567890) to make it more readable and memorable.
52-77: Clean up the positive prompt formatting.
The positive prompt contains redundant commas and spaces. Consider reformatting for clarity:
- "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
+ "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle",
94-106: Use a more specific filename prefix.
The current prefix "ComfyUI" is too generic. Consider using a more descriptive prefix that indicates the model and workflow type:
"inputs": {
- "filename_prefix": "ComfyUI",
+ "filename_prefix": "sdxl_baseline",
"images": [
1-107: Add workflow documentation.
Consider adding a README.md or updating the existing documentation to include:
- Expected VRAM requirements
- Typical generation time
- Example outputs
- Recommended parameter ranges for different use cases
onediff_comfy_nodes/workflows/sdxl-onediff.json (1)
85-97: Use a more descriptive filename prefix.
The current generic prefix "ComfyUI" doesn't indicate the model or configuration used. Consider a more informative prefix.
- "filename_prefix": "ComfyUI",
+ "filename_prefix": "SDXL_galaxy_bottle",
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (6)
- onediff_comfy_nodes/workflows/sd15-baseline.json (1 hunks)
- onediff_comfy_nodes/workflows/sd15-onediff.json (1 hunks)
- onediff_comfy_nodes/workflows/sd15-quant.json (1 hunks)
- onediff_comfy_nodes/workflows/sdxl-baseline.json (1 hunks)
- onediff_comfy_nodes/workflows/sdxl-onediff.json (1 hunks)
- onediff_comfy_nodes/workflows/sdxl-quant.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- onediff_comfy_nodes/workflows/sd15-baseline.json
- onediff_comfy_nodes/workflows/sdxl-quant.json
🔇 Additional comments (7)
onediff_comfy_nodes/workflows/sdxl-baseline.json (1)
32-40: Document the required model file in the repository.
The workflow requires the SDXL base model file sd_xl_base_1.0.safetensors. Consider adding documentation about:
- Where to obtain this model file
- Where to place it in the project structure
- Any licensing considerations
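Mirroring the SD1.5 instructions already cited from ComfyUI_Online_Quantization.md, the README addition could include a download command such as the one below (the Hugging Face URL is my assumption and should be verified before it is documented):
# Assumed source; verify the URL before adding it to the docs.
wget -O models/checkpoints/sd_xl_base_1.0.safetensors \
  https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors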
onediff_comfy_nodes/workflows/sdxl-onediff.json (2)
1-108: Overall workflow structure looks good.
The node connections and component hierarchy follow the standard SDXL generation pipeline correctly.
32-42: Consider adjusting batch size based on available VRAM.
A batch size of 4 with SDXL (especially at higher resolutions) might require significant VRAM. Ensure your target environment has sufficient memory.
onediff_comfy_nodes/workflows/sd15-onediff.json (3)
4-4: Seed parameterization needed
The previous review comment about parameterizing the seed value is still valid.
69-84: VAEDecode configuration looks good
The node connections are properly configured for decoding the latent samples.
100-100: Verify checkpoint file availability
Ensure that "v1-5-pruned-emaonly.safetensors" is available in the expected location.
onediff_comfy_nodes/workflows/sd15-quant.json (1)
124-135: Review quantization parameters for optimal performance-quality trade-off.
Current settings might be too aggressive:
- 100% quantization for both conv and linear layers could impact model quality
- High compute density thresholds (especially 300 for linear) might exclude important operations
✅ Verification successful
Current quantization settings are within recommended ranges
Based on the documentation and implementation details found:
- The current settings (100% for both conv and linear layers) are within the valid range [0, 100]
- The compute density thresholds (100 for conv, 300 for linear) align with the default values documented in OnlineQuantization.md
- These values represent a balanced approach:
- Lower compute density thresholds (compared to offline quantization which uses 900/300) ensure more operations are quantized
- Linear threshold at 300 is the standard default across both online and offline quantization
- Conv threshold at 100 allows for broader quantization coverage while maintaining stability
The review comment's concerns about these values being too aggressive are not supported by the codebase's documentation and defaults.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other quantization workflows to compare parameters
rg -l "OneDiffOnlineQuantizationBooster" | while read -r file; do
echo "=== $file ==="
jq '.[] | select(.class_type == "OneDiffOnlineQuantizationBooster")' "$file" 2>/dev/null
done
Length of output: 340
Script:
#!/bin/bash
# Let's check the implementation of OneDiffOnlineQuantizationBooster to understand default parameters
ast-grep --pattern 'class OneDiffOnlineQuantizationBooster'
Length of output: 61
Script:
#!/bin/bash
# Search for any documentation or examples of quantization parameters
rg -i "quantiz.*parameters|quantiz.*config" -B2 -A2
Length of output: 20016
Script:
#!/bin/bash
# Look for tests that might reveal typical parameter values
fd "test.*\.py" --exec rg -l "OneDiffOnlineQuantizationBooster" {} \;
Length of output: 72
Script:
#!/bin/bash
# Check if there are any other workflow files with quantization settings
fd ".*workflow.*\.json" --exec jq '.[] | select(.inputs.quantized_conv_percentage != null)' {} \;
Length of output: 102
Script:
#!/bin/bash
# Let's check the implementation details in the booster quantization module
cat onediff_comfy_nodes/modules/oneflow/booster_quantization.py
Length of output: 8926
Script:
#!/bin/bash
# Look for any documentation about quantization parameters in markdown files
rg -t md -i "quantiz.*parameter|compute.*density|conv.*percentage|linear.*percentage" -B2 -A2
Length of output: 7877
add workflow png and json for sdxl and sd1.5