add workflow png and json #1127

Open
XuZhang99 wants to merge 4 commits into main

Conversation

@XuZhang99 (Collaborator) commented Oct 29, 2024

Add workflow PNG and JSON files for SDXL and SD1.5.

Summary by CodeRabbit

  • Documentation
    • Enhanced clarity and accuracy of the ComfyUI online quantization guide.
    • Updated section headers with hyperlinks for improved navigation.
    • Reformatted performance comparison tables for better readability.
    • Corrected links to workflow JSON files and updated image references.
    • Clarified installation instructions, specifying correct file paths and formats.
  • New Features
    • Introduced new JSON configuration files defining workflows for image processing and model quantization, including nodes for sampling, encoding, decoding, and saving images (a minimal sketch of the node format appears after this list).
    • Added nodes for model boosters and online quantization, enhancing workflow capabilities.
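
For readers unfamiliar with ComfyUI's exported workflow format, each added JSON file is a flat map from a node id to that node's class_type and inputs, with connections between nodes written as [node_id, output_index] pairs (the review scripts further down rely on exactly this structure). The fragment below is a minimal illustrative sketch, not an excerpt from the PR's files; node ids and values are placeholders, and node "3" would be the KSampler, omitted for brevity:

{
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": { "ckpt_name": "v1-5-pruned-emaonly.safetensors" }
  },
  "8": {
    "class_type": "VAEDecode",
    "inputs": { "samples": ["3", 0], "vae": ["4", 2] }
  },
  "9": {
    "class_type": "SaveImage",
    "inputs": { "filename_prefix": "ComfyUI", "images": ["8", 0] }
  }
}

The baseline workflows use the stock loader shown here; the onediff and quant variants swap in OneDiffCheckpointLoaderSimple and booster nodes, as discussed in the review comments below.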

coderabbitai bot (Contributor) commented Oct 29, 2024

Walkthrough

The document ComfyUI_Online_Quantization.md has been revised to improve clarity and accuracy regarding online quantization for ComfyUI. Changes include updated section headers with hyperlinks, restructured performance comparison tables for better readability, corrected links to workflow JSON files, and updated image references. Additionally, installation instructions have been clarified, particularly regarding model file formats. The parameter descriptions section remains unchanged. These modifications enhance usability and ensure users have accurate resources and instructions.

Changes

File Path | Change Summary
onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md | Updated section headers with hyperlinks, reformatted performance comparison tables, corrected links to workflow JSON files, updated image references, and clarified installation instructions regarding model file formats.
onediff_comfy_nodes/workflows/sd15-baseline.json | Introduced a new JSON configuration file defining nodes for image sampling, checkpoint loading, and saving images.
onediff_comfy_nodes/workflows/sd15-onediff.json | Introduced a new JSON configuration file defining nodes for image processing with specific inputs and metadata.
onediff_comfy_nodes/workflows/sd15-quant.json | Introduced a new JSON configuration file defining nodes for image processing and model quantization tasks.
onediff_comfy_nodes/workflows/sdxl-baseline.json | Introduced a new JSON configuration file defining nodes for image processing with specific inputs and outputs.
onediff_comfy_nodes/workflows/sdxl-onediff.json | Introduced a new JSON configuration file defining nodes for image processing with specific inputs and metadata.
onediff_comfy_nodes/workflows/sdxl-quant.json | Introduced a new JSON configuration file defining nodes for image processing and model quantization tasks.

Poem

🐰 In the land of ComfyUI,
Where quantization flies high,
Links now guide with ease and grace,
Performance tables find their place.
With clearer paths, we hop along,
To model files where we belong! 🌟


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (1)
onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md (1)

77-77: Consider enhancing the documentation with example outputs.

To help users better understand the impact of quantization on image quality, consider:

  1. Adding example output images for each optimization level (baseline, OneDiff, OneDiff Quant)
  2. Consistently formatting performance improvement percentages (some use parentheses, others use %)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR (between 9231f55 and 771e232).

📒 Files selected for processing (1)
  • onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md (4 hunks)
🔇 Additional comments (1)
onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md (1)

91-93: Verify the existence of referenced SD1.5 workflow files.

Let's ensure the referenced workflow files exist at the specified paths.

#!/bin/bash
# Check if the workflow files exist
echo "Checking SD1.5 workflow files..."
for file in workflows/sd15-{baseline,onediff,quant}.json comfyui_workflow_images/sd15-{baseline,onediff,quant}.png; do
  if [ -f "../$file" ]; then
    echo "✓ Found: $file"
  else
    echo "✗ Missing: $file"
  fi
done

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 8

🧹 Outside diff range and nitpick comments (11)
onediff_comfy_nodes/workflows/sdxl-baseline.json (4)

2-31: Consider using a more generic seed value and adding documentation.

The KSampler configuration is functionally correct, but consider:

  1. Using a simpler seed value (e.g., 42) for better reproducibility in examples
  2. Adding comments to document the chosen sampler settings (euler/normal) and their implications

41-51: Review resolution and batch size settings for SDXL.

The current configuration might not be optimal for SDXL:

  1. SDXL is designed for 1024x1024 resolution, current 512x512 might not showcase its full capability
  2. Batch size of 4 might be too demanding for some GPUs

Consider adjusting the configuration based on the target hardware requirements and SDXL's optimal settings.
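
As a concrete illustration of that suggestion, the EmptyLatentImage node could be switched to SDXL's native resolution and a smaller batch, along the lines of the sketch below (the node id and exact values are examples rather than a prescription; batch size should still be chosen to fit the target GPU's VRAM):

{
  "5": {
    "class_type": "EmptyLatentImage",
    "inputs": { "width": 1024, "height": 1024, "batch_size": 1 }
  }
}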


52-77: Clean up prompt formatting and consider using generic examples.

The CLIP text encode configuration has minor issues:

  1. The positive prompt contains consecutive commas which might affect consistency
  2. Consider using simpler, more generic prompts for a baseline example
-      "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
+      "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle"

94-106: Consider more specific filename prefix and output path configuration.

The current save configuration might lead to file conflicts:

  1. Generic "ComfyUI" prefix might cause overwrites in repeated runs
  2. Consider adding model name (SDXL) to the prefix for better organization
-      "filename_prefix": "ComfyUI",
+      "filename_prefix": "ComfyUI_SDXL",
onediff_comfy_nodes/workflows/sd15-baseline.json (2)

2-31: Document sampling configuration rationale.

The KSampler configuration uses:

  • Euler sampler with normal scheduler
  • 20 steps, CFG=8
  • Fixed seed: 1004586347945964

Consider:

  1. Documenting why these specific parameters were chosen
  2. Adding performance benchmarks for this configuration
  3. Making the seed configurable or random for production use
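
For reference, every parameter listed above lives in the KSampler node's inputs block; a sketch of that block follows (the node ids for the connected model, prompt, and latent nodes are illustrative):

{
  "3": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 1004586347945964,
      "steps": 20,
      "cfg": 8,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    }
  }
}

Making the seed configurable or random amounts to overwriting this single field before the workflow is queued.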

1-107: Workflow structure is sound but needs more flexibility.

The workflow successfully implements a basic SD1.5 pipeline with all necessary components. However, it could benefit from:

  1. Configuration externalization
  2. Documentation of node relationships and data flow
  3. Error handling for missing models or invalid inputs

Consider creating a companion configuration file to make the workflow more maintainable and reusable across different environments.

onediff_comfy_nodes/workflows/sd15-onediff.json (2)

43-68: Consider using more generic example prompts.

The current positive prompt is very specific to a particular scene. For a workflow template, consider using a more generic example that demonstrates the prompt structure.

-      "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
+      "text": "a beautiful landscape, high quality, detailed",

85-97: Consider a more specific filename prefix.

Using "ComfyUI" as a prefix might lead to file conflicts when running multiple workflows. Consider using a more specific prefix like "sd15_onediff".

-      "filename_prefix": "ComfyUI",
+      "filename_prefix": "sd15_onediff",
onediff_comfy_nodes/workflows/sdxl-quant.json (2)

43-68: Clean up prompt formatting.

The positive prompt contains redundant commas which might affect prompt parsing:

-      "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
+      "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle"

85-97: Use a more descriptive filename prefix.

The current generic prefix "ComfyUI" might lead to file overwrites. Consider including model and configuration details:

-      "filename_prefix": "ComfyUI",
+      "filename_prefix": "sdxl_quant",
onediff_comfy_nodes/workflows/sd15-quant.json (1)

69-97: Consider a more descriptive filename prefix.

The image processing pipeline is correctly configured, but the generic "ComfyUI" prefix might make it difficult to distinguish between different workflow outputs.

Consider using a more descriptive prefix like "sd15_quant" to indicate the model and quantization status.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR (between 771e232 and 1303b96).

⛔ Files ignored due to path filters (6)
  • onediff_comfy_nodes/comfyui_workflow_images/sd15-baseline.png is excluded by !**/*.png
  • onediff_comfy_nodes/comfyui_workflow_images/sd15-onediff.png is excluded by !**/*.png
  • onediff_comfy_nodes/comfyui_workflow_images/sd15-quant.png is excluded by !**/*.png
  • onediff_comfy_nodes/comfyui_workflow_images/sdxl-baseline.png is excluded by !**/*.png
  • onediff_comfy_nodes/comfyui_workflow_images/sdxl-onediff.png is excluded by !**/*.png
  • onediff_comfy_nodes/comfyui_workflow_images/sdxl-quant.png is excluded by !**/*.png
📒 Files selected for processing (6)
  • onediff_comfy_nodes/workflows/sd15-baseline.json (1 hunks)
  • onediff_comfy_nodes/workflows/sd15-onediff.json (1 hunks)
  • onediff_comfy_nodes/workflows/sd15-quant.json (1 hunks)
  • onediff_comfy_nodes/workflows/sdxl-baseline.json (1 hunks)
  • onediff_comfy_nodes/workflows/sdxl-onediff.json (1 hunks)
  • onediff_comfy_nodes/workflows/sdxl-quant.json (1 hunks)
🔇 Additional comments (14)
onediff_comfy_nodes/workflows/sdxl-baseline.json (2)

78-93: LGTM: VAE decoder configuration is properly structured.

The VAE decode node is correctly configured with appropriate connections to the sampler output and model VAE.


32-40: Verify checkpoint file availability and consider environment variations.

The checkpoint loader assumes the SDXL base model is available at a specific path. Consider:

  1. Adding error handling for missing model files
  2. Documenting the expected model location or making it configurable
onediff_comfy_nodes/workflows/sd15-baseline.json (2)

52-77: Review prompt engineering for optimal results.

The current prompt configuration:

  1. Positive prompt combines multiple concepts: "beautiful scenery nature glass bottle landscape, purple galaxy bottle"
  2. Negative prompt is minimal: "text, watermark"

Consider:

  1. Separating the concepts for clearer intent
  2. Expanding the negative prompt to include common artifacts
  3. Testing different prompt combinations for optimal results

32-40: Verify checkpoint availability and document requirements.

The workflow depends on v1-5-pruned-emaonly.safetensors. Users need to ensure this specific model is available in their environment.

Consider:

  1. Documenting the model download instructions in the README
  2. Adding model file validation checks
  3. Making the checkpoint name configurable for flexibility
✅ Verification successful

Model download instructions are documented

The workflow's dependency on v1-5-pruned-emaonly.safetensors is properly documented in onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md, which includes the direct download command:

wget -O models/checkpoints/v1-5-pruned-emaonly.safetensors https://huggingface.co/sd-legacy/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify if documentation exists for model setup
rg -l "v1-5-pruned-emaonly.safetensors" "README*"

Length of output: 97


Script:

#!/bin/bash
# Let's check for any documentation files in the repository
fd -t f "README|INSTALL|SETUP|GUIDE" -i

# Also check for any model-related documentation or configuration
rg -l "checkpoint|model|safetensors" -g "!*.json"

# Check if there's any model validation or configuration logic in Python files
ast-grep --pattern 'def $_(self, $$$, checkpoint$_: $_)'

Length of output: 13495


Script:

#!/bin/bash
# Let's check specifically for SD1.5 model documentation and setup instructions
rg -l "v1-5-pruned-emaonly" onediff_comfy_nodes/docs
rg -l "v1-5-pruned-emaonly" onediff_comfy_nodes/README.md

# Check for model setup or validation logic in the nodes implementation
rg -l "CheckpointLoaderSimple" onediff_comfy_nodes/

# Look for any model validation or setup code
ast-grep --pattern 'class CheckpointLoaderSimple:
  $$$'

Length of output: 565


Script:

#!/bin/bash
# Let's check the content of relevant documentation files that mention the model
rg -A 5 "v1-5-pruned-emaonly" onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md onediff_comfy_nodes/docs/ControlNet/README.md

# Check how CheckpointLoaderSimple is implemented
rg -A 10 "class CheckpointLoaderSimple" onediff_comfy_nodes/_nodes.py onediff_comfy_nodes/modules/oneflow/utils/loader_sample_tools.py

# Look for model setup instructions
rg -A 5 "model.*setup|checkpoint.*setup|model.*download" onediff_comfy_nodes/README.md onediff_comfy_nodes/docs/

Length of output: 2625

onediff_comfy_nodes/workflows/sdxl-onediff.json (2)

36-36: Consider memory usage with batch size 4.

A batch size of 4 with SDXL model might require significant VRAM. Consider reducing it if users experience memory issues.


1-108: Verify workflow node connections.

The workflow structure appears correct with proper node connections. Let's verify if this matches other SDXL workflows in the repository.

✅ Verification successful

Let me try a different approach to verify the workflow connections.

This script will help us verify:

  1. Which nodes are connected to the checkpoint loader
  2. What type of connections exist
  3. If the connections follow the expected pattern for SDXL workflows

Let me try one final verification approach focusing on the specific workflow file.

This script will show us all node connections in the specific workflow file, helping us verify if the connections are correct for an SDXL pipeline.


Workflow connections are correctly structured

The node connections follow the expected SDXL pipeline pattern:

  • Checkpoint loader (10) provides model, clip, and vae to appropriate nodes
  • KSampler (3) correctly receives all required inputs: model, positive/negative prompts, and latent image
  • VAEDecode (8) properly connects to sampler output and VAE
  • SaveImage (9) is correctly connected to the decoded output
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Find and compare with other SDXL workflow files
echo "Searching for other SDXL workflow files..."
fd -e json | rg "sdxl" | xargs jq -r 'keys[] as $k | select(.[$k].class_type == "OneDiffCheckpointLoaderSimple") | "File: \(.[$k].inputs.ckpt_name)"'

Length of output: 244


Script:

#!/bin/bash
# Find all workflow files and analyze node connections
echo "Analyzing workflow node connections..."
fd -e json | xargs jq -r '
  . as $workflow |
  to_entries[] |
  select(.value.class_type == "OneDiffCheckpointLoaderSimple") |
  .key as $loader_key |
  $workflow | 
  to_entries[] | 
  select(.value.inputs | 
    to_entries[] | 
    select(.value | type == "array" and .[0] == $loader_key)
  ) |
  "\(.key): \(.value.class_type) -> \(.value.inputs | to_entries[] | select(.value | type == "array" and .[0] == $loader_key) | .key)"
'

Length of output: 523


Script:

#!/bin/bash
# Analyze the specific SDXL workflow file
echo "Analyzing SDXL workflow connections..."
jq -r '
  . as $workflow |
  to_entries[] |
  .key as $node_key |
  .value.inputs | 
  to_entries[] |
  select(.value | type == "array") |
  "\($node_key) [\($workflow[$node_key].class_type)] -> \(.value[0]) via \(.key)"
' onediff_comfy_nodes/workflows/sdxl-onediff.json

Length of output: 642

onediff_comfy_nodes/workflows/sd15-onediff.json (3)

69-84: LGTM! VAE configuration is properly structured.

The node references and configuration are correct for VAE decoding.


98-107: Verify model file availability and VAE speedup settings.

Please ensure:

  1. The checkpoint file "v1-5-pruned-emaonly.safetensors" is readily available or documented in installation instructions
  2. Document why VAE speedup is disabled by default
#!/bin/bash
# Check for model documentation
rg -g "*.md" "v1-5-pruned-emaonly.safetensors"

# Check for VAE speedup documentation
rg -g "*.md" "vae_speedup"
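
For context, the loader node under discussion would look roughly like the sketch below. The class_type and the ckpt_name and vae_speedup input names come from this PR and the review scripts above; the node id and the "disable" value are assumptions and should be checked against the actual workflow JSON:

{
  "10": {
    "class_type": "OneDiffCheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "v1-5-pruned-emaonly.safetensors",
      "vae_speedup": "disable"
    }
  }
}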

32-42: Verify memory requirements for batch size.

A batch size of 4 with 512x512 images might require significant VRAM. Consider adding a comment or documentation note about the memory requirements.

onediff_comfy_nodes/workflows/sdxl-quant.json (3)

2-31: LGTM! Generation parameters are well-configured.

The sampling configuration uses standard values that should produce good results:

  • 20 steps is a good balance between quality and speed
  • CFG of 8 provides reasonable guidance without being too restrictive
  • Euler sampler is a reliable choice

32-42: Consider adjusting image resolution for SDXL.

SDXL is optimized for higher resolutions:

  1. The current 512x512 resolution is below SDXL's sweet spot (1024x1024)
  2. A batch size of 4 at higher resolutions might exceed memory limits

Would you like me to help calculate the optimal resolution and batch size based on typical GPU memory constraints?


124-135: Verify the quantization parameters.

The current configuration quantizes 100% of both convolution and linear layers, which might be too aggressive:

  1. Consider starting with a lower percentage (e.g., 80-90%) to balance performance and quality
  2. The compute density thresholds (100 for conv, 300 for linear) seem arbitrary. Have these values been validated?
✅ Verification successful

The quantization parameters are appropriate for optimal performance

The current configuration in the workflow aligns with the documented defaults and recommended settings:

  • 100% quantization for both conv and linear layers is the default setting and has been validated through performance benchmarks
  • The compute density thresholds (100 for conv, 300 for linear) match the documented default values that have been tested and shown to provide significant speedups:
    • ~58.4% speedup for SDXL
    • ~47.7% speedup for SD1.5
    • ~37.25% speedup for SVD

The review comment's concern about these settings being "too aggressive" is incorrect based on the documentation and performance data.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other workflow files to compare quantization settings
rg -l "OneDiffOnlineQuantizationBooster" | while read -r file; do
  echo "=== $file ==="
  jq '.[] | select(.class_type == "OneDiffOnlineQuantizationBooster")' "$file"
done

Length of output: 397


Script:

#!/bin/bash
# Let's check the implementation of the quantization booster to understand the default/recommended values
cat onediff_comfy_nodes/extras_nodes/nodes_oneflow_booster.py

# Also search for any documentation or examples
fd -e md -e txt -e rst | xargs rg -l "quantiz" || true

Length of output: 26987


Script:

#!/bin/bash
# Let's check the documentation for recommended quantization settings
cat onediff_comfy_nodes/docs/OnlineQuantization.md onediff_comfy_nodes/docs/ComfyUI_Online_Quantization.md

Length of output: 11124

onediff_comfy_nodes/workflows/sd15-quant.json (2)

2-42: Consider parameterizing the seed and adding memory requirements.

The sampling configuration looks good, but consider:

  1. The hardcoded seed might limit workflow flexibility
  2. A batch size of 4 at 512x512 resolution might require significant VRAM

Consider:

  • Making the seed configurable through workflow inputs
  • Adding a comment about minimum VRAM requirements for these parameters

124-135: Review aggressive quantization settings.

The current configuration quantizes 100% of both convolutional and linear layers, which:

  1. Provides maximum memory savings
  2. Might impact model quality more than necessary

Consider:

  • Starting with lower percentages (e.g., 80%) and gradually increasing based on quality/memory tradeoffs
  • Documenting the expected memory savings and quality impact

Would you like help generating a version with more conservative quantization settings?

@XuZhang99 enabled auto-merge (squash) October 29, 2024 09:39
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 11

🧹 Outside diff range and nitpick comments (5)
onediff_comfy_nodes/workflows/sdxl-baseline.json (4)

2-31: Consider using a simpler seed value for the baseline workflow.

The current seed value (911705451631265) appears to be from a specific test run. For a baseline workflow template, consider using a simpler seed value (e.g., 1234567890) to make it more readable and memorable.


52-77: Clean up the positive prompt formatting.

The positive prompt contains redundant commas and spaces. Consider reformatting for clarity:

-      "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
+      "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle",

94-106: Use a more specific filename prefix.

The current prefix "ComfyUI" is too generic. Consider using a more descriptive prefix that indicates the model and workflow type:

   "inputs": {
-      "filename_prefix": "ComfyUI",
+      "filename_prefix": "sdxl_baseline",
       "images": [

1-107: Add workflow documentation.

Consider adding a README.md or updating the existing documentation to include:

  1. Expected VRAM requirements
  2. Typical generation time
  3. Example outputs
  4. Recommended parameter ranges for different use cases
onediff_comfy_nodes/workflows/sdxl-onediff.json (1)

85-97: Use more descriptive filename prefix.

The current generic prefix "ComfyUI" doesn't indicate the model or configuration used. Consider a more informative prefix.

-      "filename_prefix": "ComfyUI",
+      "filename_prefix": "SDXL_galaxy_bottle",
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR (between 1303b96 and 4415f69).

📒 Files selected for processing (6)
  • onediff_comfy_nodes/workflows/sd15-baseline.json (1 hunks)
  • onediff_comfy_nodes/workflows/sd15-onediff.json (1 hunks)
  • onediff_comfy_nodes/workflows/sd15-quant.json (1 hunks)
  • onediff_comfy_nodes/workflows/sdxl-baseline.json (1 hunks)
  • onediff_comfy_nodes/workflows/sdxl-onediff.json (1 hunks)
  • onediff_comfy_nodes/workflows/sdxl-quant.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • onediff_comfy_nodes/workflows/sd15-baseline.json
  • onediff_comfy_nodes/workflows/sdxl-quant.json
🔇 Additional comments (7)
onediff_comfy_nodes/workflows/sdxl-baseline.json (1)

32-40: Document the required model file in the repository.

The workflow requires the SDXL base model file sd_xl_base_1.0.safetensors. Consider adding documentation about:

  1. Where to obtain this model file
  2. Where to place it in the project structure
  3. Any licensing considerations
onediff_comfy_nodes/workflows/sdxl-onediff.json (2)

1-108: Overall workflow structure looks good.

The node connections and component hierarchy follow the standard SDXL generation pipeline correctly.


32-42: Consider adjusting batch size based on available VRAM.

A batch size of 4 with SDXL (especially at higher resolutions) might require significant VRAM. Ensure your target environment has sufficient memory.

onediff_comfy_nodes/workflows/sd15-onediff.json (3)

4-4: Seed parameterization needed

The previous review comment about parameterizing the seed value is still valid.


69-84: VAEDecode configuration looks good

The node connections are properly configured for decoding the latent samples.


100-100: Verify checkpoint file availability

Ensure that "v1-5-pruned-emaonly.safetensors" is available in the expected location.

onediff_comfy_nodes/workflows/sd15-quant.json (1)

124-135: Review quantization parameters for optimal performance-quality trade-off.

Current settings might be too aggressive:

  1. 100% quantization for both conv and linear layers could impact model quality
  2. High compute density thresholds (especially 300 for linear) might exclude important operations
✅ Verification successful

Current quantization settings are within recommended ranges

Based on the documentation and implementation details found:

  • The current settings (100% for both conv and linear layers) are within the valid range [0, 100]
  • The compute density thresholds (100 for conv, 300 for linear) align with the default values documented in OnlineQuantization.md
  • These values represent a balanced approach:
    • Lower compute density thresholds (compared to offline quantization which uses 900/300) ensure more operations are quantized
    • Linear threshold at 300 is the standard default across both online and offline quantization
    • Conv threshold at 100 allows for broader quantization coverage while maintaining stability

The review comment's concerns about these values being too aggressive are not supported by the codebase's documentation and defaults.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other quantization workflows to compare parameters
rg -l "OneDiffOnlineQuantizationBooster" | while read -r file; do
  echo "=== $file ==="
  jq '.[] | select(.class_type == "OneDiffOnlineQuantizationBooster")' "$file" 2>/dev/null
done

Length of output: 340


Script:

#!/bin/bash
# Let's check the implementation of OneDiffOnlineQuantizationBooster to understand default parameters
ast-grep --pattern 'class OneDiffOnlineQuantizationBooster'

Length of output: 61


Script:

#!/bin/bash
# Search for any documentation or examples of quantization parameters
rg -i "quantiz.*parameters|quantiz.*config" -B2 -A2

Length of output: 20016


Script:

#!/bin/bash
# Look for tests that might reveal typical parameter values
fd "test.*\.py" --exec rg -l "OneDiffOnlineQuantizationBooster" {} \;

Length of output: 72


Script:

#!/bin/bash
# Check if there are any other workflow files with quantization settings
fd ".*workflow.*\.json" --exec jq '.[] | select(.inputs.quantized_conv_percentage != null)' {} \;

Length of output: 102


Script:

#!/bin/bash
# Let's check the implementation details in the booster quantization module
cat onediff_comfy_nodes/modules/oneflow/booster_quantization.py

Length of output: 8926


Script:

#!/bin/bash
# Look for any documentation about quantization parameters in markdown files
rg -t md -i "quantiz.*parameter|compute.*density|conv.*percentage|linear.*percentage" -B2 -A2

Length of output: 7877
