2025-12-09
@@ -7,13 +7,13 @@
       "id": "basedplayblast",
       "name": "BasedPlayblast",
       "tagline": "Easily create playblasts from Blender and Flamenco",
-      "version": "2.3.1",
+      "version": "2.4.0",
       "type": "add-on",
       "maintainer": "RaincloudTheDragon <raincloudthedragon@gmail.com>",
       "license": [
         "GPL-3.0-or-later"
       ],
-      "blender_version_min": "5.0.0",
+      "blender_version_min": "4.2.0",
       "website": "https://github.com/RaincloudTheDragon/BasedPlayblast",
       "permissions": {
         "files": "Import/export files and data"
@@ -24,9 +24,34 @@
         "Workflow",
         "Video"
       ],
-      "archive_url": "https://github.com/RaincloudTheDragon/BasedPlayblast/releases/download/v2.3.1/BasedPlayblast.v2.3.1.zip",
-      "archive_size": 38295,
-      "archive_hash": "sha256:98f978a96fb8d15bae60987f305901ba0acd7a37ddb45627724326809e43622d"
+      "archive_url": "https://github.com/RaincloudTheDragon/BasedPlayblast/releases/download/v2.4.0/BasedPlayblast.v2.4.0.zip",
+      "archive_size": 40376,
+      "archive_hash": "sha256:544369c72024681cb45a4ee073ae684b56f08f2e0d8d9906a75fcbb11e0a2196"
       },
+      {
+        "schema_version": "1.0.0",
+        "id": "rainclouds_bulk_scene_tools",
+        "name": "Raincloud's Bulk Scene Tools",
+        "tagline": "Bulk utilities for optimizing scene data",
+        "version": "0.11.0",
+        "type": "add-on",
+        "maintainer": "RaincloudTheDragon <raincloudthedragon@gmail.com>",
+        "license": [
+          "GPL-3.0-or-later"
+        ],
+        "blender_version_min": "4.5.0",
+        "website": "https://github.com/RaincloudTheDragon/Rainys-Bulk-Scene-Tools",
+        "permissions": {
+          "files": "Read and write external resources referenced by scenes"
+        },
+        "tags": [
+          "Scene",
+          "Workflow",
+          "Materials"
+        ],
+        "archive_url": "https://github.com/RaincloudTheDragon/Rainys-Bulk-Scene-Tools/releases/download/v0.11.0/Rainys_Bulk_Scene_Tools.v0.11.0.zip",
+        "archive_size": 72969,
+        "archive_hash": "sha256:827451b11808488e8682f4bdd4bfff8b6f1f0fe2aa5bbc53845fbf9a13e15757"
+      }
     ]
 }
@@ -0,0 +1,57 @@
{
  "version": "v1",
  "blocklist": [],
  "data": [
    {
      "schema_version": "1.0.0",
      "id": "basedplayblast",
      "name": "BasedPlayblast",
      "tagline": "Easily create playblasts from Blender and Flamenco",
      "version": "2.3.1",
      "type": "add-on",
      "maintainer": "RaincloudTheDragon <raincloudthedragon@gmail.com>",
      "license": [
        "GPL-3.0-or-later"
      ],
      "blender_version_min": "5.0.0",
      "website": "https://github.com/RaincloudTheDragon/BasedPlayblast",
      "permissions": {
        "files": "Import/export files and data"
      },
      "tags": [
        "Animation",
        "Render",
        "Workflow",
        "Video"
      ],
      "archive_url": "https://github.com/RaincloudTheDragon/BasedPlayblast/releases/download/v2.3.1/BasedPlayblast.v2.3.1.zip",
      "archive_size": 38295,
      "archive_hash": "sha256:98f978a96fb8d15bae60987f305901ba0acd7a37ddb45627724326809e43622d"
    },
    {
      "schema_version": "1.0.0",
      "id": "rainclouds_bulk_scene_tools",
      "name": "Raincloud's Bulk Scene Tools",
      "tagline": "Bulk utilities for optimizing scene data",
      "version": "0.11.0",
      "type": "add-on",
      "maintainer": "RaincloudTheDragon <raincloudthedragon@gmail.com>",
      "license": [
        "GPL-3.0-or-later"
      ],
      "blender_version_min": "4.5.0",
      "website": "https://github.com/RaincloudTheDragon/Rainys-Bulk-Scene-Tools",
      "permissions": {
        "files": "Read and write external resources referenced by scenes"
      },
      "tags": [
        "Scene",
        "Workflow",
        "Materials"
      ],
      "archive_url": "https://github.com/RaincloudTheDragon/Rainys-Bulk-Scene-Tools/releases/download/v0.11.0/Rainys_Bulk_Scene_Tools.v0.11.0.zip",
      "archive_size": 72969,
      "archive_hash": "sha256:827451b11808488e8682f4bdd4bfff8b6f1f0fe2aa5bbc53845fbf9a13e15757"
    }
  ]
}
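The `archive_size` and `archive_hash` fields in the index above are enough to verify a downloaded archive before installation. A minimal sketch of that check (not part of the repository; `verify_archive` and the sample bytes are illustrative, a real check would read the downloaded `.zip`):

```python
import hashlib

def verify_archive(data: bytes, expected_size: int, expected_hash: str) -> bool:
    """Check archive bytes against an index entry's size and "sha256:<hex>" hash."""
    algo, _, hex_digest = expected_hash.partition(":")
    if algo != "sha256":
        raise ValueError(f"unsupported hash algorithm: {algo}")
    return len(data) == expected_size and hashlib.sha256(data).hexdigest() == hex_digest

# Demo with stand-in bytes instead of a real release archive:
payload = b"example archive contents"
digest = "sha256:" + hashlib.sha256(payload).hexdigest()
print(verify_archive(payload, len(payload), digest))      # True
print(verify_archive(payload, len(payload) + 1, digest))  # False
```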
@@ -0,0 +1,59 @@
# BasedPlayblast

**Easily create playblasts from Blender**

BasedPlayblast is a Blender addon that streamlines the process of creating video playblasts for animation review. It provides optimized render settings for fast preview generation while maintaining visual quality suitable for review purposes.

## Features

- **Fast Playblast Creation**: Optimized render settings for different preview modes (Solid, Material, Rendered)
- **Multiple Display Modes**: Support for Wireframe, Solid, Material Preview, and Rendered modes
- **Flexible Resolution**: Scene, preset, or custom resolution options
- **Video Format Support**: MP4, MOV, AVI, MKV with various codecs (H.264, H.265, AV1, etc.)
- **Metadata Integration**: Automatic inclusion of frame numbers, camera info, and custom notes
- **Settings Management**: Apply and restore render settings without losing your project configuration
- **Flamenco Support**: Custom Flamenco Job Script with a simple, non-destructive workflow

## Installation

### Via BlenderKit's Extension Repository (Recommended)
1. Open Blender (5.0+)
2. Install BlenderKit via https://www.blenderkit.com/get-blenderkit/
3. Open **Edit > Preferences** (Ctrl + ,)
4. Go to the **Get Extensions** section
5. Search for "BasedPlayblast"
6. Click **Install**
7. Enjoy automatic updating!

### Manual Installation
1. Download the latest release, or the release that supports your intended Blender version
2. In Blender, go to **Edit > Preferences > Add-ons**
3. Click **Install from Disk** and select the downloaded file
4. Enable the addon in the list

## Usage

1. **Locate the Panel**: Go to **Properties > Output > BasedPlayblast**
2. **Configure Settings**: Set your output path, resolution, and display mode
3. **Create Playblast**: Click the **PLAYBLAST** button
4. **View Result**: Click **VIEW** to open the generated video

- **Apply Blast Settings**: Use this button to apply optimized render settings without rendering
  - Intended particularly for Flamenco. Apply, check the resulting render settings to make sure they are correct, then send the job to Flamenco using the BasedPlayblast custom job type.
- **Restore Original Settings**: Return to your original render configuration
- **Display Modes**:
  - **Wireframe/Solid**
    - Fast Workbench viewport rendering. Recommended for short and/or locally-blasted projects.
  - **Material**
  - **Rendered**

## Requirements

- Blender 5.0.0 or higher
- Python 3.x (included with Blender)

## Support

- **Documentation**: [GitHub Repository](https://github.com/RaincloudTheDragon/BasedPlayblast)
- **Issues**: Report bugs or request features on GitHub
- **License**: GPL-3.0-or-later
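The "Flexible Resolution" option in the README combines a base resolution with a percentage, mirroring how Blender's own `resolution_percentage` scales `resolution_x`/`resolution_y`. A small illustrative helper (not code from the add-on) showing the arithmetic:

```python
def effective_resolution(res_x: int, res_y: int, percentage: int) -> tuple[int, int]:
    """Scale a base resolution by a percentage, truncating to whole pixels
    as Blender does when applying resolution_percentage."""
    return int(res_x * percentage / 100), int(res_y * percentage / 100)

print(effective_resolution(1920, 1080, 50))   # (960, 540)
print(effective_resolution(1920, 1080, 100))  # (1920, 1080)
```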
File diff suppressed because it is too large
@@ -0,0 +1,28 @@
schema_version = "1.0.0"

id = "basedplayblast"
name = "BasedPlayblast"
tagline = "Easily create playblasts from Blender and Flamenco"
version = "2.4.0"
type = "add-on"

maintainer = "RaincloudTheDragon <raincloudthedragon@gmail.com>"
license = ["GPL-3.0-or-later"]
blender_version_min = "4.2.0"

website = "https://github.com/RaincloudTheDragon/BasedPlayblast"

tags = ["Animation", "Render", "Workflow", "Video"]

[permissions]
files = "Import/export files and data"

[build]
paths_exclude_pattern = [
  "__pycache__/",
  "*.pyc",
  ".git/",
  ".github/",
  "addon_updater*",
  "basedplayblast_updater/"
]
@@ -0,0 +1,164 @@
import bpy  # type: ignore

RAINYS_EXTENSIONS_REPO_NAME = "Rainy's Extensions"
RAINYS_EXTENSIONS_REPO_URL = (
    "https://raw.githubusercontent.com/RaincloudTheDragon/rainys-blender-extensions/refs/heads/main/index.json"
)

_BOOTSTRAP_DONE = False


def _log(message: str) -> None:
    print(f"RainysExtensionsCheck: {message}")


def ensure_rainys_extensions_repo(_deferred: bool = False) -> None:
    """
    Ensure the Rainy's Extensions repository is registered in Blender.

    Safe to import and call from multiple add-ons; the helper guards against doing the
    work more than once per Blender session.
    """
    global _BOOTSTRAP_DONE

    if _BOOTSTRAP_DONE:
        return

    _log("starting repository verification")

    context_class_name = type(bpy.context).__name__
    if context_class_name == "_RestrictContext":
        if _deferred:
            _log("context still restricted after deferral; aborting repo check")
            return

        _log("context restricted; scheduling repo check retry")

        def _retry():
            ensure_rainys_extensions_repo(_deferred=True)
            return None

        bpy.app.timers.register(_retry, first_interval=0.5)
        return

    prefs = getattr(bpy.context, "preferences", None)
    if prefs is None:
        _log("no preferences available on context; skipping")
        return

    preferences_changed = False
    addon_prefs = None
    addon_entry = None
    if hasattr(getattr(prefs, "addons", None), "get"):
        addon_entry = prefs.addons.get(__name__)
    elif hasattr(prefs, "addons"):
        try:
            addon_entry = prefs.addons[__name__]
        except Exception:
            addon_entry = None
    if addon_entry:
        addon_prefs = getattr(addon_entry, "preferences", None)
    addon_repo_initialized = bool(
        addon_prefs and getattr(addon_prefs, "repo_initialized", False)
    )

    experimental = getattr(prefs, "experimental", None)
    if experimental and hasattr(experimental, "use_extension_platform"):
        if not experimental.use_extension_platform:
            experimental.use_extension_platform = True
            preferences_changed = True
            _log("enabled experimental extension platform")

    repositories = None
    extensions_obj = getattr(prefs, "extensions", None)
    if extensions_obj:
        if hasattr(extensions_obj, "repos"):
            repositories = extensions_obj.repos
        elif hasattr(extensions_obj, "repositories"):
            repositories = extensions_obj.repositories

    if repositories is None:
        filepaths = getattr(prefs, "filepaths", None)
        repositories = getattr(filepaths, "extension_repos", None) if filepaths else None

    if repositories is None:
        _log("extension repositories collection missing; skipping")
        return

    def _repo_matches(repo) -> bool:
        return (
            getattr(repo, "remote_url", "") == RAINYS_EXTENSIONS_REPO_URL
            or getattr(repo, "url", "") == RAINYS_EXTENSIONS_REPO_URL
        )

    matching_indices = [idx for idx, repo in enumerate(repositories) if _repo_matches(repo)]

    target_repo = None
    if matching_indices:
        target_repo = repositories[matching_indices[0]]
        if len(matching_indices) > 1 and hasattr(repositories, "remove"):
            # Remove duplicates back-to-front so earlier indices stay valid.
            for dup_idx in reversed(matching_indices[1:]):
                try:
                    repositories.remove(dup_idx)
                    _log(f"removed duplicate repository entry at index {dup_idx}")
                except Exception as exc:
                    _log(f"could not remove duplicate repository at index {dup_idx}: {exc}")
    else:
        target_repo = next(
            (repo for repo in repositories if getattr(repo, "name", "") == RAINYS_EXTENSIONS_REPO_NAME),
            None,
        )

    if target_repo is None:
        _log("repo missing; creating new entry")
        if hasattr(repositories, "new"):
            target_repo = repositories.new()
        elif hasattr(repositories, "add"):
            target_repo = repositories.add()
        else:
            _log("repository collection does not support creation; aborting")
            return
    else:
        _log("repo entry already present; validating fields")

    changed = preferences_changed

    def _ensure_attr(obj, attr, value):
        if hasattr(obj, attr) and getattr(obj, attr) != value:
            setattr(obj, attr, value)
            return True
        if not hasattr(obj, attr):
            _log(f"repository entry missing attribute '{attr}', skipping field")
        # Attribute is missing or already holds the desired value: nothing changed.
        return False

    changed |= _ensure_attr(target_repo, "name", RAINYS_EXTENSIONS_REPO_NAME)
    changed |= _ensure_attr(target_repo, "module", "rainys_extensions")
    changed |= _ensure_attr(target_repo, "use_remote_url", True)
    changed |= _ensure_attr(target_repo, "remote_url", RAINYS_EXTENSIONS_REPO_URL)
    changed |= _ensure_attr(target_repo, "use_sync_on_startup", True)
    changed |= _ensure_attr(target_repo, "use_cache", True)
    changed |= _ensure_attr(target_repo, "use_access_token", False)

    if addon_prefs and hasattr(addon_prefs, "repo_initialized") and not addon_prefs.repo_initialized:
        addon_prefs.repo_initialized = True
        changed = True

    if not changed:
        _log("repository already configured; skipping preference save")
        _BOOTSTRAP_DONE = True
        return

    if hasattr(bpy.ops, "wm") and hasattr(bpy.ops.wm, "save_userpref"):
        try:
            bpy.ops.wm.save_userpref()
            _log("preferences updated and saved")
        except Exception as exc:  # pragma: no cover
            _log(f"could not save preferences after repo update -> {exc}")
    else:
        _log("preferences API unavailable; changes not persisted")

    _BOOTSTRAP_DONE = True
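The bootstrap module above relies on a module-level `_BOOTSTRAP_DONE` flag so that repeated calls from several add-ons do the real work only once per Blender session. The guard pattern in isolation (a stand-in sketch, with the repository setup reduced to a list append so it runs without `bpy`):

```python
_BOOTSTRAP_DONE = False
calls = []

def ensure_repo() -> None:
    """Idempotent bootstrap: the expensive setup runs at most once per session."""
    global _BOOTSTRAP_DONE
    if _BOOTSTRAP_DONE:
        return
    calls.append("configured")  # stand-in for registering the extension repository
    _BOOTSTRAP_DONE = True

ensure_repo()
ensure_repo()  # second call is a no-op
print(calls)  # ['configured']
```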
@@ -0,0 +1,133 @@
import bpy  # type: ignore
from bpy.types import AddonPreferences, Panel  # type: ignore
from bpy.props import BoolProperty  # type: ignore
from .panels import bulk_viewport_display
from .panels import bulk_data_remap
from .panels import bulk_path_management
from .panels import bulk_scene_general
from .ops.AutoMatExtractor import AutoMatExtractor, AUTOMAT_OT_summary_dialog
from .ops.Rename_images_by_mat import Rename_images_by_mat, RENAME_OT_summary_dialog
from .ops.FreeGPU import BST_FreeGPU
from .ops import ghost_buster


# Addon preferences class for update settings
class BST_AddonPreferences(AddonPreferences):
    bl_idname = __package__

    # AutoMat Extractor settings
    automat_common_outside_blend: BoolProperty(
        name="Place 'common' folder outside 'blend' folder",
        description="If enabled, the 'common' folder for shared textures will be placed directly in 'textures/'. If disabled, it will be placed inside 'textures/<blend_name>/'",
        default=False,
    )

    def draw(self, context):
        layout = self.layout

        # AutoMat Extractor settings
        box = layout.box()
        box.label(text="AutoMat Extractor Settings")
        row = box.row()
        row.prop(self, "automat_common_outside_blend")


# Main panel for Bulk Scene Tools
class VIEW3D_PT_BulkSceneTools(Panel):
    """Bulk Scene Tools Panel"""
    bl_label = "Bulk Scene Tools"
    bl_idname = "VIEW3D_PT_bulk_scene_tools"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = 'Edit'

    def draw(self, context):
        layout = self.layout
        layout.label(text="Tools for bulk operations on scene data")


# List of all classes in this module
classes = (
    VIEW3D_PT_BulkSceneTools,
    BST_AddonPreferences,
    AutoMatExtractor,
    AUTOMAT_OT_summary_dialog,
    Rename_images_by_mat,
    RENAME_OT_summary_dialog,
    BST_FreeGPU,
)


def register():
    # Register classes from this module (do this first to ensure preferences are available)
    for cls in classes:
        bpy.utils.register_class(cls)

    # Print debug info about preferences
    try:
        prefs = bpy.context.preferences.addons.get(__package__)
        if prefs:
            print(f"Addon preferences registered successfully: {prefs}")
        else:
            print("WARNING: Addon preferences not found after registration!")
            print(f"Available addons: {', '.join(bpy.context.preferences.addons.keys())}")
    except Exception as e:
        print(f"Error accessing preferences: {e}")

    # Register modules
    bulk_scene_general.register()
    bulk_viewport_display.register()
    bulk_data_remap.register()
    bulk_path_management.register()
    ghost_buster.register()

    # Add keybind for Free GPU (global context)
    wm = bpy.context.window_manager
    kc = wm.keyconfigs.addon
    if kc:
        # Use the Screen keymap for global shortcuts that work everywhere
        km = kc.keymaps.new(name='Screen', space_type='EMPTY')
        kmi = km.keymap_items.new('bst.free_gpu', 'M', 'PRESS', ctrl=True, alt=True, shift=True)
        # Store keymap for cleanup
        addon_keymaps = getattr(bpy.types.Scene, '_bst_keymaps', [])
        addon_keymaps.append((km, kmi))
        bpy.types.Scene._bst_keymaps = addon_keymaps


def unregister():
    # Remove keybinds
    addon_keymaps = getattr(bpy.types.Scene, '_bst_keymaps', [])
    for km, kmi in addon_keymaps:
        try:
            km.keymap_items.remove(kmi)
        except Exception:
            pass
    addon_keymaps.clear()
    if hasattr(bpy.types.Scene, '_bst_keymaps'):
        delattr(bpy.types.Scene, '_bst_keymaps')

    # Unregister modules
    try:
        ghost_buster.unregister()
    except Exception:
        pass
    try:
        bulk_path_management.unregister()
    except Exception:
        pass
    try:
        bulk_data_remap.unregister()
    except Exception:
        pass
    try:
        bulk_viewport_display.unregister()
    except Exception:
        pass
    try:
        bulk_scene_general.unregister()
    except Exception:
        pass

    # Unregister classes from this module
    for cls in reversed(classes):
        try:
            bpy.utils.unregister_class(cls)
        except RuntimeError:
            pass


if __name__ == "__main__":
    register()
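`unregister()` above tears sub-modules down in roughly reverse registration order and wraps each call in its own `try/except`, so one broken module cannot block the rest of the cleanup. The pattern distilled into a stand-alone sketch (all names here are illustrative, not the add-on's API):

```python
def unregister_all(steps) -> list[str]:
    """Call teardown steps in reverse registration order;
    a failing step is recorded but must not stop the others."""
    errors = []
    for step in reversed(steps):
        try:
            step()
        except Exception as exc:
            errors.append(str(exc))
    return errors

order = []

def failing_step():
    raise RuntimeError("boom")

errors = unregister_all([lambda: order.append("a"), failing_step, lambda: order.append("c")])
print(order)   # ['c', 'a']  -- reverse order, failure isolated
print(errors)  # ['boom']
```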
@@ -0,0 +1,29 @@
schema_version = "1.0.0"

id = "rainclouds_bulk_scene_tools"
name = "Raincloud's Bulk Scene Tools"
tagline = "Bulk utilities for optimizing scene data"
version = "0.11.0"
type = "add-on"

maintainer = "RaincloudTheDragon <raincloudthedragon@gmail.com>"
license = ["GPL-3.0-or-later"]
blender_version_min = "4.5.0"

website = "https://github.com/RaincloudTheDragon/Rainys-Bulk-Scene-Tools"

tags = ["Scene", "Workflow", "Materials"]

[permissions]
files = "Read and write external resources referenced by scenes"

[build]
paths_exclude_pattern = [
  "__pycache__/",
  "*.pyc",
  ".git/",
  ".github/",
  "docs/",
  "tests/",
]
@@ -0,0 +1,147 @@
# v 0.10.0
- **AutoMat Extractor**
  - Added UDIM/tiled image detection so multi-tile textures are organized alongside standard images without errors. #8
  - Path builder now emits UDIM filename templates (e.g., `name.[UDIM].png`) plus per-tile targets (e.g., `name.1001.png`), preventing collisions during relocation.
  - Remapping helper sets tile-level `filepath` values and ensures directories exist before saving.
  - Saving routine attempts whole-image writes first, then falls back to per-tile saves via the Image Editor context, with summary logs noting UDIM sets processed.
- **Viewport Colors**
  - Added a Refresh Material Previews button that clears thumbnails, assigns each material to a temporary preview mesh, and forces `preview_ensure()` so stubborn viewport colors now reliably pick up thumbnail data. #7

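The per-tile targets described in the v 0.10.0 notes can be derived from the template by substituting each tile number. A sketch using the changelog's bracketed token (note that Blender's own UDIM token is `<UDIM>`; `udim_targets` is illustrative, not the add-on's actual helper):

```python
def udim_targets(template: str, tiles: list[int]) -> list[str]:
    """Expand a UDIM filename template into concrete per-tile filenames."""
    return [template.replace("[UDIM]", str(tile)) for tile in tiles]

print(udim_targets("skin_diffuse.[UDIM].png", [1001, 1002, 1011]))
# ['skin_diffuse.1001.png', 'skin_diffuse.1002.png', 'skin_diffuse.1011.png']
```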
# v 0.9.1
- **Convert Relations to Constraint**
  - Bugfix: Now converts bone parenting as intended

# v 0.9.0
- **Convert Relations to Constraint**: Operator in the Animation Data section that converts regular parenting relationships to Child Of constraints for selected objects, maintaining world position and transform hierarchy
- Bugfix: Adapted an old operator that wasn't drawing due to using the wrong icon string name.

# v 0.8.1
- Delete Single Keyframe Actions: fixed a bug caused by not ignoring linked files

# v 0.8.0

## New Features
- **Delete Single Keyframe Actions**: New operator to remove unwanted animation actions (no keyframes, a single keyframe, or all keyframes on the same frame)
- **Find Material Users**: New operator with a native material selector interface that displays detailed material usage analysis in a popup dialog, showing:
  - Object users with material slots
  - Node tree references
  - Material node tree usage
  - Blender's internal user count and fake user status
- **Remove Unused Material Slots**: New operator to clean up unused material slots from all mesh objects in the scene
- **Enhanced Bulk Scene General Panel**: Reorganized panel with new sections:
  - Materials section containing material analysis and cleanup tools
  - Animation Data section for keyframe/action management
  - All new operators integrated with consistent UI and project formatting standards

## Fixes
- PathMan
  - AutoMat summary no longer gives an invoke error
  - Fixed a timing/cancellation error when cancelling the Rename Flat Colors operation
  - Pack Files operator no longer throws an AttributeError for `is_generated` (now uses `img.source != 'GENERATED'`)
  - Pack Files operator now properly skips special Blender images like "Render Result" and "Viewer Node" that can't be packed
- General
  - Removed a debug print statement that showed "Subdivision Surface modifiers removed from all objects" on every addon load

# v 0.7.1

## Ghost Buster Enhancements

### Added
- **Low Priority Ghost Detection**: New option to delete objects not in scenes with no legitimate use and fewer than 2 users
- **Smart Instance Collection Detection**: Ghost Buster now properly detects when objects are used by instance collections in scenes
- **Enhanced Legitimacy Checks**: Improved detection of objects with valid uses outside scenes (constraints, modifiers, and particle systems only count if the using object is in a scene)

### Improved
- **More Accurate Ghost Detection**: Eliminated false positives by checking whether instance collection targets are actually being used by scene objects
- **Better Classification**: Objects are now classified as "Legitimate", "Ghosts (users >= 2)", or "Low Priority (users < 2)" with clearer reasoning
- Cleaned UI

### Technical Changes
- Added `is_object_used_by_scene_instance_collections()` function for precise instance collection detection
- Enhanced `is_object_legitimate_outside_scene()` with scene-aware checks for modifiers, constraints, and particle systems
- Updated ghost analysis and removal logic to use more precise categorization
- Added scene property `ghost_buster_delete_low_priority` for user preference storage

# v 0.7.0

## New: Ghost Detection System
- **Universal Object Analysis**: Expanded ghost detection from CC-objects only to all object types (meshes, empties, curves, etc.)
- **Enhanced Safety Framework**: Added comprehensive protection for legitimate objects outside scenes:
  - WGT rig widgets (`WGT-*` objects)
  - Modifier targets (curve modifiers, constraints)
  - Constraint targets and references
  - Particle system objects
  - Collection instance objects (linked collection references)
- **Smart Classification**: Objects not in scenes are now categorized as:
  - `LEGITIMATE`: Has a valid use outside scenes (protected)
  - `LOW PRIORITY`: Only a collection reference (preserved)
  - `GHOST`: Multiple users but not in scenes (removed)
- **Conservative Cleanup Logic**: Only removes objects with 2+ users that have no legitimate purpose
- **Updated UI**: The Ghost Detector popup now shows "Ghost Objects Analysis" with enhanced categorization and object type details
- **Improved Safety**: All linked/library content is automatically protected from ghost detection

# v 0.6.1

## Bug Fixes
- **Fixed flat color detection**: Redesigned the algorithm with exact pixel matching and smart sampling
- **Fixed AutoMat Extractor**: Now properly organizes images by material instead of dumping everything into the common folder
- **Fixed viewport color setting**: Resolved context restriction errors with deferred color application
- **Fixed timer performance**: Reduced timer frequency and improved cancellation reliability
- **Enhanced debugging**: Added comprehensive console reporting for all bulk operations

## Improvements
- Better performance with optimized sampling
- More reliable cancellation system
- Context-safe operations that don't interfere with Blender's drawing state

# v 0.6.0

- **Enhancement: Progress Reporting & Cancellation**
  - Some of PathMan's operators are pretty resource-intensive. Due to Python's GIL, I haven't found a way to run some of them more efficiently. Without the console window you're flying blind, so I've integrated a loading bar with progress reporting for the following operators:
    - Flat Color Texture Renamer
    - Remove Extensions
    - Save All to Image Paths
    - Remap Selected
    - Rename by Material
    - AutoMat Extractor

# v 0.5.1

- **Enhanced AutoMat Extractor:**
  - Added a crucial safety check to prevent textures from overwriting each other if they resolve to the same filename (e.g., `Image.001.png` and `Image.002.png` both becoming `Image.png`).
  - The operator now correctly sanitizes names with numerical suffixes before saving.
  - A new summary dialog now appears after the operation, reporting how many files were extracted successfully and listing any files that were skipped due to naming conflicts.
  - Added a user preference to control the location of the `common` folder, allowing it to be placed either inside or outside the blend file's specific texture folder. A checkbox for this setting was added to the UI.
- **Improved Suffix Handling:**
  - The "Rename by Material" tool now correctly preserves spaces in packed texture names (e.g., `Flow Pack` instead of `FlowPack`).
  - Added support for underscore-separated packed texture names (e.g., `flow_pack`).
- **Bug Fixes:**
  - Resolved multiple `AttributeError` and `TypeError` exceptions that occurred due to incorrect addon name lookups and invalid icon names, making the UI and addon registration more robust.

# v 0.5.0

- **Integrated Scene General: Free GPU VRAM**
- **Integrated PathMan: Automatic Material Extractor**
- **Integrated PathMan: Rename Image Textures by Material**: Added comprehensive texture suffix recognition
  - Recognizes many Character Creator suffixes
  - Recognizes most standard material suffixes
  - Images with unrecognized suffixes are skipped instead of renamed, preventing unintended modifications
  - Enhanced logging: unrecognized-suffix images are listed separately for easy identification
- **UI Improvements**:
  - Rearranged workflow layout: Make Paths Relative/Absolute moved to the main workflow section
  - Remap Selected moved under the path preview for better workflow progression
  - Rename by Material and AutoMat Extractor repositioned after Remap Selected
  - Added Autopack toggle at the beginning of the workflow sections (both Node Editor and 3D Viewport)
  - Consolidated draw functions: the Node Editor panel now serves as the master template for both panels

# v 0.4.1

- Fixed a traceback error that caused the remap section to fail to draw its buttons

# v 0.4.0

Overhaul! Added the new Scene General panel, plus major enhancements to all panels and functions.

# v 0.3.0

- Added image path remapping for unpacked images, keeping them organized.
@@ -0,0 +1,540 @@
|
||||
import bpy
|
||||
import os
|
||||
import re
|
||||
from ..panels.bulk_path_management import (
|
||||
get_image_extension,
|
||||
bulk_remap_paths,
|
||||
set_image_paths,
|
||||
ensure_directory_for_path,
|
||||
)
|
||||
|
||||
class AUTOMAT_OT_summary_dialog(bpy.types.Operator):
|
||||
"""Show AutoMat Extractor operation summary"""
|
||||
bl_idname = "bst.automat_summary_dialog"
|
||||
bl_label = "AutoMat Extractor Summary"
|
||||
bl_options = {'REGISTER', 'INTERNAL'}
|
||||
|
||||
# Properties to store summary data
|
||||
total_selected: bpy.props.IntProperty(default=0)
|
||||
success_count: bpy.props.IntProperty(default=0)
|
||||
overwrite_skipped_count: bpy.props.IntProperty(default=0)
|
||||
failed_remap_count: bpy.props.IntProperty(default=0)
|
||||
|
||||
overwrite_details: bpy.props.StringProperty(default="")
|
||||
failed_remap_details: bpy.props.StringProperty(default="")
|
||||
|
||||
def draw(self, context):
|
||||
layout = self.layout
|
||||
|
||||
layout.label(text="AutoMat Extractor - Summary", icon='INFO')
|
||||
layout.separator()
|
||||
|
||||
box = layout.box()
|
||||
col = box.column(align=True)
|
||||
col.label(text=f"Total selected images: {self.total_selected}")
|
||||
col.label(text=f"Successfully extracted: {self.success_count}", icon='CHECKMARK')
|
||||
|
||||
if self.overwrite_skipped_count > 0:
|
||||
col.label(text=f"Skipped to prevent overwrite: {self.overwrite_skipped_count}", icon='ERROR')
|
||||
if self.failed_remap_count > 0:
|
||||
col.label(text=f"Failed to remap (path issue): {self.failed_remap_count}", icon='ERROR')
|
||||
|
||||
if self.overwrite_details:
|
||||
layout.separator()
|
||||
box = layout.box()
|
||||
box.label(text="Overwrite Conflicts (Skipped):", icon='FILE_TEXT')
|
||||
for line in self.overwrite_details.split('\n'):
|
||||
if line.strip():
|
||||
box.label(text=line)
|
||||
|
||||
if self.failed_remap_details:
|
||||
layout.separator()
|
||||
box = layout.box()
|
||||
box.label(text="Failed Remaps:", icon='FILE_TEXT')
|
||||
for line in self.failed_remap_details.split('\n'):
|
||||
if line.strip():
|
||||
box.label(text=line)
|
||||
|
||||
def execute(self, context):
|
||||
return {'FINISHED'}
|
||||
|
||||
def invoke(self, context, event):
|
||||
return context.window_manager.invoke_popup(self, width=500)
|
||||
|
||||
class AutoMatExtractor(bpy.types.Operator):
    bl_idname = "bst.automatextractor"
    bl_label = "AutoMatExtractor"
    bl_description = "Pack selected images and extract them with organized paths by blend file and material"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        # Get addon preferences (guard against a missing registration)
        addon_name = __package__.split('.')[0]
        addon = context.preferences.addons.get(addon_name)
        if addon is None:
            self.report({'ERROR'}, f"Add-on preferences not found for '{addon_name}'")
            return {'CANCELLED'}
        prefs = addon.preferences
        common_outside = prefs.automat_common_outside_blend

        # Get selected images
        selected_images = [img for img in bpy.data.images if hasattr(img, "bst_selected") and img.bst_selected]

        if not selected_images:
            self.report({'WARNING'}, "No images selected for extraction")
            return {'CANCELLED'}

        # Set up progress tracking
        props = context.scene.bst_path_props
        props.is_operation_running = True
        props.operation_progress = 0.0
        props.operation_status = f"Preparing AutoMat extraction for {len(selected_images)} images..."

        # Store data for timer processing
        self.selected_images = selected_images
        self.common_outside = common_outside
        self.current_step = 0
        self.current_index = 0
        self.packed_count = 0
        self.success_count = 0
        self.overwrite_skipped = []
        self.failed_list = []
        self.path_mapping = {}
        self.udim_summary = {
            "found": 0,
            "saved": 0,
        }

        # Start timer for processing
        bpy.app.timers.register(self._process_step)

        return {'FINISHED'}

    def _process_step(self):
        """Process AutoMat extraction in steps to avoid blocking the UI"""
        props = bpy.context.scene.bst_path_props

        # Check for cancellation
        if props.cancel_operation:
            props.is_operation_running = False
            props.operation_progress = 0.0
            props.operation_status = "Operation cancelled"
            props.cancel_operation = False
            return None

        if self.current_step == 0:
            # Step 1: Pack images
            if self.current_index >= len(self.selected_images):
                # Packing complete, move to next step
                self.current_step = 1
                self.current_index = 0
                props.operation_status = "Removing extensions from image names..."
                props.operation_progress = 25.0
                return 0.01

            # Pack current image
            img = self.selected_images[self.current_index]
            props.operation_status = f"Packing {img.name}..."

            if not img.packed_file:
                try:
                    img.pack()
                    self.packed_count += 1
                except Exception:
                    # Continue even if packing fails
                    pass

            self.current_index += 1
            progress = (self.current_index / len(self.selected_images)) * 25.0
            props.operation_progress = progress

        elif self.current_step == 1:
            # Step 2: Remove extensions (this is a quick operation)
            try:
                bpy.ops.bst.remove_extensions()
            except Exception:
                pass  # Continue even if this fails

            self.current_step = 2
            self.current_index = 0
            props.operation_status = "Analyzing material usage..."
            props.operation_progress = 30.0

        elif self.current_step == 2:
            # Step 3: Organize images by material usage
            if self.current_index >= len(self.selected_images):
                # Analysis complete, move to path building
                self.current_step = 3
                self.current_index = 0
                props.operation_status = "Building path mapping..."
                props.operation_progress = 50.0
                return 0.01

            # Get material mapping for all selected images
            if self.current_index == 0:
                self.material_mapping = self.get_image_material_mapping(self.selected_images)
                print(f"DEBUG: Material mapping created for {len(self.selected_images)} images")

            # This step is quick, just mark progress
            self.current_index += 1
            progress = 30.0 + (self.current_index / len(self.selected_images)) * 20.0
            props.operation_progress = progress

        elif self.current_step == 3:
            # Step 4: Build path mapping
            if self.current_index >= len(self.selected_images):
                # Path building complete, move to remapping
                self.current_step = 4
                self.current_index = 0
                props.operation_status = "Remapping image paths..."
                props.operation_progress = 70.0
                return 0.01

            # Build path for current image
            img = self.selected_images[self.current_index]
            props.operation_status = f"Building path for {img.name}..."

            # Get blend file name
            blend_name = bpy.path.basename(bpy.data.filepath)
            if blend_name:
                blend_name = os.path.splitext(blend_name)[0]
            else:
                blend_name = "untitled"
            blend_name = self.sanitize_filename(blend_name)

            # Determine common path
            if self.common_outside:
                common_path_part = "common"
            else:
                common_path_part = f"{blend_name}\\common"

            # Get extension and build path
            extension = get_image_extension(img)
            sanitized_base_name = self.sanitize_filename(img.name)
            filename = f"{sanitized_base_name}{extension}"

            if img.name.startswith('#'):
                # Flat colors go to FlatColors subfolder
                base_folder = f"//textures\\{common_path_part}\\FlatColors"
            else:
                # Check material usage for this image
                materials_using_image = self.material_mapping.get(img.name, [])

                if not materials_using_image:
                    # No materials found, put in common folder
                    base_folder = f"//textures\\{common_path_part}"
                    print(f"DEBUG: {img.name} - No materials found, using common folder")
                elif len(materials_using_image) == 1:
                    # Used by exactly one material, organize by material name
                    material_name = self.sanitize_filename(materials_using_image[0])
                    base_folder = f"//textures\\{blend_name}\\{material_name}"
                    print(f"DEBUG: {img.name} - Used by {material_name}, organizing by material")
                else:
                    # Used by multiple materials, put in common folder
                    base_folder = f"//textures\\{common_path_part}"
                    print(f"DEBUG: {img.name} - Used by multiple materials: {materials_using_image}, using common folder")

            is_udim = self.is_udim_image(img)
            if is_udim:
                udim_mapping = self.build_udim_mapping(base_folder, sanitized_base_name, extension, img)
                self.path_mapping[img.name] = udim_mapping
                self.udim_summary["found"] += 1
                print(f"DEBUG: {img.name} - UDIM detected with {len(udim_mapping.get('tiles', {}))} tiles")
            else:
                # Non-UDIM images map straight to the generated filename
                path = f"{base_folder}\\{filename}"
                self.path_mapping[img.name] = path

            self.current_index += 1
            progress = 50.0 + (self.current_index / len(self.selected_images)) * 20.0
            props.operation_progress = progress

        elif self.current_step == 4:
            # Step 5: Remap paths
            if self.current_index >= len(self.path_mapping):
                # Remapping complete, move to saving
                self.current_step = 5
                self.current_index = 0
                props.operation_status = "Saving images to new locations..."
                props.operation_progress = 85.0
                return 0.01

            # Remap current image
            img_name = list(self.path_mapping.keys())[self.current_index]
            mapping_entry = self.path_mapping[img_name]
            props.operation_status = f"Remapping {img_name}..."

            if isinstance(mapping_entry, dict) and mapping_entry.get("udim"):
                success = set_image_paths(
                    img_name,
                    mapping_entry.get("template", ""),
                    tile_paths=mapping_entry.get("tiles", {})
                )
            else:
                success = set_image_paths(img_name, mapping_entry)
            if success:
                self.success_count += 1
            else:
                self.failed_list.append(img_name)

            self.current_index += 1
            progress = 70.0 + (self.current_index / len(self.path_mapping)) * 15.0
            props.operation_progress = progress

        elif self.current_step == 5:
            # Step 6: Save images
            if self.current_index >= len(self.selected_images):
                # Operation complete
                props.is_operation_running = False
                props.operation_progress = 100.0
                props.operation_status = f"Completed! Extracted {self.success_count} images{f', {len(self.failed_list)} failed' if self.failed_list else ''}"

                # Show summary dialog
                self.show_summary_dialog(
                    bpy.context,
                    total_selected=len(self.selected_images),
                    success_count=self.success_count,
                    overwrite_skipped_list=self.overwrite_skipped,
                    failed_remap_list=self.failed_list
                )

                # Console summary
                print("\n=== AUTOMAT EXTRACTION SUMMARY ===")
                print(f"Total images processed: {len(self.selected_images)}")
                print(f"Successfully extracted: {self.success_count}")
                print(f"Failed to remap: {len(self.failed_list)}")

                # Show organization breakdown
                material_organized = 0
                common_organized = 0
                flat_colors = 0

                for img_name, path in self.path_mapping.items():
                    current_path = path["template"] if isinstance(path, dict) else path
                    if "FlatColors" in current_path:
                        flat_colors += 1
                    elif "common" in current_path:
                        common_organized += 1
                    else:
                        material_organized += 1

                print("\nOrganization breakdown:")
                print(f"  Material-specific folders: {material_organized}")
                print(f"  Common folder: {common_organized}")
                print(f"  Flat colors: {flat_colors}")

                # Show material organization details
                if material_organized > 0:
                    print("\nMaterial organization details:")
                    material_folders = {}
                    for img_name, path in self.path_mapping.items():
                        # UDIM entries are dicts; skip them here
                        if isinstance(path, dict):
                            continue
                        if "FlatColors" not in path and "common" not in path:
                            # Extract material name from path
                            path_parts = path.split('\\')
                            if len(path_parts) >= 3:
                                material_name = path_parts[-2]
                                if material_name not in material_folders:
                                    material_folders[material_name] = []
                                material_folders[material_name].append(img_name)

                    for material_name, images in material_folders.items():
                        print(f"  {material_name}: {len(images)} images")

                print("=====================================\n")
                if self.udim_summary["found"]:
                    print(f"UDIM images processed: {self.udim_summary['found']} (saved successfully: {self.udim_summary['saved']})")

                # Force UI update
                for area in bpy.context.screen.areas:
                    area.tag_redraw()

                return None

            # Save current image
            img = self.selected_images[self.current_index]
            props.operation_status = f"Saving {img.name}..."

            mapping_entry = self.path_mapping.get(img.name)
            if isinstance(mapping_entry, dict) and mapping_entry.get("udim"):
                self.save_udim_image(img, mapping_entry)
            else:
                self.save_standard_image(img)

            self.current_index += 1
            progress = 85.0 + (self.current_index / len(self.selected_images)) * 15.0
            props.operation_progress = progress

        # Force UI update
        for area in bpy.context.screen.areas:
            area.tag_redraw()

        # Continue processing
        return 0.01

    def show_summary_dialog(self, context, total_selected, success_count, overwrite_skipped_list, failed_remap_list):
        """Show a popup dialog with the extraction summary"""
        overwrite_details = ""
        for name, path in overwrite_skipped_list:
            overwrite_details += f"'{name}' -> '{path}'\n"

        # failed_remap_list holds plain image names (see _process_step)
        failed_remap_details = ""
        for name in failed_remap_list:
            failed_remap_details += f"'{name}'\n"

        bpy.ops.bst.automat_summary_dialog('INVOKE_DEFAULT',
            total_selected=total_selected,
            success_count=success_count,
            overwrite_skipped_count=len(overwrite_skipped_list),
            failed_remap_count=len(failed_remap_list),
            overwrite_details=overwrite_details.strip(),
            failed_remap_details=failed_remap_details.strip()
        )

    def sanitize_filename(self, filename):
        """Sanitize filename/folder name for filesystem compatibility"""
        # First, remove potential file extensions, including numerical ones like .001
        base_name = re.sub(r'\.\d{3}$', '', filename)  # Remove .001, .002 etc.
        base_name = os.path.splitext(base_name)[0]  # Remove standard extensions

        # Remove or replace invalid characters for Windows/Mac/Linux
        sanitized = re.sub(r'[<>:"/\\|?*]', '_', base_name)
        # Remove leading/trailing spaces and dots
        sanitized = sanitized.strip(' .')
        # Ensure it's not empty
        if not sanitized:
            sanitized = "unnamed"
        return sanitized

    def get_image_material_mapping(self, images):
        """Create mapping of image names to materials that use them"""
        image_to_materials = {}

        # Initialize mapping
        for img in images:
            image_to_materials[img.name] = []

        # Check all materials for image usage
        for material in bpy.data.materials:
            if not material.use_nodes:
                continue

            material_images = set()

            # Find all image texture nodes in this material
            for node in material.node_tree.nodes:
                if node.type == 'TEX_IMAGE' and node.image:
                    material_images.add(node.image.name)

            # Add this material to each image's usage list
            for img_name in material_images:
                if img_name in image_to_materials:
                    image_to_materials[img_name].append(material.name)

        return image_to_materials

    def is_udim_image(self, image):
        """Return True when the image contains UDIM/tiled data"""
        has_tiles = hasattr(image, "source") and image.source == 'TILED'
        tiles_attr = getattr(image, "tiles", None)
        if tiles_attr and len(tiles_attr) > 1:
            return True
        return has_tiles

    def build_udim_mapping(self, base_folder, base_name, extension, image):
        """Create a path mapping structure for UDIM images"""
        udim_token = "<UDIM>"
        template_filename = f"{base_name}.{udim_token}{extension}"
        template_path = f"{base_folder}\\{template_filename}"
        tile_paths = {}

        tiles = getattr(image, "tiles", [])
        for tile in tiles:
            tile_number = str(getattr(tile, "number", "1001"))
            tile_filename = f"{base_name}.{tile_number}{extension}"
            tile_paths[tile_number] = f"{base_folder}\\{tile_filename}"

        return {
            "udim": True,
            "template": template_path,
            "tiles": tile_paths,
        }

    def save_udim_image(self, image, mapping):
        """Attempt to save each tile for a UDIM image"""
        success = False
        try:
            image.save()
            success = True
        except Exception as e:
            print(f"DEBUG: UDIM bulk save failed for {image.name}: {e}")
            success = self._save_udim_tiles_individually(image, mapping)

        if success:
            self.udim_summary["saved"] += 1
        return success

    def save_standard_image(self, image):
        """Save a non-UDIM image safely"""
        try:
            if hasattr(image, 'save'):
                image.save()
            return True
        except Exception as e:
            print(f"DEBUG: Failed to save image {image.name}: {e}")
            return False

    def _save_udim_tiles_individually(self, image, mapping):
        """Fallback saving routine when image.save() fails on UDIMs"""
        tile_paths = mapping.get("tiles", {})
        any_saved = False

        for tile in getattr(image, "tiles", []):
            tile_number = str(getattr(tile, "number", "1001"))
            target_path = tile_paths.get(tile_number)
            if not target_path:
                continue
            try:
                ensure_directory_for_path(target_path)
                self._save_tile_via_image_editor(image, tile_number, target_path)
                any_saved = True
            except Exception as e:
                print(f"DEBUG: Failed to save UDIM tile {tile_number} for {image.name}: {e}")

        return any_saved

    def _save_tile_via_image_editor(self, image, tile_number, filepath):
        """Use an IMAGE_EDITOR override to save a specific tile"""
        # Try to find an existing image editor to reuse Blender UI context
        for area in bpy.context.screen.areas:
            if area.type != 'IMAGE_EDITOR':
                continue
            region = next((r for r in area.regions if r.type == 'WINDOW'), None)
            if region is None:
                continue
            space = area.spaces.active
            space.image = image
            if hasattr(space, "image_user"):
                space.image_user.tile = int(tile_number)
            # Blender 4.x removed the positional context-override dict;
            # use temp_override() and save_as, which accepts a filepath
            with bpy.context.temp_override(area=area, region=region, space_data=space):
                bpy.ops.image.save_as(filepath=filepath)
            return
        # Fallback: attempt to set filepath and invoke save without override
        image.filepath = filepath
        image.save()

# Must register the new dialog class as well
classes = (
    AUTOMAT_OT_summary_dialog,
    AutoMatExtractor,
)


def register():
    for cls in classes:
        bpy.utils.register_class(cls)


def unregister():
    for cls in reversed(classes):
        bpy.utils.unregister_class(cls)

@@ -0,0 +1,14 @@
import bpy


class BST_FreeGPU(bpy.types.Operator):
    bl_idname = "bst.free_gpu"
    bl_label = "Free VRAM"
    bl_description = "Deallocate all material images from VRAM"

    def execute(self, context):
        for mat in bpy.data.materials:
            if mat.use_nodes:
                for node in mat.node_tree.nodes:
                    if hasattr(node, 'image') and node.image:
                        node.image.gl_free()
        return {"FINISHED"}
@@ -0,0 +1,29 @@
import bpy


class NoSubdiv(bpy.types.Operator):
    """Remove all subdivision surface modifiers from objects"""
    bl_idname = "bst.no_subdiv"
    bl_label = "No Subdiv"
    bl_options = {'REGISTER', 'UNDO'}

    only_selected: bpy.props.BoolProperty(
        name="Only Selected Objects",
        description="Apply only to selected objects",
        default=True
    )

    def execute(self, context):
        # Choose objects based on the property
        if self.only_selected:
            objects = context.selected_objects
        else:
            objects = bpy.data.objects
        removed_count = 0
        for obj in objects:
            if obj.modifiers:
                subdiv_mods = [mod for mod in obj.modifiers if mod.type == 'SUBSURF']
                for mod in subdiv_mods:
                    obj.modifiers.remove(mod)
                    removed_count += 1
        self.report({'INFO'}, f"Subdivision Surface modifiers removed from {'selected' if self.only_selected else 'all'} objects. ({removed_count} removed)")
        return {'FINISHED'}
@@ -0,0 +1,513 @@
import bpy
import re


class RENAME_OT_summary_dialog(bpy.types.Operator):
    """Show rename operation summary"""
    bl_idname = "bst.rename_summary_dialog"
    bl_label = "Rename Summary"
    bl_options = {'REGISTER', 'INTERNAL'}

    # Properties to store summary data
    total_selected: bpy.props.IntProperty(default=0)
    renamed_count: bpy.props.IntProperty(default=0)
    shared_count: bpy.props.IntProperty(default=0)
    unused_count: bpy.props.IntProperty(default=0)
    cc3iid_count: bpy.props.IntProperty(default=0)
    flatcolor_count: bpy.props.IntProperty(default=0)
    already_correct_count: bpy.props.IntProperty(default=0)
    unrecognized_suffix_count: bpy.props.IntProperty(default=0)
    rename_details: bpy.props.StringProperty(default="")

    def draw(self, context):
        layout = self.layout

        # Title
        layout.label(text="Rename by Material - Summary", icon='INFO')
        layout.separator()

        # Statistics box
        box = layout.box()
        col = box.column(align=True)
        col.label(text=f"Total selected images: {self.total_selected}")
        col.label(text=f"Successfully renamed: {self.renamed_count}", icon='CHECKMARK')

        if self.already_correct_count > 0:
            col.label(text=f"Already correctly named: {self.already_correct_count}", icon='CHECKMARK')
        if self.shared_count > 0:
            col.label(text=f"Shared images skipped: {self.shared_count}", icon='RADIOBUT_OFF')
        if self.unused_count > 0:
            col.label(text=f"Unused images skipped: {self.unused_count}", icon='RADIOBUT_OFF')
        if self.cc3iid_count > 0:
            col.label(text=f"CC3 ID textures skipped: {self.cc3iid_count}", icon='RADIOBUT_OFF')
        if self.flatcolor_count > 0:
            col.label(text=f"Flat colors skipped: {self.flatcolor_count}", icon='RADIOBUT_OFF')
        if self.unrecognized_suffix_count > 0:
            col.label(text=f"Unrecognized suffixes skipped: {self.unrecognized_suffix_count}", icon='RADIOBUT_OFF')

        # Show detailed rename information if available
        if self.rename_details:
            layout.separator()
            box = layout.box()
            box.label(text="Renamed Images:", icon='FILE_TEXT')

            # Split the details by lines and show each one
            lines = self.rename_details.split('\n')
            for line in lines[:10]:  # Limit to first 10 to avoid overly long dialogs
                if line.strip():
                    box.label(text=line)

            if len(lines) > 10:
                box.label(text=f"... and {len(lines) - 10} more")

    def execute(self, context):
        return {'FINISHED'}

    def invoke(self, context, event):
        return context.window_manager.invoke_popup(self, width=500)


class Rename_images_by_mat(bpy.types.Operator):
    bl_idname = "bst.rename_images_by_mat"
    bl_label = "Rename Images by Material"
    bl_description = "Rename selected images based on their material usage, preserving texture type suffixes"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        # Get selected images
        selected_images = [img for img in bpy.data.images if hasattr(img, "bst_selected") and img.bst_selected]

        if not selected_images:
            self.report({'WARNING'}, "No images selected for renaming")
            return {'CANCELLED'}

        # Get image to material mapping
        image_to_materials = self.get_image_material_mapping(selected_images)

        renamed_count = 0
        shared_count = 0
        unused_count = 0
        cc3iid_count = 0  # Track CC3 ID textures
        flatcolor_count = 0  # Track flat color textures
        already_correct_count = 0  # Track images already correctly named
        unrecognized_suffix_count = 0  # Track images with unrecognized suffixes
        renamed_list = []  # Track renamed images for debug
        unrecognized_list = []  # Track images with unrecognized suffixes

        for img in selected_images:
            # Skip CC3 ID textures (ignore case)
            if img.name.lower().startswith('cc3iid'):
                cc3iid_count += 1
                print(f"DEBUG: Skipped CC3 ID texture: {img.name}")
                continue

            # Skip flat color textures (start with #)
            if img.name.startswith('#'):
                flatcolor_count += 1
                print(f"DEBUG: Skipped flat color texture: {img.name}")
                continue

            materials = image_to_materials.get(img.name, [])

            if len(materials) == 0:
                # Unused image - skip
                unused_count += 1
                print(f"DEBUG: Skipped unused image: {img.name}")
                continue
            elif len(materials) == 1:
                # Single material usage - check suffix recognition
                material_name = materials[0]
                suffix = self.extract_texture_suffix(img.name)
                original_name = img.name

                # Skip images with unrecognized suffixes (only if they have a potential suffix pattern)
                if suffix is None and self.has_potential_suffix(img.name):
                    unrecognized_suffix_count += 1
                    unrecognized_list.append(img.name)
                    print(f"DEBUG: Skipped image with unrecognized suffix: {img.name}")
                    continue

                if suffix:
                    # Capitalize the suffix properly
                    capitalized_suffix = self.capitalize_suffix(suffix)
                    expected_name = f"{material_name}_{capitalized_suffix}"
                else:
                    # No suffix detected, use material name only
                    expected_name = material_name

                # Check if the image is already correctly named
                if img.name == expected_name:
                    already_correct_count += 1
                    print(f"DEBUG: Skipped already correctly named: {img.name}")
                    continue

                # Avoid duplicate names
                new_name = self.ensure_unique_name(expected_name)

                img.name = new_name
                renamed_count += 1
                renamed_list.append((original_name, new_name, material_name, capitalized_suffix if suffix else None))
                print(f"DEBUG: Renamed '{original_name}' → '{new_name}' (Material: {material_name}, Suffix: {capitalized_suffix if suffix else 'none'})")
            else:
                # Shared across multiple materials - skip
                shared_count += 1
                print(f"DEBUG: Skipped shared image: {img.name} (used by {len(materials)} materials: {', '.join(materials[:3])}{'...' if len(materials) > 3 else ''})")

        # Console debug summary (keep for development)
        print("\n=== RENAME BY MATERIAL SUMMARY ===")
        print(f"Total selected: {len(selected_images)}")
        print(f"Renamed: {renamed_count}")
        print(f"Already correct (skipped): {already_correct_count}")
        print(f"Shared (skipped): {shared_count}")
        print(f"Unused (skipped): {unused_count}")
        print(f"CC3 ID textures (skipped): {cc3iid_count}")
        print(f"Flat colors (skipped): {flatcolor_count}")
        print(f"Unrecognized suffixes (skipped): {unrecognized_suffix_count}")

        if renamed_list:
            print("\nDetailed rename log:")
            for original, new, material, suffix in renamed_list:
                suffix_info = f" (suffix: {suffix})" if suffix else " (no suffix)"
                print(f"  '{original}' → '{new}' for material '{material}'{suffix_info}")

        if unrecognized_list:
            print("\nImages with unrecognized suffixes:")
            for img_name in unrecognized_list:
                print(f"  '{img_name}'")

        print("===================================\n")

        # Show popup summary dialog
        self.show_summary_dialog(context, len(selected_images), renamed_count, shared_count, unused_count, cc3iid_count, flatcolor_count, already_correct_count, unrecognized_suffix_count, renamed_list)

        return {'FINISHED'}

    def show_summary_dialog(self, context, total_selected, renamed_count, shared_count, unused_count, cc3iid_count, flatcolor_count, already_correct_count, unrecognized_suffix_count, renamed_list):
        """Show a popup dialog with the rename summary"""
        # Prepare detailed rename information for display
        details_text = ""
        for original, new, material, suffix in renamed_list:
            suffix_info = f" ({suffix})" if suffix else ""
            details_text += f"'{original}' → '{new}'{suffix_info}\n"

        # Invoke the summary dialog (operators return a status set, so no
        # assignment is needed)
        bpy.ops.bst.rename_summary_dialog('INVOKE_DEFAULT',
            total_selected=total_selected,
            renamed_count=renamed_count,
            shared_count=shared_count,
            unused_count=unused_count,
            cc3iid_count=cc3iid_count,
            flatcolor_count=flatcolor_count,
            already_correct_count=already_correct_count,
            unrecognized_suffix_count=unrecognized_suffix_count,
            rename_details=details_text.strip())

    def get_image_material_mapping(self, images):
        """Create mapping of image names to materials that use them"""
        image_to_materials = {}

        # Initialize mapping
        for img in images:
            image_to_materials[img.name] = []

        # Check all materials for image usage
        for material in bpy.data.materials:
            if not material.use_nodes:
                continue

            material_images = set()

            # Find all image texture nodes in this material
            for node in material.node_tree.nodes:
                if node.type == 'TEX_IMAGE' and node.image:
                    material_images.add(node.image.name)

            # Add this material to each image's usage list
            for img_name in material_images:
                if img_name in image_to_materials:
                    image_to_materials[img_name].append(material.name)

        return image_to_materials

def extract_texture_suffix(self, name):
|
||||
"""Extract texture type suffix from image name (case-insensitive)"""
|
||||
# Comprehensive list of texture suffixes
|
||||
suffixes = [
|
||||
# Standard PBR suffixes
|
||||
'diffuse', 'basecolor', 'base_color', 'albedo', 'color', 'col',
|
||||
'normal', 'norm', 'nrm', 'bump',
|
||||
'roughness', 'rough', 'rgh',
|
||||
'metallic', 'metal', 'mtl',
|
||||
'specular', 'spec', 'spc',
|
||||
'ao', 'ambient_occlusion', 'ambientocclusion', 'occlusion',
|
||||
'gradao',
|
||||
'height', 'displacement', 'disp', 'displace',
|
||||
'opacity', 'alpha', 'mask',
|
||||
'emission', 'emissive', 'emit',
|
||||
'subsurface', 'sss', 'transmission',
|
||||
|
||||
# Character Creator / iClone suffixes
|
||||
'base', 'diffusemap', 'normalmap', 'roughnessmap', 'metallicmap',
|
||||
'aomap', 'opacitymap', 'emissionmap', 'heightmap', 'displacementmap',
|
||||
'detail_normal', 'detail_diffuse', 'detail_mask',
|
||||
'blend', 'id', 'cavity', 'curvature', 'transmap', 'rgbamask', 'sssmap', 'micronmask',
|
||||
'bcbmap', 'mnaomask', 'specmask', 'micron', 'cfulcmask', 'nmuilmask', 'nbmap', 'enmask', 'blend_multiply',
|
||||
|
||||
# Hair-related compound suffixes (no spaces)
|
||||
'hairflowmap', 'hairidmap', 'hairrootmap', 'hairdepthmap',
|
||||
'flowmap', 'idmap', 'rootmap', 'depthmap',
|
||||
|
||||
# Wrinkle map suffixes (Character Creator)
|
||||
            'wrinkle_normal1', 'wrinkle_normal2', 'wrinkle_normal3',
            'wrinkle_roughness1', 'wrinkle_roughness2', 'wrinkle_roughness3',
            'wrinkle_diffuse1', 'wrinkle_diffuse2', 'wrinkle_diffuse3',
            'wrinkle_mask1', 'wrinkle_mask2', 'wrinkle_mask3',
            'wrinkle_flow1', 'wrinkle_flow2', 'wrinkle_flow3',

            # Character Creator pack suffixes (with spaces)
            'flow pack', 'msmnao pack', 'roughness pack', 'sstm pack',
            'flow_pack', 'msmnao_pack', 'roughness_pack', 'sstm_pack',

            # Hair-related multi-word suffixes (spaces)
            'hair flow map', 'hair id map', 'hair root map', 'hair depth map',
            'flow map', 'id map', 'root map', 'depth map',

            # Additional common variations
            'tex', 'map', 'img', 'texture',
            'd', 'n', 'r', 'm', 's', 'a', 'h', 'o', 'e'  # Single-letter abbreviations
        ]

        # Remove the file extension first
        base_name = re.sub(r'\.[^.]+$', '', name)

        # Sort suffixes by length (longest first) to prioritize more specific matches
        sorted_suffixes = sorted(suffixes, key=len, reverse=True)

        # First, try to find multi-word suffixes with spaces (case-insensitive)
        for suffix in sorted_suffixes:
            if ' ' in suffix:  # Multi-word suffix
                # Pattern: ends with space + suffix
                pattern = rf'\s+({re.escape(suffix)})$'
                match = re.search(pattern, base_name, re.IGNORECASE)
                if match:
                    return match.group(1).lower()

                # Pattern: ends with suffix (no space separator, but exact match)
                if base_name.lower().endswith(suffix.lower()) and len(base_name) > len(suffix):
                    # Check if there's a word boundary before the suffix
                    prefix_end = len(base_name) - len(suffix)
                    if prefix_end > 0 and base_name[prefix_end - 1] in ' _-':
                        return suffix.lower()

        # Then try single-word suffixes with traditional separators
        for suffix in sorted_suffixes:
            if ' ' not in suffix:  # Single-word suffix
                # Pattern: ends with _suffix, -suffix, or .suffix
                pattern = rf'[._-]({re.escape(suffix)})$'
                match = re.search(pattern, base_name, re.IGNORECASE)
                if match:
                    return match.group(1).lower()

        # Check for numeric suffixes (like _01, _02, etc.)
        numeric_match = re.search(r'[._-](\d+)$', base_name)
        if numeric_match:
            return numeric_match.group(1)

        return None
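The single-word branch of the matcher can be exercised standalone. A minimal sketch with a trimmed, illustrative suffix list (the real list above is much longer):

```python
import re

# Trimmed, illustrative suffix list.
SUFFIXES = ['normal', 'normalmap', 'rough', 'roughness']

def extract_suffix(name):
    base = re.sub(r'\.[^.]+$', '', name)  # drop the file extension
    # Longest-first ordering so 'normalmap' wins over 'normal', etc.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        match = re.search(rf'[._-]({re.escape(suffix)})$', base, re.IGNORECASE)
        if match:
            return match.group(1).lower()
    return None
```

The anchored `$` pattern plus longest-first sorting is what keeps `Skin_NormalMap.png` resolving to `normalmap` rather than stopping at a shorter candidate.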

    def ensure_unique_name(self, proposed_name):
        """Ensure the proposed name is unique among all images."""
        if proposed_name not in bpy.data.images:
            return proposed_name

        # If the name exists, add a numerical suffix (Blender-style .001, .002, ...)
        counter = 1
        while f"{proposed_name}.{counter:03d}" in bpy.data.images:
            counter += 1

        return f"{proposed_name}.{counter:03d}"
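The same Blender-style `.001` numbering can be sketched against a plain set instead of `bpy.data.images`:

```python
# Standalone sketch of the uniqueness scheme used above.
def ensure_unique_name(proposed, existing):
    if proposed not in existing:
        return proposed
    counter = 1
    while f"{proposed}.{counter:03d}" in existing:
        counter += 1
    return f"{proposed}.{counter:03d}"
```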

    def capitalize_suffix(self, suffix):
        """Properly capitalize texture type suffixes with correct formatting."""
        # Dictionary of common texture suffixes with proper capitalization
        suffix_mapping = {
            # Standard PBR suffixes
            'diffuse': 'Diffuse',
            'basecolor': 'BaseColor',
            'base_color': 'BaseColor',
            'albedo': 'Albedo',
            'color': 'Color',
            'col': 'Color',

            'normal': 'Normal',
            'norm': 'Normal',
            'nrm': 'Normal',
            'bump': 'Bump',

            'roughness': 'Roughness',
            'rough': 'Roughness',
            'rgh': 'Roughness',

            'metallic': 'Metallic',
            'metal': 'Metallic',
            'mtl': 'Metallic',

            'specular': 'Specular',
            'spec': 'Specular',
            'spc': 'Specular',

            'ao': 'AO',
            'ambient_occlusion': 'AmbientOcclusion',
            'ambientocclusion': 'AmbientOcclusion',
            'occlusion': 'Occlusion',
            'gradao': 'GradAO',

            'height': 'Height',
            'displacement': 'Displacement',
            'disp': 'Displacement',
            'displace': 'Displacement',

            'opacity': 'Opacity',
            'alpha': 'Alpha',
            'mask': 'Mask',
            'transmap': 'TransMap',

            'emission': 'Emission',
            'emissive': 'Emission',
            'emit': 'Emission',

            'subsurface': 'Subsurface',
            'sss': 'SSS',
            'transmission': 'Transmission',

            # Character Creator / iClone suffixes
            'base': 'Base',
            'diffusemap': 'DiffuseMap',
            'normalmap': 'NormalMap',
            'roughnessmap': 'RoughnessMap',
            'metallicmap': 'MetallicMap',
            'aomap': 'AOMap',
            'opacitymap': 'OpacityMap',
            'emissionmap': 'EmissionMap',
            'heightmap': 'HeightMap',
            'displacementmap': 'DisplacementMap',
            'detail_normal': 'DetailNormal',
            'detail_diffuse': 'DetailDiffuse',
            'detail_mask': 'DetailMask',
            'blend': 'Blend',
            'id': 'ID',
            'cavity': 'Cavity',
            'curvature': 'Curvature',
            'rgbamask': 'RGBAMask',
            'sssmap': 'SSSMap',
            'micronmask': 'MicroNMask',
            'bcbmap': 'BCBMap',
            'mnaomask': 'MNAOMask',
            'specmask': 'SpecMask',
            'micron': 'MicroN',
            'cfulcmask': 'CFULCMask',
            'nmuilmask': 'NMUILMask',
            'nbmap': 'NBMap',
            'enmask': 'ENMask',
            'blend_multiply': 'Blend_Multiply',

            # Hair-related compound suffixes (no spaces)
            'hairflowmap': 'HairFlowMap',
            'hairidmap': 'HairIDMap',
            'hairrootmap': 'HairRootMap',
            'hairdepthmap': 'HairDepthMap',
            'flowmap': 'FlowMap',
            'idmap': 'IDMap',
            'rootmap': 'RootMap',
            'depthmap': 'DepthMap',

            # Wrinkle map suffixes (Character Creator)
            'wrinkle_normal1': 'Wrinkle_Normal1',
            'wrinkle_normal2': 'Wrinkle_Normal2',
            'wrinkle_normal3': 'Wrinkle_Normal3',
            'wrinkle_roughness1': 'Wrinkle_Roughness1',
            'wrinkle_roughness2': 'Wrinkle_Roughness2',
            'wrinkle_roughness3': 'Wrinkle_Roughness3',
            'wrinkle_diffuse1': 'Wrinkle_Diffuse1',
            'wrinkle_diffuse2': 'Wrinkle_Diffuse2',
            'wrinkle_diffuse3': 'Wrinkle_Diffuse3',
            'wrinkle_mask1': 'Wrinkle_Mask1',
            'wrinkle_mask2': 'Wrinkle_Mask2',
            'wrinkle_mask3': 'Wrinkle_Mask3',
            'wrinkle_flow1': 'Wrinkle_Flow1',
            'wrinkle_flow2': 'Wrinkle_Flow2',
            'wrinkle_flow3': 'Wrinkle_Flow3',

            # Character Creator pack suffixes (with spaces)
            'flow pack': 'Flow Pack',
            'msmnao pack': 'MSMNAO Pack',
            'roughness pack': 'Roughness Pack',
            'sstm pack': 'SSTM Pack',
            'flow_pack': 'Flow_Pack',
            'msmnao_pack': 'MSMNAO_Pack',
            'roughness_pack': 'Roughness_Pack',
            'sstm_pack': 'SSTM_Pack',

            # Hair-related multi-word suffixes
            'hair flow map': 'HairFlowMap',
            'hair id map': 'HairIDMap',
            'hair root map': 'HairRootMap',
            'hair depth map': 'HairDepthMap',
            'flow map': 'FlowMap',
            'id map': 'IDMap',
            'root map': 'RootMap',
            'depth map': 'DepthMap',

            # Additional common variations
            'tex': 'Texture',
            'map': 'Map',
            'img': 'Image',
            'texture': 'Texture',

            # Single-letter abbreviations
            'd': 'Diffuse',
            'n': 'Normal',
            'r': 'Roughness',
            'm': 'Metallic',
            's': 'Specular',
            'a': 'Alpha',
            'h': 'Height',
            'o': 'Occlusion',
            'e': 'Emission'
        }

        # Get the proper capitalization from the mapping, or capitalize the first letter as a fallback
        return suffix_mapping.get(suffix.lower(), suffix.capitalize())
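The lookup-with-fallback at the end of the method is the whole contract: known suffixes get their canonical spelling, unknown ones are simply capitalized. A tiny sketch with an abbreviated mapping:

```python
# Abbreviated mapping; the real table above covers many more suffixes.
SUFFIX_MAPPING = {'ao': 'AO', 'basecolor': 'BaseColor', 'sss': 'SSS'}

def capitalize_suffix(suffix):
    return SUFFIX_MAPPING.get(suffix.lower(), suffix.capitalize())
```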

    def has_potential_suffix(self, name):
        """Check if the image name has a potential suffix pattern that we should try to recognize."""
        # Remove the file extension first
        base_name = re.sub(r'\.[^.]+$', '', name)

        # Check for common suffix patterns: _something, -something, .something, or space something
        suffix_patterns = [
            r'[._-][a-zA-Z0-9]+$',   # Underscore, dot, or dash followed by alphanumerics
            r'\s+[a-zA-Z0-9\s]+$',   # Space followed by alphanumerics (multi-word suffixes)
        ]

        for pattern in suffix_patterns:
            if re.search(pattern, base_name):
                return True

        return False


# Registration classes - both operators need to be registered
classes = (
    RENAME_OT_summary_dialog,
    Rename_images_by_mat,
)


def register():
    for cls in classes:
        bpy.utils.register_class(cls)


def unregister():
    for cls in reversed(classes):
        bpy.utils.unregister_class(cls)
+87
@@ -0,0 +1,87 @@
import bpy


class ConvertRelationsToConstraint(bpy.types.Operator):
    """Convert regular parenting to Child Of constraints for all selected objects"""
    bl_idname = "bst.convert_relations_to_constraint"
    bl_label = "Convert Relations to Constraint"
    bl_description = "Convert regular parenting relationships to Child Of constraints for selected objects"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        result = convert_relations_to_constraint()
        if result:
            self.report({'INFO'}, f"Converted {result} objects to Child Of constraints")
        else:
            self.report({'WARNING'}, "No objects with parents found in selection")
        return {'FINISHED'}


def convert_relations_to_constraint():
    """Convert regular parenting to Child Of constraints for all selected objects."""

    # Get all selected objects
    selected_objects = bpy.context.selected_objects

    if not selected_objects:
        print("No objects selected!")
        return 0

    print(f"Converting parenting to Child Of constraints for {len(selected_objects)} objects...")

    converted_count = 0

    for obj in selected_objects:
        # Skip objects without a parent
        if obj.parent is None:
            print(f"Skipping {obj.name}: No parent found")
            continue

        # Store bone information if parented to a bone
        parent_bone = obj.parent_bone if obj.parent_bone else None
        bone_info = f" (bone: {parent_bone})" if parent_bone else ""
        print(f"Processing {obj.name} -> {obj.parent.name}{bone_info}")

        # Store the original parent and the current world matrix
        original_parent = obj.parent
        world_matrix = obj.matrix_world.copy()

        # Remove the parent relationship
        obj.parent = None
        obj.parent_bone = ""  # Clear the bone reference

        # Add a Child Of constraint
        child_of_constraint = obj.constraints.new(type='CHILD_OF')
        child_of_constraint.name = f"Child_Of_{original_parent.name}"
        child_of_constraint.target = original_parent

        # Transfer bone information to the constraint subtarget
        if parent_bone:
            child_of_constraint.subtarget = parent_bone
            print(f"  ✓ Transferred bone target: {parent_bone}")

        # Set the inverse matrix to maintain world position.
        # This is equivalent to clicking "Set Inverse" in the UI.
        child_of_constraint.inverse_matrix = original_parent.matrix_world.inverted()

        # Restore the original world position
        obj.matrix_world = world_matrix

        # Make sure the constraint is fully active
        child_of_constraint.influence = 1.0

        converted_count += 1
        print(f"  ✓ Converted {obj.name} to Child Of constraint")

    print(f"\nConversion complete! Converted {converted_count} objects.")

    # Report remaining parented objects
    remaining_parented = [obj for obj in bpy.context.selected_objects if obj.parent is not None]
    if remaining_parented:
        print("\nObjects that still have parents (not converted):")
        for obj in remaining_parented:
            print(f"  - {obj.name} -> {obj.parent.name}")

    return converted_count


# Run the conversion when executed as a script
if __name__ == "__main__":
    convert_relations_to_constraint()
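The "Set Inverse" step works because the parent's world transform composed with its own inverse cancels out, leaving the object's stored world transform. A minimal sketch using plain translation tuples as stand-ins for 4x4 matrices (values illustrative):

```python
# Translations compose by addition, so they model matrix composition here.
def compose(a, b):
    return tuple(x + y for x, y in zip(a, b))

def invert(t):
    return tuple(-x for x in t)

parent_world = (2.0, 0.0, 1.0)
object_world = (5.0, -1.0, 3.0)

# A Child Of constraint evaluates roughly: parent_world @ inverse_matrix @ object_local.
inverse_matrix = invert(parent_world)
evaluated = compose(parent_world, compose(inverse_matrix, object_world))
```

`evaluated` equals `object_world`, which is why the object does not jump when the constraint takes over.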
+47
@@ -0,0 +1,47 @@
import math

import bpy
from bpy.types import Operator


class CreateOrthoCamera(Operator):
    """Create an orthographic camera with predefined settings"""
    bl_idname = "bst.create_ortho_camera"
    bl_label = "Create Ortho Camera"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        # Create a new camera
        bpy.ops.object.camera_add()
        camera = context.active_object

        # Set the camera to orthographic
        camera.data.type = 'ORTHO'
        camera.data.ortho_scale = 1.8  # Orthographic scale

        # Set the camera position
        camera.location = (0, -2, 1)  # x=0, y=-2m, z=1m

        # Rotate the camera 90 degrees around the X axis
        camera.rotation_euler = (math.radians(90), 0, 0)

        # Get or create the camera collection
        camera_collection = bpy.data.collections.get("Camera")
        if not camera_collection:
            camera_collection = bpy.data.collections.new("Camera")
            context.scene.collection.children.link(camera_collection)

        # Move the camera to the camera collection:
        # first unlink it from its current collections...
        for collection in camera.users_collection:
            collection.objects.unlink(camera)
        # ...then link it to the camera collection
        camera_collection.objects.link(camera)

        return {'FINISHED'}


def register():
    bpy.utils.register_class(CreateOrthoCamera)


def unregister():
    bpy.utils.unregister_class(CreateOrthoCamera)


if __name__ == "__main__":
    register()
+39
@@ -0,0 +1,39 @@
import bpy


class DeleteSingleKeyframeActions(bpy.types.Operator):
    """Delete actions that have no keyframes, only one keyframe, or all keyframes on the same frame"""
    bl_idname = "bst.delete_single_keyframe_actions"
    bl_label = "Delete Single Keyframe Actions"
    bl_description = "Delete actions with unwanted keyframe patterns (no keyframes, single keyframe, or all keyframes on the same frame)"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        actions = bpy.data.actions
        actions_to_delete = []

        for action in actions:
            keyframe_frames = set()
            total_keyframes = 0
            for fcurve in action.fcurves:
                for kf in fcurve.keyframe_points:
                    keyframe_frames.add(kf.co[0])
                    total_keyframes += 1

            # No keyframes at all, or every keyframe sits on the same frame
            # (which also covers the single-keyframe case)
            if total_keyframes == 0 or len(keyframe_frames) == 1:
                actions_to_delete.append(action)

        deleted_count = 0
        for action in actions_to_delete:
            print(f"Deleting action '{action.name}' (unwanted keyframe pattern)")
            bpy.data.actions.remove(action)
            deleted_count += 1

        self.report({'INFO'}, f"Deleted {deleted_count} unwanted actions")
        return {'FINISHED'}
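The classification rule is independent of `bpy` and can be sketched on plain frame lists (one list per F-curve, values illustrative):

```python
# An action is "unwanted" when it has no keyframes at all, or when every
# keyframe across all of its F-curves falls on the same frame.
def is_unwanted(frames_per_fcurve):
    frames = set()
    total = 0
    for fcurve_frames in frames_per_fcurve:
        for frame in fcurve_frames:
            frames.add(frame)
            total += 1
    return total == 0 or len(frames) == 1
```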
+157
@@ -0,0 +1,157 @@
import bpy


class MATERIAL_USERS_OT_summary_dialog(bpy.types.Operator):
    """Show material users analysis in a popup dialog"""
    bl_idname = "bst.material_users_summary_dialog"
    bl_label = "Material Users Summary"
    bl_options = {'REGISTER', 'INTERNAL'}

    # Properties to store summary data
    material_name: bpy.props.StringProperty(default="")
    users_count: bpy.props.IntProperty(default=0)
    fake_user: bpy.props.BoolProperty(default=False)
    object_users: bpy.props.StringProperty(default="")
    node_users: bpy.props.StringProperty(default="")
    material_node_users: bpy.props.StringProperty(default="")
    total_user_count: bpy.props.IntProperty(default=0)

    def draw(self, context):
        layout = self.layout

        # Title
        layout.label(text=f"Material Users - '{self.material_name}'", icon='MATERIAL')
        layout.separator()

        # Basic info box
        box = layout.box()
        col = box.column(align=True)
        col.label(text=f"Blender Users Count: {self.users_count}")
        col.label(text=f"Fake User: {'Yes' if self.fake_user else 'No'}")
        col.label(text=f"Total Found Users: {self.total_user_count}")

        layout.separator()

        # Object users section
        if self.object_users:
            layout.label(text="Object Users:", icon='OBJECT_DATA')
            objects_box = layout.box()
            objects_col = objects_box.column(align=True)
            for obj_name in self.object_users.split('|'):
                if obj_name.strip():
                    objects_col.label(text=f"• {obj_name}", icon='RIGHTARROW_THIN')
        else:
            layout.label(text="Object Users: None", icon='OBJECT_DATA')

        # Node tree users section
        if self.node_users:
            layout.separator()
            layout.label(text="Node Tree Users:", icon='NODETREE')
            nodes_box = layout.box()
            nodes_col = nodes_box.column(align=True)
            for node_ref in self.node_users.split('|'):
                if node_ref.strip():
                    nodes_col.label(text=f"• {node_ref}", icon='RIGHTARROW_THIN')

        # Material node tree users section
        if self.material_node_users:
            layout.separator()
            layout.label(text="Material Node Tree Users:", icon='MATERIAL')
            mat_nodes_box = layout.box()
            mat_nodes_col = mat_nodes_box.column(align=True)
            for mat_node_ref in self.material_node_users.split('|'):
                if mat_node_ref.strip():
                    mat_nodes_col.label(text=f"• {mat_node_ref}", icon='RIGHTARROW_THIN')

        layout.separator()

    def execute(self, context):
        return {'FINISHED'}

    def invoke(self, context, event):
        return context.window_manager.invoke_popup(self, width=500)


class FindMaterialUsers(bpy.types.Operator):
    """Find all users of a specified material and display detailed information"""
    bl_idname = "bst.find_material_users"
    bl_label = "Find Material Users"
    bl_description = "Find and display all users of a specified material"
    bl_options = {'REGISTER'}

    material_name: bpy.props.StringProperty(
        name="Material",
        description="Name of the material to analyze",
        default="",
    )

    def draw(self, context):
        layout = self.layout

        # Set the material if we have a name
        if self.material_name and self.material_name in bpy.data.materials:
            context.scene.bst_temp_material = bpy.data.materials[self.material_name]

        # Use template_ID to get the proper material selector (without a "new" button)
        layout.template_ID(context.scene, "bst_temp_material", text="Material")

    def execute(self, context):
        # Get the material from the temp property
        material = getattr(context.scene, 'bst_temp_material', None)

        if not material:
            self.report({'ERROR'}, "No material selected")
            return {'CANCELLED'}

        # Update our material_name property
        self.material_name = material.name

        # Check objects
        object_users = []
        for obj in bpy.data.objects:
            if obj.material_slots:
                for slot in obj.material_slots:
                    if slot.material == material:
                        object_users.append(obj.name)
                        break

        # Check node groups more thoroughly
        node_users = []
        for node_tree in bpy.data.node_groups:
            for node in node_tree.nodes:
                # Check material nodes
                if hasattr(node, 'material') and node.material == material:
                    node_users.append(f"{node_tree.name}.{node.name}")
                # Check material input sockets
                for input_socket in node.inputs:
                    if hasattr(input_socket, 'default_value') and hasattr(input_socket.default_value, 'name'):
                        if input_socket.default_value.name == material.name:
                            node_users.append(f"{node_tree.name}.{node.name}.{input_socket.name}")

        # Check material node trees
        material_node_users = []
        for mat in bpy.data.materials:
            if mat.node_tree:
                for node in mat.node_tree.nodes:
                    if hasattr(node, 'material') and node.material == material:
                        material_node_users.append(f"{mat.name}.{node.name}")

        # Show the summary dialog
        self.show_summary_dialog(context, material, object_users, node_users, material_node_users)
        return {'FINISHED'}

    def show_summary_dialog(self, context, material, object_users, node_users, material_node_users):
        """Show the material users summary in a popup dialog."""
        total_user_count = len(object_users) + len(node_users) + len(material_node_users)

        # Invoke the summary dialog with the collected data
        bpy.ops.bst.material_users_summary_dialog(
            'INVOKE_DEFAULT',
            material_name=material.name,
            users_count=material.users,
            fake_user=material.use_fake_user,
            object_users='|'.join(object_users),
            node_users='|'.join(node_users),
            material_node_users='|'.join(material_node_users),
            total_user_count=total_user_count)

    def invoke(self, context, event):
        return context.window_manager.invoke_props_dialog(self)
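Operator properties only hold simple types, which is why the user lists travel through `StringProperty` as a `'|'`-joined string and are split again in `draw()`. The round-trip can be sketched as:

```python
# Pack a list of names into a single string and recover it again,
# mirroring how the dialog properties above are filled and read.
def pack(names):
    return '|'.join(names)

def unpack(packed):
    return [n for n in packed.split('|') if n.strip()]
```

The `strip()` filter in `unpack` is what lets an empty property ("no users") split cleanly into an empty list.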
+253
@@ -0,0 +1,253 @@
import bpy
import bmesh
from mathutils import Color


def rgb_to_hex(r, g, b, a=1.0):
    """Convert RGBA values (0-1 range) to a hex color code."""
    # Convert to 0-255 range and format as hex
    r_int = int(round(r * 255))
    g_int = int(round(g * 255))
    b_int = int(round(b * 255))
    a_int = int(round(a * 255))

    # If alpha is full (255), use RGB format; otherwise use RGBA
    if a_int == 255:
        return f"#{r_int:02X}{g_int:02X}{b_int:02X}"
    else:
        return f"#{r_int:02X}{g_int:02X}{b_int:02X}{a_int:02X}"
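`rgb_to_hex` has no Blender dependencies, so the conversion can be exercised directly:

```python
# Same conversion as above: floats in 0-1 mapped to two hex digits per channel,
# with the alpha channel omitted when it is fully opaque.
def rgb_to_hex(r, g, b, a=1.0):
    r_int = int(round(r * 255))
    g_int = int(round(g * 255))
    b_int = int(round(b * 255))
    a_int = int(round(a * 255))
    if a_int == 255:
        return f"#{r_int:02X}{g_int:02X}{b_int:02X}"
    return f"#{r_int:02X}{g_int:02X}{b_int:02X}{a_int:02X}"
```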
def is_flat_color_image_efficient(image, max_pixels_to_check=10000):
    """
    Efficiently check if an image has all pixels of the same color.

    Args:
        image: The image to check
        max_pixels_to_check: Maximum number of pixels to check (for performance)

    Returns:
        tuple: (is_flat, color) where is_flat is bool and color is an RGBA tuple
    """
    if not image or not image.pixels:
        print("  DEBUG: No image or no pixels")
        return False, None

    # Get the pixel data
    pixels = image.pixels[:]

    if len(pixels) == 0:
        print("  DEBUG: Empty pixel array")
        return False, None

    # Images in Blender are typically RGBA, so 4 values per pixel
    channels = image.channels
    if channels not in (3, 4):  # RGB or RGBA
        print(f"  DEBUG: Unsupported channels: {channels}")
        return False, None

    # Get the first pixel color as the reference
    first_pixel = pixels[:channels]
    print(f"  DEBUG: Reference color: {first_pixel}")

    # Calculate the total pixel count
    total_pixels = len(pixels) // channels
    print(f"  DEBUG: Total pixels: {total_pixels}")

    # Determine how many pixels to check
    pixels_to_check = min(total_pixels, max_pixels_to_check)

    # For small images, check every pixel
    if total_pixels <= max_pixels_to_check:
        step = 1
        print(f"  DEBUG: Checking all {total_pixels} pixels")
    else:
        # For large images, sample evenly across the image
        step = total_pixels // pixels_to_check
        print(f"  DEBUG: Sampling {pixels_to_check} pixels with step {step}")

    # Check pixels
    checked_count = 0
    for i in range(0, total_pixels, step):
        pixel_start = i * channels
        current_pixel = pixels[pixel_start:pixel_start + channels]
        checked_count += 1

        # Compare with the reference pixel (exact match)
        for j in range(channels):
            if current_pixel[j] != first_pixel[j]:
                print(f"  DEBUG: Pixel {i} differs at channel {j}: {current_pixel[j]} vs {first_pixel[j]}")
                print(f"  DEBUG: Checked {checked_count} pixels before finding difference")
                return False, None

    print(f"  DEBUG: All {checked_count} checked pixels are identical")

    # If we get here, all checked pixels are the same color
    if channels == 3:
        return True, (first_pixel[0], first_pixel[1], first_pixel[2], 1.0)
    else:
        return True, tuple(first_pixel)
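The core of the check operates on a flat float list, the same layout `image.pixels` exposes, so it can be sketched without `bpy`:

```python
# Standalone sketch of the flat-color check on a raw RGBA float list.
def is_flat(pixels, channels=4, max_checks=10000):
    total = len(pixels) // channels
    if total == 0:
        return False, None
    first = pixels[:channels]
    # Check everything for small images; sample evenly for large ones.
    step = 1 if total <= max_checks else total // max_checks
    for i in range(0, total, step):
        if pixels[i * channels:(i + 1) * channels] != first:
            return False, None
    return True, tuple(first)
```

Note the trade-off the sampling makes: for images above `max_checks` pixels, a stray non-matching pixel can fall between sampled positions, so the result is probabilistic for very large images.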


def is_flat_color_image(image):
    """Check if an image has all pixels of the same color."""
    # Use the efficient version by default
    return is_flat_color_image_efficient(image, max_pixels_to_check=10000)


def safe_rename_image(image, new_name):
    """Safely rename an image datablock, falling back through context overrides."""
    try:
        # Method 1: Try direct assignment first (works in most contexts)
        image.name = new_name
        return True
    except Exception:
        pass

    try:
        # Method 2: Use a context override with an Outliner area
        for area in bpy.context.screen.areas:
            if area.type == 'OUTLINER':
                with bpy.context.temp_override(area=area):
                    image.name = new_name
                return True
    except Exception:
        pass

    try:
        # Method 3: Use a copied context override with the image set as edit_image
        override_context = bpy.context.copy()
        override_context['edit_image'] = image
        with bpy.context.temp_override(**override_context):
            image.name = new_name
        return True
    except Exception:
        pass

    try:
        # Method 4: Force an update cycle around the assignment
        bpy.context.view_layer.update()
        image.name = new_name
        bpy.context.view_layer.update()
        return True
    except Exception:
        return False


def rename_flat_color_textures():
    """Main function to find and rename flat color textures."""
    renamed_count = 0
    failed_count = 0
    processed_count = 0

    print("Scanning for flat color textures...")

    # Store rename operations to perform them in batch
    rename_operations = []

    for image in bpy.data.images:
        processed_count += 1

        # Skip if the image has no pixel data
        if not hasattr(image, 'pixels') or len(image.pixels) == 0:
            print(f"Skipping '{image.name}': No pixel data available")
            continue

        # Check if the image is a flat color
        is_flat, color = is_flat_color_image(image)

        if is_flat and color:
            # Convert the color to hex
            hex_color = rgb_to_hex(*color)

            # Store the original name for logging
            original_name = image.name

            # Skip names that already look like hex colors (to avoid renaming again)
            if not original_name.startswith('#'):
                rename_operations.append((image, original_name, hex_color, color))
            else:
                print(f"Skipping '{original_name}': Already appears to be hex-named")
        else:
            print(f"'{image.name}': Not a flat color texture")

    # Perform the rename operations
    print(f"\nPerforming {len(rename_operations)} rename operation(s)...")

    for image, original_name, hex_color, color in rename_operations:
        success = safe_rename_image(image, hex_color)
        if success:
            print(f"Renamed '{original_name}' to '{hex_color}' (Color: RGBA{color})")
            renamed_count += 1
        else:
            print(f"Failed to rename '{original_name}' to '{hex_color}' - Context restriction")
            failed_count += 1

    print("\nSummary:")
    print(f"Processed: {processed_count} images")
    print(f"Successfully renamed: {renamed_count} flat color textures")
    if failed_count > 0:
        print(f"Failed to rename: {failed_count} textures (try running from the Python Console instead)")

    return renamed_count


def reload_image_pixels():
    """Reload pixel data for all images (useful if images aren't loaded)."""
    print("Reloading pixel data for all images...")

    for image in bpy.data.images:
        if image.source == 'FILE' and image.filepath:
            try:
                image.reload()
                print(f"Reloaded: {image.name}")
            except Exception:
                print(f"Failed to reload: {image.name}")


# Alternative function for running in restricted contexts
def print_rename_suggestions():
    """Print suggested renames without actually renaming (for restricted contexts)."""
    suggestions = []

    print("Scanning for flat color textures (suggestion mode)...")

    for image in bpy.data.images:
        if not hasattr(image, 'pixels') or len(image.pixels) == 0:
            continue

        is_flat, color = is_flat_color_image(image)

        if is_flat and color and not image.name.startswith('#'):
            hex_color = rgb_to_hex(*color)
            suggestions.append((image.name, hex_color, color))

    if suggestions:
        print(f"\nFound {len(suggestions)} flat color texture(s) that could be renamed:")
        print("-" * 60)
        for original_name, hex_color, color in suggestions:
            print(f"'{original_name}' -> '{hex_color}' (RGBA{color})")

        print("\nTo actually rename them, run this script from:")
        print("1. Blender's Python Console, or")
        print("2. Command line with: blender file.blend --python script.py")
    else:
        print("\nNo flat color textures found that need renaming.")


# Main execution
if __name__ == "__main__":
    print("=" * 50)
    print("Flat Color Texture Renamer")
    print("=" * 50)

    # Optional: Reload images to ensure pixel data is available.
    # Uncomment the line below to force reload all images.
    # reload_image_pixels()

    # Try to run the renaming process
    try:
        renamed_count = rename_flat_color_textures()

        if renamed_count > 0:
            print(f"\nSuccessfully renamed {renamed_count} flat color texture(s)!")
        else:
            print("\nNo flat color textures found to rename.")
    except Exception:
        print("\nContext restriction detected. Running in suggestion mode...")
        print_rename_suggestions()

    print("Script completed.")
+690
@@ -0,0 +1,690 @@
import bpy


def safe_wgt_removal():
    """Safely remove only WGT widget objects that are clearly ghosts"""

    print("=" * 80)
    print("CONSERVATIVE WGT GHOST REMOVAL")
    print("=" * 80)

    # Find all WGT objects
    wgt_objects = []
    for obj in bpy.data.objects:
        if obj.name.startswith('WGT-'):
            wgt_objects.append(obj)

    print(f"Found {len(wgt_objects)} WGT objects")

    # Check which ones are actually used as custom bone shapes.
    # Custom shapes live on pose bones, so walk the pose of every armature object.
    used_wgts = set()
    for armature_obj in bpy.data.objects:
        if armature_obj.type == 'ARMATURE' and armature_obj.pose:
            for pose_bone in armature_obj.pose.bones:
                if pose_bone.custom_shape:
                    used_wgts.add(pose_bone.custom_shape.name)

    print(f"Found {len(used_wgts)} WGT objects actually used by armatures")

    # Remove unused WGT objects
    removed_wgts = 0
    for obj in wgt_objects:
        if obj.name not in used_wgts:
            try:
                # Skip linked objects (they're legitimate library content)
                if obj.library is not None:
                    print(f"  Skipping linked WGT: {obj.name} (from {obj.library.name})")
                    continue

                # Check if it's in the WGTS collection (typical ghost pattern)
                in_wgts_collection = False
                for collection in bpy.data.collections:
                    if 'WGTS' in collection.name and obj in collection.objects.values():
                        in_wgts_collection = True
                        break

                if in_wgts_collection:
                    print(f"  Removing unused WGT: {obj.name}")
                    bpy.data.objects.remove(obj, do_unlink=True)
                    removed_wgts += 1
            except Exception as e:
                print(f"  Failed to remove {obj.name}: {e}")

    print(f"Removed {removed_wgts} unused WGT objects")
    return removed_wgts
def is_collection_in_scene_hierarchy(collection, scene_collection):
    """Recursively check if a collection exists anywhere in the scene collection hierarchy."""
    if collection == scene_collection:
        return True

    for child_collection in scene_collection.children:
        if child_collection == collection:
            return True
        if is_collection_in_scene_hierarchy(collection, child_collection):
            return True

    return False
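The recursive containment walk only relies on each collection exposing `.children`, so it can run outside Blender with a tiny stand-in class (names illustrative):

```python
# Minimal stand-in for a bpy collection: just a name and a children list.
class Coll:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def in_hierarchy(collection, root):
    if collection is root:
        return True
    return any(in_hierarchy(collection, child) for child in root.children)

widgets = Coll("WGTS_rig")
scene_root = Coll("Scene Collection", [Coll("Props", [widgets])])
orphan = Coll("Orphan")
```

`widgets` is found two levels down; `orphan` is not reachable from `scene_root`, which is exactly the case the cleanup below treats as removable.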
|
||||
def clean_empty_collections():
    """Remove empty collections that are not linked to scenes"""

    print("\n" + "="*80)
    print("CLEANING EMPTY COLLECTIONS")
    print("="*80)

    removed_collections = 0
    collections_to_remove = []

    for collection in bpy.data.collections:
        # Check if collection is empty
        if len(collection.objects) == 0 and len(collection.children) == 0:
            # Skip linked collections (they're legitimate library content)
            if collection.library is not None:
                print(f"  Skipping linked empty collection: {collection.name}")
                continue

            # Check if it's anywhere in any scene's collection hierarchy
            linked_to_scene = False
            for scene in bpy.data.scenes:
                if is_collection_in_scene_hierarchy(collection, scene.collection):
                    linked_to_scene = True
                    print(f"  Preserving empty collection: {collection.name} (in scene '{scene.name}')")
                    break

            if not linked_to_scene:
                collections_to_remove.append(collection)

    for collection in collections_to_remove:
        try:
            print(f"  Removing empty collection: {collection.name}")
            bpy.data.collections.remove(collection)
            removed_collections += 1
        except Exception as e:
            print(f"  Failed to remove collection {collection.name}: {e}")

    print(f"Removed {removed_collections} empty collections")
    return removed_collections

def is_object_used_by_scene_instance_collections(obj):
    """Check if object is in a collection that's being instanced by objects in scenes"""

    # Find all collections that contain this object
    obj_collections = []
    for collection in bpy.data.collections:
        if obj in collection.objects.values():
            obj_collections.append(collection)

    if not obj_collections:
        return False

    # Check if any of these collections are being instanced by objects in scenes
    for collection in obj_collections:
        # Find objects that instance this collection
        for other_obj in bpy.data.objects:
            if (other_obj.instance_type == 'COLLECTION' and
                    other_obj.instance_collection == collection):

                # Check if the instancing object is in any scene
                for scene in bpy.data.scenes:
                    if other_obj in scene.objects.values():
                        return True

    return False

def is_object_legitimate_outside_scene(obj):
    """Check if an object has legitimate reasons to exist outside scenes"""

    # WGT objects (rig widgets) are legitimate outside scenes
    if obj.name.startswith('WGT-'):
        return True

    # Collection instance objects (linked collection references) are legitimate
    if obj.instance_type == 'COLLECTION' and obj.instance_collection is not None:
        return True

    # Objects that are being used by instance collections in scenes are legitimate
    if is_object_used_by_scene_instance_collections(obj):
        return True

    # Objects used as curve modifiers, constraint targets, etc.
    # Check if object is used by modifiers on other objects that are in scenes
    for other_obj in bpy.data.objects:
        # Check if the other object is in any scene
        in_scene = False
        for scene in bpy.data.scenes:
            if other_obj in scene.objects.values():
                in_scene = True
                break

        if in_scene:
            for modifier in other_obj.modifiers:
                if hasattr(modifier, 'object') and modifier.object == obj:
                    return True
                if hasattr(modifier, 'target') and modifier.target == obj:
                    return True

    # Check if object is used by constraints on other objects that are in scenes
    for other_obj in bpy.data.objects:
        in_scene = False
        for scene in bpy.data.scenes:
            if other_obj in scene.objects.values():
                in_scene = True
                break

        if in_scene:
            for constraint in other_obj.constraints:
                if hasattr(constraint, 'target') and constraint.target == obj:
                    return True
                if hasattr(constraint, 'subtarget') and constraint.subtarget == obj.name:
                    return True

    # Check if object is used in particle systems on objects that are in scenes
    for other_obj in bpy.data.objects:
        in_scene = False
        for scene in bpy.data.scenes:
            if other_obj in scene.objects.values():
                in_scene = True
                break

        if in_scene:
            for modifier in other_obj.modifiers:
                if modifier.type == 'PARTICLE_SYSTEM':
                    settings = modifier.particle_system.settings
                    if hasattr(settings, 'object') and settings.object == obj:
                        return True
                    if hasattr(settings, 'instance_object') and settings.instance_object == obj:
                        return True

    return False

def clean_object_ghosts(delete_low_priority=False):
    """Remove objects that are not in any scene and have no legitimate purpose (potential ghosts)"""

    print("\n" + "="*80)
    print("OBJECT GHOST CLEANUP")
    print("="*80)

    # Get all objects, excluding cameras and lights by default (they're often not in scenes for good reasons)
    candidate_objects = [obj for obj in bpy.data.objects if obj.type not in ['CAMERA', 'LIGHT']]

    if not candidate_objects:
        print("No candidate objects found")
        return 0

    print(f"Found {len(candidate_objects)} candidate objects")

    removed_objects = 0
    ghosts_to_remove = []

    for obj in candidate_objects:
        # Skip linked objects (they're legitimate library content)
        if obj.library is not None:
            continue

        # Check which scenes contain it
        in_scenes = []
        for scene in bpy.data.scenes:
            if obj in scene.objects.values():
                in_scenes.append(scene.name)

        # If not in any scene, check if it has legitimate reasons to exist
        if len(in_scenes) == 0:
            if is_object_legitimate_outside_scene(obj):
                print(f"  Preserving object: {obj.name} (legitimate use outside scene)")
                continue

            # If not legitimate, it's a ghost - but be conservative with low user count objects
            should_remove = False
            removal_reason = ""

            if obj.users >= 2:
                # Higher user count ghosts are safe to remove
                should_remove = True
                removal_reason = "ghost (users >= 2, no legitimate use found)"
            elif delete_low_priority:
                # Low user count ghosts are removed only if the user enables the option
                should_remove = True
                removal_reason = "low priority ghost (users < 2, no legitimate use found)"
            else:
                print(f"  Skipping low priority object: {obj.name} (users < 2, enable 'Delete Low Priority' to remove)")

            if should_remove:
                ghosts_to_remove.append(obj)
                print(f"  Marking ghost for removal: {obj.name} (type: {obj.type}) - {removal_reason}")

    # Remove the ghost objects
    for obj in ghosts_to_remove:
        try:
            print(f"  Removing object ghost: {obj.name}")
            bpy.data.objects.remove(obj, do_unlink=True)
            removed_objects += 1
        except Exception as e:
            print(f"  Failed to remove object {obj.name}: {e}")

    print(f"Removed {removed_objects} ghost objects")
    return removed_objects

def manual_object_analysis():
    """Manual analysis of objects - show info but don't auto-remove"""

    print("\n" + "="*80)
    print("OBJECT GHOST ANALYSIS (MANUAL REVIEW)")
    print("="*80)

    # Get all objects, excluding cameras and lights (they're often legitimately not in scenes)
    candidate_objects = [obj for obj in bpy.data.objects if obj.type not in ['CAMERA', 'LIGHT']]

    # Filter to only objects not in scenes for analysis
    objects_not_in_scenes = []
    for obj in candidate_objects:
        # Skip linked objects for analysis
        if obj.library is not None:
            continue

        # Check which scenes contain it
        in_scenes = []
        for scene in bpy.data.scenes:
            if obj in scene.objects.values():
                in_scenes.append(scene.name)

        if len(in_scenes) == 0:
            objects_not_in_scenes.append(obj)

    if not objects_not_in_scenes:
        print("No local objects found outside scenes")
        return

    print(f"Found {len(objects_not_in_scenes)} local objects not in any scene:")

    for obj in objects_not_in_scenes:
        print(f"\n  Object: {obj.name} (type: {obj.type})")
        print(f"    Users: {obj.users}")
        print(f"    Parent: {obj.parent.name if obj.parent else 'None'}")

        # Check collections
        in_collections = []
        for collection in bpy.data.collections:
            if obj in collection.objects.values():
                in_collections.append(collection.name)
        print(f"    In collections: {in_collections}")

        # Show recommendation (every object falls into exactly one of these buckets)
        if is_object_legitimate_outside_scene(obj):
            print("    -> LEGITIMATE: Has valid use outside scenes")
        elif obj.users >= 2:
            print("    -> GHOST: No legitimate use found, users >= 2 (will be removed)")
        else:
            print("    -> LOW PRIORITY: No legitimate use found, users < 2 (needs option enabled)")

def main(delete_low_priority=False):
    """Main conservative cleanup function"""

    print("CONSERVATIVE GHOST DATA CLEANUP")
    print("="*80)
    print("This script removes:")
    print("1. Unused local WGT widget objects")
    print("2. Empty unlinked collections")
    print("3. Objects not in any scene with no legitimate use")
    if delete_low_priority:
        print("   - Including low priority ghosts (no legitimate use, users < 2)")
    else:
        print("   - Excluding low priority ghosts (no legitimate use, users < 2)")
    print("="*80)

    initial_objects = len(bpy.data.objects)
    initial_collections = len(bpy.data.collections)

    # Safe operations only
    wgts_removed = safe_wgt_removal()
    collections_removed = clean_empty_collections()
    object_ghosts_removed = clean_object_ghosts(delete_low_priority)

    # Show remaining object analysis
    manual_object_analysis()

    # Final purge
    print("\n" + "="*80)
    print("FINAL SAFE PURGE")
    print("="*80)

    try:
        bpy.ops.outliner.orphans_purge(do_local_ids=True, do_linked_ids=True, do_recursive=True)
        print("Safe purge completed")
    except Exception as e:
        print(f"Purge had issues: {e}")

    final_objects = len(bpy.data.objects)
    final_collections = len(bpy.data.collections)

    print("\n" + "="*80)
    print("CONSERVATIVE CLEANUP SUMMARY")
    print("="*80)
    print(f"Objects: {initial_objects} -> {final_objects} (removed {initial_objects - final_objects})")
    print(f"Collections: {initial_collections} -> {final_collections} (removed {collections_removed})")
    print(f"WGT objects removed: {wgts_removed}")
    print(f"Object ghosts removed: {object_ghosts_removed}")
    print("="*80)

class GhostBuster(bpy.types.Operator):
    """Conservative cleanup of ghost data (unused WGT objects, empty collections)"""
    bl_idname = "bst.ghost_buster"
    bl_label = "Ghost Buster"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        try:
            # Get the delete low priority setting from scene properties
            delete_low_priority = getattr(context.scene, "ghost_buster_delete_low_priority", False)

            # Call the main ghost buster function
            main(delete_low_priority)
            self.report({'INFO'}, "Ghost data cleanup completed")
            return {'FINISHED'}
        except Exception as e:
            self.report({'ERROR'}, f"Ghost buster failed: {str(e)}")
            return {'CANCELLED'}

class GhostDetector(bpy.types.Operator):
    """Detect and analyze ghost data without removing it"""
    bl_idname = "bst.ghost_detector"
    bl_label = "Ghost Detector"
    bl_options = {'REGISTER', 'INTERNAL'}

    # Properties to store analysis data
    total_wgt_objects: bpy.props.IntProperty(default=0)
    unused_wgt_objects: bpy.props.IntProperty(default=0)
    used_wgt_objects: bpy.props.IntProperty(default=0)
    empty_collections: bpy.props.IntProperty(default=0)
    ghost_objects: bpy.props.IntProperty(default=0)
    ghost_potential: bpy.props.IntProperty(default=0)
    ghost_legitimate: bpy.props.IntProperty(default=0)
    ghost_low_priority: bpy.props.IntProperty(default=0)
    wgt_details: bpy.props.StringProperty(default="")
    collection_details: bpy.props.StringProperty(default="")
    ghost_details: bpy.props.StringProperty(default="")

    def analyze_ghost_data(self):
        """Analyze ghost data similar to ghost_buster functions"""

        # Analyze WGT objects
        wgt_objects = [obj for obj in bpy.data.objects if obj.name.startswith('WGT-')]
        self.total_wgt_objects = len(wgt_objects)

        # Check which WGT objects are used as bone custom shapes.
        # Custom shapes live on pose bones (of armature objects), not on
        # armature data bones, so iterate armature objects here.
        used_wgts = set()
        for arm_obj in bpy.data.objects:
            if arm_obj.type == 'ARMATURE':
                for pbone in arm_obj.pose.bones:
                    if pbone.custom_shape:
                        used_wgts.add(pbone.custom_shape.name)

        self.used_wgt_objects = len(used_wgts)

        # Count unused WGT objects
        unused_wgts = []
        wgt_details_list = []
        for obj in wgt_objects:
            if obj.name not in used_wgts:
                # Skip linked objects (they're legitimate library content)
                if obj.library is not None:
                    continue

                # Check if it's in the WGTS collection (typical ghost pattern)
                in_wgts_collection = False
                for collection in bpy.data.collections:
                    if 'WGTS' in collection.name and obj in collection.objects.values():
                        in_wgts_collection = True
                        break

                if in_wgts_collection:
                    unused_wgts.append(obj)
                    wgt_details_list.append(f"• {obj.name} (in WGTS collection)")

        self.unused_wgt_objects = len(unused_wgts)
        self.wgt_details = "\n".join(wgt_details_list[:10])  # Limit to first 10
        if len(unused_wgts) > 10:
            self.wgt_details += f"\n... and {len(unused_wgts) - 10} more"

        # Analyze empty collections
        empty_collections = []
        collection_details_list = []
        for collection in bpy.data.collections:
            if len(collection.objects) == 0 and len(collection.children) == 0:
                # Skip linked collections (they're legitimate library content)
                if collection.library is not None:
                    continue

                # Check if it's anywhere in any scene's collection hierarchy
                linked_to_scene = False
                for scene in bpy.data.scenes:
                    if is_collection_in_scene_hierarchy(collection, scene.collection):
                        linked_to_scene = True
                        break

                if not linked_to_scene:
                    empty_collections.append(collection)
                    collection_details_list.append(f"• {collection.name}")

        self.empty_collections = len(empty_collections)
        self.collection_details = "\n".join(collection_details_list[:10])  # Limit to first 10
        if len(empty_collections) > 10:
            self.collection_details += f"\n... and {len(empty_collections) - 10} more"

        # Analyze ghost objects (objects not in scenes)
        candidate_objects = [obj for obj in bpy.data.objects if obj.type not in ['CAMERA', 'LIGHT']]

        potential_ghosts = 0
        legitimate = 0
        low_priority = 0
        ghost_details_list = []

        for obj in candidate_objects:
            # Skip linked objects (they're legitimate library content)
            if obj.library is not None:
                continue

            # Check which scenes contain it
            in_scenes = [scene.name for scene in bpy.data.scenes if obj in scene.objects.values()]

            # Only analyze objects not in scenes
            if len(in_scenes) == 0:
                # Classify object (every object falls into exactly one bucket)
                if is_object_legitimate_outside_scene(obj):
                    legitimate += 1
                    status = "LEGITIMATE (has valid use outside scenes)"
                elif obj.users >= 2:
                    potential_ghosts += 1
                    status = "GHOST (no legitimate use found, users >= 2)"
                else:
                    low_priority += 1
                    status = "LOW PRIORITY (no legitimate use found, users < 2)"

                ghost_details_list.append(f"• {obj.name} ({obj.type}): {status}")

        # The three buckets together cover every local object not in a scene
        self.ghost_objects = legitimate + potential_ghosts + low_priority
        self.ghost_potential = potential_ghosts
        self.ghost_legitimate = legitimate
        self.ghost_low_priority = low_priority
        self.ghost_details = "\n".join(ghost_details_list[:10])  # Limit to first 10
        if len(ghost_details_list) > 10:
            self.ghost_details += f"\n... and {len(ghost_details_list) - 10} more"

    def draw(self, context):
        layout = self.layout

        # Title
        layout.label(text="Ghost Data Analysis", icon='GHOST_ENABLED')
        layout.separator()

        # WGT Objects section
        box = layout.box()
        box.label(text="WGT Widget Objects", icon='ARMATURE_DATA')
        col = box.column(align=True)
        col.label(text=f"Total WGT objects: {self.total_wgt_objects}")
        col.label(text=f"Used by armatures: {self.used_wgt_objects}", icon='CHECKMARK')
        if self.unused_wgt_objects > 0:
            col.label(text=f"Unused (potential ghosts): {self.unused_wgt_objects}", icon='ERROR')
            if self.wgt_details:
                box.separator()
                details_col = box.column(align=True)
                for line in self.wgt_details.split('\n'):
                    if line.strip():
                        details_col.label(text=line)
        else:
            col.label(text="No unused WGT objects found", icon='CHECKMARK')

        # Empty Collections section
        box = layout.box()
        box.label(text="Empty Collections", icon='OUTLINER_COLLECTION')
        col = box.column(align=True)
        if self.empty_collections > 0:
            col.label(text=f"Empty unlinked collections: {self.empty_collections}", icon='ERROR')
            if self.collection_details:
                box.separator()
                details_col = box.column(align=True)
                for line in self.collection_details.split('\n'):
                    if line.strip():
                        details_col.label(text=line)
        else:
            col.label(text="No empty unlinked collections found", icon='CHECKMARK')

        # Ghost Objects section
        box = layout.box()
        box.label(text="Ghost Objects Analysis", icon='OBJECT_DATA')
        col = box.column(align=True)
        col.label(text=f"Objects not in scenes: {self.ghost_objects}")
        if self.ghost_objects > 0:
            if self.ghost_potential > 0:
                col.label(text=f"Ghosts (users >= 2): {self.ghost_potential}", icon='ERROR')
            if self.ghost_legitimate > 0:
                col.label(text=f"Legitimate objects: {self.ghost_legitimate}", icon='CHECKMARK')
            if self.ghost_low_priority > 0:
                col.label(text=f"Low priority (users < 2): {self.ghost_low_priority}", icon='QUESTION')

            if self.ghost_details:
                box.separator()
                details_col = box.column(align=True)
                for line in self.ghost_details.split('\n'):
                    if line.strip():
                        details_col.label(text=line)
        else:
            col.label(text="No ghost objects found", icon='CHECKMARK')

        # Summary
        layout.separator()
        summary_box = layout.box()
        summary_box.label(text="Summary", icon='INFO')
        total_issues = self.unused_wgt_objects + self.empty_collections + self.ghost_potential
        if total_issues > 0:
            summary_box.label(text=f"Found {total_issues} ghost data issues that will be removed", icon='ERROR')
            if self.ghost_low_priority > 0:
                summary_box.label(text=f"+ {self.ghost_low_priority} low priority issues (optional)", icon='QUESTION')
            summary_box.label(text="Use Ghost Buster to clean up safely")
        else:
            summary_box.label(text="No ghost data issues detected!", icon='CHECKMARK')
            if self.ghost_low_priority > 0:
                summary_box.label(text=f"({self.ghost_low_priority} low priority issues available)", icon='INFO')

    def execute(self, context):
        return {'FINISHED'}

    def invoke(self, context, event):
        # Analyze the ghost data before showing the dialog
        self.analyze_ghost_data()
        return context.window_manager.invoke_popup(self, width=500)

class ResyncEnforce(bpy.types.Operator):
    """Resync Enforce: Fix broken library override hierarchies by rebuilding from linked references"""
    bl_idname = "bst.resync_enforce"
    bl_label = "Resync Enforce"
    bl_options = {'REGISTER', 'UNDO'}

    @classmethod
    def poll(cls, context):
        # Only available if there are selected objects
        return bool(context.selected_objects)

    def execute(self, context):
        # Get selected objects
        selected_objects = context.selected_objects.copy()

        if not selected_objects:
            self.report({'WARNING'}, "No objects selected for resync enforce")
            return {'CANCELLED'}

        # Collect library override objects
        override_objects = [obj for obj in selected_objects if obj.override_library]

        if not override_objects:
            self.report({'WARNING'}, "No library override objects found in selection")
            return {'CANCELLED'}

        try:
            # Store the current selection
            original_selection = set(context.selected_objects)

            # Select only the override objects
            bpy.ops.object.select_all(action='DESELECT')
            for obj in override_objects:
                obj.select_set(True)

            # Call Blender's resync enforce operation
            result = bpy.ops.object.library_override_operation(
                'INVOKE_DEFAULT',
                type='OVERRIDE_LIBRARY_RESYNC_HIERARCHY_ENFORCE',
                selection_set='SELECTED'
            )

            if result == {'FINISHED'}:
                self.report({'INFO'}, f"Resync enforce completed on {len(override_objects)} override objects")
                return_code = {'FINISHED'}
            else:
                self.report({'WARNING'}, "Resync enforce operation was cancelled or failed")
                return_code = {'CANCELLED'}

            # Restore original selection
            bpy.ops.object.select_all(action='DESELECT')
            for obj in original_selection:
                if obj.name in bpy.data.objects:  # Check if object still exists
                    obj.select_set(True)

            return return_code

        except Exception as e:
            self.report({'ERROR'}, f"Resync enforce failed: {str(e)}")
            return {'CANCELLED'}

# Note: main() is called by the operator, not automatically

# List of classes to register
classes = (
    GhostBuster,
    GhostDetector,
    ResyncEnforce,
)

def register():
    for cls in classes:
        bpy.utils.register_class(cls)

def unregister():
    for cls in reversed(classes):
        try:
            bpy.utils.unregister_class(cls)
        except RuntimeError:
            pass
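The removal policy used by `clean_object_ghosts` above (legitimate objects are never removed, high-user-count ghosts always are, and `users < 2` ghosts only when the option is on) can be sketched in isolation; `classify_ghost` is an illustrative helper, not a function from the add-on:

```python
def classify_ghost(users, legitimate, delete_low_priority=False):
    """Return (bucket, removed) for an out-of-scene object,
    mirroring the conservative removal decision above."""
    if legitimate:
        return ("LEGITIMATE", False)
    if users >= 2:
        return ("GHOST", True)
    # users < 2: kept unless 'Delete Low Priority' is enabled
    return ("LOW PRIORITY", delete_low_priority)

print(classify_ghost(3, False))        # ('GHOST', True)
print(classify_ghost(1, False))        # ('LOW PRIORITY', False)
print(classify_ghost(1, False, True))  # ('LOW PRIORITY', True)
print(classify_ghost(5, True))         # ('LEGITIMATE', False)
```

The legitimacy check short-circuits first, so a widget or modifier target is preserved regardless of its user count.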
@@ -0,0 +1,63 @@
import bpy

class RemoveCustomSplitNormals(bpy.types.Operator):
    """Remove custom split normals and apply smooth shading to all accessible mesh objects"""
    bl_idname = "bst.remove_custom_split_normals"
    bl_label = "Remove Custom Split Normals"
    bl_options = {'REGISTER', 'UNDO'}

    only_selected: bpy.props.BoolProperty(
        name="Only Selected Objects",
        description="Apply only to selected objects",
        default=True
    )

    def execute(self, context):
        # Store the current context
        original_active = context.active_object
        original_selected = context.selected_objects.copy()

        # Get object names that are in the current view layer
        view_layer_object_names = set(context.view_layer.objects.keys())

        # Choose objects based on the property
        if self.only_selected:
            objects = [obj for obj in context.selected_objects if obj.type == 'MESH' and obj.name in view_layer_object_names]
        else:
            objects = [obj for obj in bpy.data.objects if obj.type == 'MESH' and obj.name in view_layer_object_names]

        processed_count = 0
        for obj in objects:
            mesh = obj.data
            if mesh.has_custom_normals:
                # Select and make active
                obj.select_set(True)
                context.view_layer.objects.active = obj
                bpy.ops.object.mode_set(mode='EDIT')
                bpy.ops.mesh.customdata_custom_splitnormals_clear()
                bpy.ops.object.mode_set(mode='OBJECT')
                bpy.ops.object.shade_smooth()
                obj.select_set(False)
                processed_count += 1
                self.report({'INFO'}, f"Removed custom split normals and applied smooth shading to: {obj.name}")

        # Restore original selection and active object
        context.view_layer.objects.active = original_active
        for obj in original_selected:
            if obj.name in view_layer_object_names:
                obj.select_set(True)

        self.report({'INFO'}, f"Done: custom split normals removed and smooth shading applied to {'selected' if self.only_selected else 'all'} mesh objects. ({processed_count} processed)")
        return {'FINISHED'}

# Registration (the class is named RemoveCustomSplitNormals, not MESH_OT_RemoveCustomSplitNormals)
def register():
    bpy.utils.register_class(RemoveCustomSplitNormals)

def unregister():
    bpy.utils.unregister_class(RemoveCustomSplitNormals)

# Only run if this script is run directly
if __name__ == "__main__":
    register()
@@ -0,0 +1,57 @@
import bpy

class RemoveUnusedMaterialSlots(bpy.types.Operator):
    """Remove unused material slots from all mesh objects"""
    bl_idname = "bst.remove_unused_material_slots"
    bl_label = "Remove Unused Material Slots"
    bl_description = "Remove unused material slots from all mesh objects in the scene"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        processed_objects = 0

        # Store original active object and selection
        original_active = context.view_layer.objects.active
        original_selection = list(context.selected_objects)

        try:
            # Remove unused material slots from all local mesh objects
            for obj in bpy.data.objects:
                if obj.type == 'MESH' and obj.material_slots and obj.library is None:
                    # Temporarily ensure object is in view layer by linking to master collection
                    was_linked = False
                    if obj.name not in context.view_layer.objects:
                        context.scene.collection.objects.link(obj)
                        was_linked = True

                    # Store original selection state
                    original_obj_selection = obj.select_get()

                    # Select the object and make it active
                    obj.select_set(True)
                    context.view_layer.objects.active = obj

                    # Remove unused material slots
                    bpy.ops.object.material_slot_remove_unused()
                    processed_objects += 1

                    # Restore original selection state
                    obj.select_set(original_obj_selection)

                    # Unlink if we linked it
                    if was_linked:
                        context.scene.collection.objects.unlink(obj)

        finally:
            # Restore original active object and selection
            context.view_layer.objects.active = original_active
            # Clear all selections first
            for obj in context.selected_objects:
                obj.select_set(False)
            # Restore original selection
            for obj in original_selection:
                if obj.name in context.view_layer.objects:
                    obj.select_set(True)

        self.report({'INFO'}, f"Removed unused material slots from {processed_objects} mesh objects")
        return {'FINISHED'}
@@ -0,0 +1,100 @@
import bpy

def find_node_distance_to_basecolor(node, visited=None):
    """Find the shortest path distance (in links) from a node to any
    Principled BSDF Base Color input, or None if no path exists."""
    if visited is None:
        visited = set()

    if node in visited:
        return None

    visited.add(node)

    # Follow this node's outgoing links
    min_distance = None
    for output in node.outputs:
        for link in output.links:
            to_node = link.to_node
            # A link that plugs directly into a Principled Base Color
            # socket is the target: distance 0 from here. Checking the
            # destination socket (not just the node type) avoids counting
            # links into other BSDF inputs such as Roughness.
            if to_node.type == 'BSDF_PRINCIPLED' and link.to_socket.name == 'Base Color':
                return 0
            # Otherwise recurse through the connected node
            distance = find_node_distance_to_basecolor(to_node, visited.copy())
            if distance is not None and (min_distance is None or distance + 1 < min_distance):
                min_distance = distance + 1

    return min_distance

def find_connected_basecolor_texture(node_tree):
    """Find any image texture directly connected to a Base Color input"""
    for node in node_tree.nodes:
        if node.type == 'BSDF_PRINCIPLED':
            base_color_input = node.inputs.get('Base Color')
            if base_color_input and base_color_input.links:
                # Get the node connected to Base Color
                connected_node = base_color_input.links[0].from_node
                # If it's an image texture, return it
                if connected_node.type == 'TEX_IMAGE' and connected_node.image:
                    return connected_node
    return None

def select_diffuse_nodes():
    # Get all materials in the blend file
    materials = bpy.data.materials

    # Counter for found nodes
    found_nodes = 0

    # Keywords to look for in image names (case insensitive)
    keywords = ['diffuse', 'basecolor', 'base_color', 'albedo', 'color']

    # Iterate through all materials
    for material in materials:
        # Skip materials without node trees
        if not material.use_nodes:
            continue

        node_tree = material.node_tree

        # First, try to find any image texture connected to Base Color
        base_color_texture = find_connected_basecolor_texture(node_tree)
        if base_color_texture:
            node_tree.nodes.active = base_color_texture
            base_color_texture.select = True
            found_nodes += 1
            print(f"Selected Base Color connected texture '{base_color_texture.image.name}' in material: {material.name}")
            continue

        # If no direct connection found, fall back to name-based search
        matching_nodes = []
        for node in node_tree.nodes:
            if node.type == 'TEX_IMAGE' and node.image:
                # Check if the image name contains any of our keywords
                image_name = node.image.name.lower()
                if any(keyword in image_name for keyword in keywords):
                    # Calculate distance to Base Color input
                    distance = find_node_distance_to_basecolor(node)
                    if distance is not None:
                        matching_nodes.append((node, distance))

        # If we found any matching nodes, select the one with the shortest distance
        if matching_nodes:
            # Sort by distance (closest to Base Color first)
            matching_nodes.sort(key=lambda x: x[1])
            selected_node = matching_nodes[0][0]

            node_tree.nodes.active = selected_node
            selected_node.select = True
            found_nodes += 1
            print(f"Selected named texture '{selected_node.image.name}' in material: {material.name} (distance to Base Color: {matching_nodes[0][1]})")

    print(f"\nTotal texture nodes selected: {found_nodes}")

# Only run if this script is run directly
if __name__ == "__main__":
    select_diffuse_nodes()
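The fallback selection above reduces to a sort on (node, distance) pairs, with the node closest to the Base Color input winning. A minimal stand-alone sketch of that tie-breaking, with illustrative image names in place of real nodes:

```python
# Candidate (image name, distance-to-Base-Color) pairs, as the
# name-based fallback search would collect them.
matching_nodes = [
    ("wood_color.png", 3),
    ("wood_albedo.png", 1),
    ("wood_diffuse.png", 2),
]

# Sort by distance so the texture closest to the Base Color input wins.
matching_nodes.sort(key=lambda x: x[1])
selected = matching_nodes[0][0]
print(selected)  # wood_albedo.png
```

Since `list.sort` is stable, candidates at equal distance keep their node-tree iteration order, so the first-encountered node wins ties.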
+100
@@ -0,0 +1,100 @@
import bpy


class SpawnSceneStructure(bpy.types.Operator):
    """Create a standard scene collection structure: Env, Animation, Lgt with subcollections"""
    bl_idname = "bst.spawn_scene_structure"
    bl_label = "Spawn Scene Structure"
    bl_options = {'REGISTER', 'UNDO'}

    def find_layer_collection(self, layer_collection, collection_name):
        """Recursively find a layer collection by name"""
        if layer_collection.collection.name == collection_name:
            return layer_collection

        for child in layer_collection.children:
            result = self.find_layer_collection(child, collection_name)
            if result:
                return result
        return None

    def execute(self, context):
        scene = context.scene
        scene_collection = scene.collection

        # Define the structure to create
        structure = {
            "Env": ["ROOTS", "Dressing"],
            "Animation": ["Cam", "Char"],
            "Lgt": []
        }

        created_collections = []
        skipped_collections = []

        try:
            for main_collection_name, subcollections in structure.items():
                # Check if the main collection already exists
                main_collection = None
                for existing_collection in scene_collection.children:
                    if existing_collection.name == main_collection_name:
                        main_collection = existing_collection
                        skipped_collections.append(main_collection_name)
                        break

                # Create the main collection if it doesn't exist
                if main_collection is None:
                    main_collection = bpy.data.collections.new(main_collection_name)
                    scene_collection.children.link(main_collection)
                    created_collections.append(main_collection_name)

                # Create subcollections
                for subcollection_name in subcollections:
                    # Check if the subcollection already exists
                    subcollection_exists = False
                    existing_subcollection = None
                    for sub in main_collection.children:
                        if sub.name == subcollection_name:
                            subcollection_exists = True
                            existing_subcollection = sub
                            skipped_collections.append(f"{main_collection_name}/{subcollection_name}")
                            break

                    # Create the subcollection if it doesn't exist
                    if not subcollection_exists:
                        subcollection = bpy.data.collections.new(subcollection_name)
                        main_collection.children.link(subcollection)
                        created_collections.append(f"{main_collection_name}/{subcollection_name}")

                        # Apply special settings to the ROOTS collection
                        if subcollection_name == "ROOTS":
                            subcollection.hide_viewport = True  # Hide in all viewports
                            # Exclude from the view layer
                            view_layer = context.view_layer
                            layer_collection = self.find_layer_collection(view_layer.layer_collection, subcollection_name)
                            if layer_collection:
                                layer_collection.exclude = True
                    else:
                        # Apply settings to an existing ROOTS collection if it wasn't properly configured
                        if subcollection_name == "ROOTS" and existing_subcollection:
                            existing_subcollection.hide_viewport = True
                            view_layer = context.view_layer
                            layer_collection = self.find_layer_collection(view_layer.layer_collection, subcollection_name)
                            if layer_collection:
                                layer_collection.exclude = True

            # Report results
            if created_collections:
                created_list = ", ".join(created_collections)
                if skipped_collections:
                    skipped_list = ", ".join(skipped_collections)
                    self.report({'INFO'}, f"Created: {created_list}. Skipped existing: {skipped_list}")
                else:
                    self.report({'INFO'}, f"Created scene structure: {created_list}")
            else:
                self.report({'INFO'}, "Scene structure already exists - no collections created")

            return {'FINISHED'}

        except Exception as e:
            self.report({'ERROR'}, f"Failed to create scene structure: {str(e)}")
            return {'CANCELLED'}
+1458 lines: File diff suppressed because it is too large
+1639 lines: File diff suppressed because it is too large
@@ -0,0 +1,104 @@
import bpy
from ..ops.NoSubdiv import NoSubdiv
from ..ops.remove_custom_split_normals import RemoveCustomSplitNormals
from ..ops.create_ortho_camera import CreateOrthoCamera
from ..ops.spawn_scene_structure import SpawnSceneStructure
from ..ops.delete_single_keyframe_actions import DeleteSingleKeyframeActions
from ..ops.find_material_users import FindMaterialUsers, MATERIAL_USERS_OT_summary_dialog
from ..ops.remove_unused_material_slots import RemoveUnusedMaterialSlots
from ..ops.convert_relations_to_constraint import ConvertRelationsToConstraint


class BulkSceneGeneral(bpy.types.Panel):
    """Bulk Scene General Panel"""
    bl_label = "Scene General"
    bl_idname = "VIEW3D_PT_bulk_scene_general"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = 'Edit'
    bl_parent_id = "VIEW3D_PT_bulk_scene_tools"
    bl_order = 0  # Make this appear at the very top of the main panel

    def draw(self, context):
        layout = self.layout

        # Scene Structure section
        box = layout.box()
        box.label(text="Scene Structure")
        row = box.row()
        row.scale_y = 1.2
        row.operator("bst.spawn_scene_structure", text="Spawn Scene Structure", icon='OUTLINER_COLLECTION')

        # Mesh section
        box = layout.box()
        box.label(text="Mesh")
        # Add a checkbox for the only_selected property
        row = box.row()
        row.prop(context.window_manager, "bst_no_subdiv_only_selected", text="Selected Only")
        row = box.row(align=True)
        row.operator("bst.no_subdiv", text="No Subdiv", icon='MOD_SUBSURF').only_selected = context.window_manager.bst_no_subdiv_only_selected
        row.operator("bst.remove_custom_split_normals", text="Remove Custom Split Normals", icon='X').only_selected = context.window_manager.bst_no_subdiv_only_selected

        row = box.row(align=True)
        row.operator("bst.create_ortho_camera", text="Create Ortho Camera", icon='OUTLINER_DATA_CAMERA')
        row = box.row(align=True)
        row.operator("bst.free_gpu", text="Free GPU", icon='MEMORY')

        # Materials section
        box = layout.box()
        box.label(text="Materials")
        row = box.row(align=True)
        row.operator("bst.remove_unused_material_slots", text="Remove Unused Material Slots", icon='MATERIAL')
        row = box.row(align=True)
        row.operator("bst.find_material_users", text="Find Material Users", icon='VIEWZOOM')

        # Animation Data section
        box = layout.box()
        box.label(text="Animation Data")
        row = box.row(align=True)
        row.operator("bst.delete_single_keyframe_actions", text="Delete Single Keyframe Actions", icon='ANIM_DATA')
        row = box.row(align=True)
        row.operator("bst.convert_relations_to_constraint", text="Convert Relations to Constraint", icon_value=405)


# List of all classes in this module
classes = (
    BulkSceneGeneral,
    NoSubdiv,
    RemoveCustomSplitNormals,
    CreateOrthoCamera,
    SpawnSceneStructure,
    DeleteSingleKeyframeActions,
    FindMaterialUsers,
    MATERIAL_USERS_OT_summary_dialog,
    RemoveUnusedMaterialSlots,
    ConvertRelationsToConstraint,
)


# Registration
def register():
    for cls in classes:
        bpy.utils.register_class(cls)
    # Register the window manager property for the checkbox
    bpy.types.WindowManager.bst_no_subdiv_only_selected = bpy.props.BoolProperty(
        name="Selected Only",
        description="Apply only to selected objects",
        default=True
    )
    # Register a temporary material property for the Find Material Users operator
    bpy.types.Scene.bst_temp_material = bpy.props.PointerProperty(
        name="Temporary Material",
        description="Temporary material selection for the Find Material Users operator",
        type=bpy.types.Material
    )


def unregister():
    for cls in reversed(classes):
        try:
            bpy.utils.unregister_class(cls)
        except RuntimeError:
            pass
    # Unregister the window manager property
    if hasattr(bpy.types.WindowManager, "bst_no_subdiv_only_selected"):
        del bpy.types.WindowManager.bst_no_subdiv_only_selected
    # Unregister the temporary material property
    if hasattr(bpy.types.Scene, "bst_temp_material"):
        del bpy.types.Scene.bst_temp_material
+1033 lines: File diff suppressed because it is too large
@@ -0,0 +1,41 @@
# Raincloud's Bulk Scene Tools

A couple of Blender tools to help automate some tedious tasks in scene optimization.

## Features

- Bulk Data Remap
- Bulk Viewport Display

Officially supports Blender 4.4.1, but may still work on older versions.

## Installation

1. Download the add-on (zip file)
2. In Blender, go to Edit > Preferences > Add-ons
3. Click "Install..." and select the downloaded zip file, or drag and drop it into Blender if supported
4. Ensure the add-on is enabled

## Usage

1. Open the Blender file/scene to optimize
2. Open the side panel > Edit tab > Bulk Scene Tools
3. Data remapper: select the data types to remap. Currently supports Images, Materials, and Fonts. Toggle a data type to exclude it from remapping.
4. View the number of duplicates and use the dropdown menus to select which duplicate groups to exclude from remapping.
5. Remap. This action is undo-able!
6. If remapping worked to your liking, Purge Unused Data so the Viewport Display function has fewer materials to calculate, unless you are applying it only to selected objects.
7. Activating Solid viewport shading mode is recommended so you can see what the Material Viewport function is doing. Change the color from Material to Texture if you prefer; the function should find the diffuse texture if one exists.
8. Apply the material calculation to selected objects if preferred.
9. Manually set the display color for objects that couldn't be calculated, or weren't calculated to your preference.

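The duplicate grouping behind steps 3-4 can be sketched outside Blender. This is a plain-Python illustration, not the add-on's actual code: the `group_duplicates` helper is hypothetical and assumes duplicates follow Blender's `.001`-style numeric suffix convention.

```python
import re
from collections import defaultdict

def group_duplicates(names):
    """Group datablock names by base name, treating Blender's numeric
    suffixes (e.g. 'wood.001') as duplicates of the base ('wood')."""
    groups = defaultdict(list)
    for name in names:
        # Strip a trailing '.NNN' suffix to recover the base name
        base = re.sub(r"\.\d{3}$", "", name)
        groups[base].append(name)
    # Keep only groups that actually contain duplicates
    return {base: members for base, members in groups.items() if len(members) > 1}

print(group_duplicates(["wood", "wood.001", "wood.002", "metal"]))
```

Each surviving group corresponds to one dropdown entry in step 4; remapping then points every member's users at a single chosen datablock.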
## Workflow for unpacking and organizing all textures

1. Pack all images (File > External Data > Pack Resources, or BST > Bulk Path Management > Workflow > Pack)
2. Rename all image datablocks as preferred (easily done within the Bulk Operations dropdown; the Simple Renaming extension from the Blender community is also recommended)
3. Remap all image paths as preferred (Bulk Operations)
4. Bulk Path Management > Save All (saves the selected images if any are selected; otherwise saves all images in the file)
5. Remove pack

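Step 3's path remapping amounts to pointing each image datablock at a new file location. A minimal sketch of the path-building logic, in plain Python: the `remap_image_path` helper, the `//textures` folder layout, and the PNG fallback are all assumptions for illustration, not the add-on's behavior.

```python
import os

def remap_image_path(image_name, textures_dir="//textures"):
    """Build a Blender-style relative filepath for an image datablock.
    The '//' prefix is Blender's convention for paths relative to the
    .blend file. Falls back to .png when the name has no extension
    (an assumption for this sketch)."""
    base, ext = os.path.splitext(image_name)
    if not ext:
        ext = ".png"
    return f"{textures_dir}/{base}{ext}"

print(remap_image_path("wood.png"))
```

In Blender itself, the resulting path would be assigned to `image.filepath` before saving, so that Save All (step 4) writes each texture into the chosen folder.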
## Author

- **RaincloudTheDragon**