For those not familiar, this is basically crowdsourced rendering. You contribute compute and earn credits that you can later redeem against the collective render farm.
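The credit model described above can be sketched as a tiny ledger. To be clear, the point values and rules below are invented for illustration; SheepIt's actual accounting is not specified in this thread:

```python
# Hypothetical sketch of a SheepIt-style credit ledger: contributors
# earn points for frames rendered on their hardware, then spend those
# points to have the farm render their own projects. Point values and
# rules are illustrative assumptions only.

class CreditLedger:
    def __init__(self):
        self.balances = {}  # user -> credit points

    def record_render(self, user, frames, points_per_frame=10):
        """Credit a contributor for frames rendered on their hardware."""
        self.balances[user] = self.balances.get(user, 0) + frames * points_per_frame

    def submit_job(self, user, frames, points_per_frame=10):
        """Spend credits to have the farm render the user's own project."""
        cost = frames * points_per_frame
        if self.balances.get(user, 0) < cost:
            raise ValueError("not enough credits earned yet")
        self.balances[user] -= cost
        return cost

ledger = CreditLedger()
ledger.record_render("alice", frames=120)      # alice donates compute: +1200
spent = ledger.submit_job("alice", frames=50)  # later redeems it: -500
print(ledger.balances["alice"])                # 700
```

The interesting design property is that the farm never holds money, only an IOU of compute time, which is what makes the "amateur cloud service" framing below possible.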
The reason is simply that a weird copyright paradox can form when a work is rendered on someone else's computer outside a company. No one who does media for a living will risk their revenue model on some amateur cloud service.
Now, for those lucky few with a dozen RTX 5090s lying around unused, it is a niche problem most folks wish they had right now. lol =3
Why? Copyright protects creativity, not hardware ownership. J.K. Rowling doesn't print her books in her apartment, either.
An external work done by someone else is implicitly owned by that creator.
i.e. if Rowling wrote some description about a bigoted neglected single mom striking it rich, and someone else actually wrote the work... then ownership is not guaranteed without legal assignment of rights to said work (it depends on where you are in the world). Thus, a publisher may not pay the muse unless under a legal obligation of rights transfer, and similar existing works further complicate exactly who owns the copyrights.
When you publish media or games, the provenance of assets/compositions becomes extremely important in a commercial setting.
I didn't make the rules, but I do hire project artists all the time. The paperwork involved ends up ridiculously complicated and requires specialized lawyers. This is why we avoid "AI" content, cloud-based GPU services, and any vendor ignorant enough to force the issue. =3
> An external work done by someone else is implicitly owned by that creator.
Rendering an image is not "creating a work" in the copyright sense. There may be all kinds of other legal quagmires, implicit contracts, etc., but copyright is not one of them.
> someone else actually wrote the work
So you throw in a totally different situation? If the Blender scene has been modelled by someone else, then yes, that someone owns the copyright. But not because I let someone else render the completed scene.
If they are an employee under legal agreements, then the firm retains copyright. Yet if the render node is another company's property, it is highly advisable to talk with your IP lawyers to figure out the jurisdictional copyright rules in your country.
'Rendering an image is not "creating a work" in the copyright sense.'
Perhaps your super awesome precedent case against the BBC, Sony, and/or Disney will end in your favor. The chance is not 0%, but it is pretty close, from what the IP firms have advised.
'If the Blender scene has been modelled by someone else, then yes, that someone owns the copyright'
That is just not true in most places, and including assets/compositions from libraries you "found" does not assign you the copyright to use said work. Even if some con artist on the Unity store sold you "rights", there is no guarantee they were theirs to assign.
Please consider talking with an IP lawyer in your jurisdiction, as your ludicrous notions of ownership could end up a liability. lol Best of luck =3
You have no clue at all. Goodbye!
https://www.youtube.com/watch?v=YhgYMH6n004
You were warned this area gets complicated; there are some scenarios that get truly bizarre. Get some popcorn, and best of luck =3
This is of course nonsense, but interestingly enough it is also a FAQ entry of SheepIt:
> Who owns the copyright of the images generated?
> SheepIt Render Farm does not lay any claim to generated images, it only acts as a tool to provide compute power to owners of the project. The owner of the project will have all rights reserved to them; As long as the claimant is the true rights holder of all assets used, and complies with governed free use laws, as well as potential rights holder permissions.
Otherwise I guess GitHub owns the copyright on all software artifacts built on their CI systems. Obviously not how copyright works, like at all.
"GitHub owns the copyright on all software artifacts build on their CI systems"
Good question; someone should get the IP lawyers to go over the EULAs of both git and the login email identity service providers. However, I would assume for most FOSS-licensed projects it is likely a moot argument. Microsoft probably wouldn't do anything evil...
Hard to say for sure... One would assume GitHub implicitly agrees to the FOSS project license by hosting the source, and is thus still in compliance when compiling the binary in the CI pipeline.
Blender is cheaper than Maya and Autodesk Nuke... and accessible to every artist.
Thus, it is a very common tool choice behind the scenes (targeting OpenEXR frame sequences and post-render compositing layers), but the color profiles are usually still done with professional video editing software. The OpenEXR format offers a few tricks for cleaning up lighting without re-rendering a scene.
Blender's problem is that it is perpetually beta, and plugin ecosystems can be unreliable in a production setting. The film Flow was a miracle to pull off with a tool that has dozens of known broken features every "release".
It is like any other tool, in that most of its features take a lot of time to master. For low-poly game assets, it is totally worth a donation. =3
I like Blender in many ways, and wish their development path was more user-workflow regression-tested with actual output =3
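The OpenEXR "tricks for cleaning up lighting without re-rendering" mentioned above typically rely on per-light render passes stored as extra layers in a multilayer EXR: since light transport is additive, the beauty image is the sum of the per-light AOVs, so lights can be re-balanced in compositing. A minimal sketch of the idea, using plain lists in place of the per-channel float data an EXR file would actually hold (no EXR I/O, to keep it self-contained):

```python
# Illustrative sketch of relighting via per-light AOVs: each light is
# rendered to its own layer, and the final image is a weighted sum of
# those layers. Adjusting the weights "relights" the shot without a
# re-render. Three pixel values stand in for whole image planes.

key_light  = [0.8, 0.5, 0.1]   # per-pixel contribution of light 1
fill_light = [0.1, 0.2, 0.3]   # per-pixel contribution of light 2

def recombine(aovs, gains):
    """Weighted per-pixel sum of per-light AOVs -> new beauty pass."""
    return [sum(g * a[i] for g, a in zip(gains, aovs))
            for i in range(len(aovs[0]))]

original = recombine([key_light, fill_light], [1.0, 1.0])
relit    = recombine([key_light, fill_light], [0.5, 2.0])  # dim key, boost fill
```

In a real pipeline the same arithmetic runs in the compositor over full float image planes read from the multilayer EXR; only the weights change, never the rendered samples.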
Autodesk Nuke? That was genuinely painful to read. Setting aside the Freudian slip, you're projecting an unwarranted level of confidence, especially considering the information you're presenting is either outdated or just factually incorrect. I'm left wondering, why is that?
Yes, that is a non-editable transposition typo, as per YC... but considering Adobe and Autodesk own just about everything now... it is on my mind a lot.
No, we don't use Nuke, as the in-house rudimentary matte-painting solution is adequate for our project needs and is natively compatible with the other choices we made. Reallusion products have been a mixed experience for mocap, and offer limited export options.
We are currently trying out the iPhone Unreal mocap, FaceIt, and Blender:
https://apps.apple.com/us/app/live-link-face/id1495370836
https://faceit-doc.readthedocs.io/en/latest/
You may want to clean up that potty mouth when offering advice; finding options that reduce the time commitment is always welcome input. Our project needs are rather basic, and have nothing to do with big-screen film VFX. Best of luck =3
"Typo". Editing previous responses to guide the discussion is bad manners. But what else to expect... Frankly, I struggle to understand why you feign knowledge you don't possess. Instead of simply admitting a lack of understanding, you double down, revealing an even deeper lack of technical expertise with each post. Even unasked, as in your last response. Boggles the mind.
Indeed, I discovered Reallusion was not a good direction early on, but we didn't need to double down on something like the Rokoko solutions either. Even after unproductive discussions with their developers asking for a key feature, it was clear it still wouldn't meet the project goals. The 9-seat studio license for FaceIt was inexpensive, has shown reasonable results, and I like supporting that kind of FOSS-compatible work.
Did you actually have any insights, observations, or questions?
They make medications for mind "Boggles" I hear. Best of luck =3
Your first paragraph once again goes into technical details without any connection to the topic being discussed. It reads like unprompted technical jargon that lacks context or relevance to the current conversation.
>> Did you actually have any insights, observations, or questions?
Yes, my initial comment serves as a warning to others that your technical insights, observations, and questions are as unreliable as LLM hallucinations. Unfortunately for you, there is no medication for willful ignorance. =3
>Unfortunately for you, there is no medication for willful ignorance. =3
Indeed, it has been an unfortunate dialogue with someone focused on insecure, abusive posturing, and on ignoring invitations for a constructive exchange of information. I tried to share a few of our project's use-cases, but obviously don't speak your unique colloquial salty language.
Best of luck, this is no longer a meaningful conversation. Bye =3
The parts of Blender that are security concerns are disabled, so not all of the Blender pipeline is available.
From memory, you basically only see a blurry thumbnail.
Doubt anyone is using this for serious work that has copyright risk, though... it's just people playing with Blender from what I can tell, so a more relaxed stance is fine.
Why obviously?
The short version is: no, it is not.
For this reason, render farms for the CGI you see in blockbusters and series nowadays are usually still CPU-only.
How do I know? I work in that industry.
That only applies to video game level stuff. Most "real" scenes from professional movies won't fit in RAM on a typical desktop, let alone the extremely limited VRAM on a GPU. For example (from 2016, so 10 years out of date): https://www.disneyanimation.com/data-sets/?drawer=/resources...
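To put rough numbers on the memory point (these are my own illustrative figures, not taken from the Disney data set linked above):

```python
# Back-of-envelope arithmetic for why film-scale scenes blow past GPU
# memory: geometry alone at production scale can dwarf VRAM. Vertex
# counts and layout below are illustrative assumptions.

GB = 1024 ** 3

def geometry_bytes(n_vertices):
    # position + normal + one UV set, stored as 32-bit floats:
    # (3 + 3 + 2) floats * 4 bytes each = 32 bytes per vertex
    return n_vertices * 32

# A hypothetical hero environment: 2 billion vertices after subdivision
scene = geometry_bytes(2_000_000_000) / GB
print(f"{scene:.1f} GiB of raw geometry")  # ~59.6 GiB, before any textures

# versus a 24 GB consumer GPU
print(scene > 24)  # True: does not fit, even with nothing else loaded
```

And this counts only raw vertex data; textures, volumes, acceleration structures, and framebuffers all compete for the same pool.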
That's quite surprising to me, as a regular hobbyist Blender user who sees at least 4x speedups on GPU vs CPU. Is it because of VRAM limitations on huge scenes? I assumed that big render farms used rigs with many GPUs with some proprietary technology to share memory between them to be able to fit the massive scenes.
Or are you actually responding directly to what I was saying, and it turns out that on big scenes, even apart from the memory use, CPU rendering is somehow more efficient than GPU rendering? I don't think I'm prepared to accept that claim without a lot more substantiation.
edit: I found this thread here: https://news.ycombinator.com/item?id=25616372
I guess the conclusion is that it is really about memory use, but at least 4 years ago, there wasn't a total consensus on the topic.
There are many aspects to this. The biggest is indeed memory. Scene complexity in VFX has always eaten hardware advances.
Average frame time has been about 2h throughout my career, and it never got smaller, no matter how fast the available hardware became (and this includes GPUs).
And because scene complexity was always larger than what fit on GPUs, for most work this was always the limiting factor in how effective they could be.
Another issue people in these debates ignore is R&D.
The amount of R&D you have to spend to make cutting-edge offline rendering work perfectly on the moving target that is GPUs is insane compared to CPUs.
To give you an idea, look at the list of limitations RenderMan XPU (CPU+GPU) has compared to its CPU counterpart (nowadays called RIS) [1]. If you're a CG supervisor, that page is a giant red flag for relying on XPU in your production in any way, because any limitation you hit has to be worked around. And the biggest expense in production is human labor.
Pixar has probably sunk dozens of man-years of R&D into XPU (I think it was announced at SIGGRAPH 2017), i.e. that's seven years times the number of developers.
In a shootout a year ago (with a scene simple enough to fit on a GPU, mind you), the noise levels of the RenderMan XPU render were marginally better than those of 3Delight, for the same render time. [2] And that's after seven years of R&D and with the aforementioned limitations (none of which apply to 3Delight or to RenderMan RIS, but the latter had unacceptable noise levels if you look at the images).
3Delight only supports CPUs, and their team of just half a dozen people has spent 100% of its time, since 1999, on that. Instead of chasing GPUs.
If you have complex scenes or volumes, 3Delight will beat any GPU renderer (and any CPU renderer anyway). I'm happy to bet on this.
On a side note, my guess is that GPU-related R&D was a factor in what killed Clarisse. They even successfully ported OSL to the GPU -- laudable! But shortly after, the company went under...
Maybe their time would have been better spent making the renderer more robust and faster on CPU instead. Just a guess, ofc.
This is the high-end stuff. There are lots of GPU path tracers out there, and they're all great as long as your scene complexity doesn't cross a certain threshold. And as long as you do not need OSL support; I think only Eevee has this.
[1] https://rmanwiki-26.pixar.com/space/REN26/19661981/XPU+Featu...
[2] https://github.com/tristan-north/renderBench
I see. Thank you for the detailed response. It sounds to me like GPUs could be a better option if there were significantly more stability and standardization across the board, but we're really just not in that place.
Yes, absolutely. If GPUs were as accessible, and things as stable and standardized, as in CPU land, many of the above caveats would not apply.
The memory barrier may still be there, but my gut feeling is we would have so much more code running on GPUs by then that solutions for this, too, would have been developed long ago.
Raph Levien is currently writing a blog post from the perspective of someone who has been working on GPU-accelerated 2D vector rendering.
It echoes some of the issues I touched on above.
Well worth reading when it is released (it will probably hit the HN front page anyway):
https://github.com/raphlinus/raphlinus.github.io/pull/104
CPU or GPU but obv favours gpu
https://flamenco.blender.org/download/