Here we see several things coming together, but the one I want to emphasize is that this is an explicit statement that #Haidra would like to help train models on (potentially) stolen data.
It's again hard to claim that you are ethically neutral middleware when this is a stated and expressed goal.
I'd also like to highlight exactly what "open" might mean in this context, specifically with this paper on the topic, which was just published: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807
Let's not mince words.
It is the creator saying outright that they were trying to solve the problem of being unable to get people to work #ForExposure at mass scale.
This is why I'm describing what they are doing with language as a "shell game."
#Haidra _cannot_ be ethically neutral middleware when the author's stated purpose in building and running the system is to make it so that people can avoid needing to pay artists for their work.
There are other problems, but start there.
6/
Specifically, I am of the view that if they are accepted then #nivenly will become complicit in the behavior of #AiHordeDotNet.
I _don't_ think that the Nivenly board sees it that way, but as an engineer _I_ would. I could draw a line between "us" and "them" if .net were run by someone who was not a lead maintainer on #Haidra.
This would not be as big of a deal, except that I feel there's a bit of a shell game going on when it comes to responsibility, and that's part of my core objection.
4/
#nivenly #aihordedotnet #haidra
While I think that the situation with #nivenly is one where a call-in is appropriate and one where I and others can do a lot of good ( https://hachyderm.io/@hrefna/110896943805743127 ), I think the situation with #haidra is a lot less so.
Even if I knew nothing at all about the situation or about haidra I would be heavily discouraged by their answer.
I think that's a Problem™ and their attitude tells me I can't help once they join.
So I will be voting against them joining. I encourage others to do the same.
My thoughts on this are that the situation with #haidra illustrates a good opportunity to course correct. That they didn't think about these externalities is a problem, but it is also a _fixable_ problem and something that we can likely make sure is always on their radar going forward.
If this interests you then let's talk about what that might look like!
Basically: If you _are_ that person and are looking to build that base of support, please let me know! Especially if you have done this before. There's a lot of potential here and I'd love to see it realized.
But I'm tired and it's a lot of work to do it right.
If I had the time and the energy I would be doing things with #nivenly properly:
* Paying into it
* Going around and looking for other nivenly members and talking to them about #haidra and #generativeAI
* Talking to non-members and building a base of support for ethical development and trying to get them to join
As it is, if I had the energy for this, there is somewhere I would rather be focusing it: my union.
I hope someone will take up the torch there with nivenly and help them onto a good path.
#nivenly #haidra #generativeAI
Given that we have environmental impact analyses for:
* Bitcoin
* Ethereum
* Cars
* Video Games
* Home pesticide use
Color me dubious that it is "infeasible" to do even a rough order-of-magnitude estimate for #haidra…
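To make the point concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (GPU draw, seconds per image, daily request volume, grid carbon intensity) is a placeholder assumption of mine, not a measurement of #AiHorde; the point is only that once you have real worker telemetry, the arithmetic is trivial.

```python
# Back-of-envelope energy/carbon estimate for a distributed inference pool.
# ALL constants are illustrative placeholders, NOT AiHorde measurements.

GPU_DRAW_WATTS = 300        # assumed average draw of a worker GPU under load
SECONDS_PER_IMAGE = 10      # assumed wall-clock time per image generation
IMAGES_PER_DAY = 100_000    # assumed daily request volume across the pool
GRID_KG_CO2_PER_KWH = 0.4   # rough global-average grid carbon intensity

kwh_per_image = GPU_DRAW_WATTS * SECONDS_PER_IMAGE / 3_600_000
kwh_per_day = kwh_per_image * IMAGES_PER_DAY
kg_co2_per_day = kwh_per_day * GRID_KG_CO2_PER_KWH

print(f"~{kwh_per_image * 1000:.2f} Wh per image")
print(f"~{kwh_per_day:.0f} kWh/day, ~{kg_co2_per_day:.0f} kg CO2/day")
```

Swap in measured numbers and you have exactly the rough estimate I'm asking about.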
Okay, I've put these questions on #haidra and #nivenly together into a slightly more coherent single post on the forums for discussion: https://github.com/nivenly/community/discussions/2#discussioncomment-6647668
If your fundamental claim is: "We just built an API, it is up to others how they use it!"
Then you have:
* Deliberate and direct integrations with a set of highly problematic systems.
* Advertised use cases that are largely built around those problematic systems.
* A main deployment of your API that exists to serve said systems.
* Social media accounts that primarily showcase your system through said systems.
It is hard to simply wave away your support of those problematic systems.
This is in direct reaction to the #haidra comments on the #nivenly GitHub discussion ( https://github.com/nivenly/community/discussions/2#discussioncomment-6645718 ), which amount to "but we're not training the models or gathering training data, we're innocent".
I'm following @hrefna's thoughts on the subject, which are much more eloquently put than mine: https://hachyderm.io/@hrefna/110833561126744694
8. What does #nivenly's involvement mean for #haidra, with respect to its mission of bringing "sustainable governance to open source projects and communities around the globe" and supporting "the maintainers' independent oversight of their projects"? What does Haidra hope to get out of this? How about nivenly? (Nivenly Mission Statement).
9. What lessons does nivenly's board take from this announcement and the response? What actions will they take as a result going forward?
6/*
Continuing.
7. I view it as a mistake to think of this as "all or nothing." Are there good intermediate steps that could be considered by #nivenly and #haidra such that haidra could be grown into a better project over time as a series of targets? For example, associating without making them a "Nivenly project" until some set of criteria are met? Providing some resources but limiting fiscal ones until other criteria or benchmarks are achieved? (ACM 3.2, 3.5)
5/
6. Given the structure of #Haidra, there are significant environmental concerns in both the training and execution of models under the #AiHorde. Can benchmarks be set to reduce this environmental impact over time? Would the Haidra project be amenable to treating this as a high priority and holding themselves accountable to a reasonable schedule here? (ACM 1.1, 1.2, 3.2)
More later as I think of them, time to grab dinner.
4/*
5. While #Haidra asserts that individuals are not identifiable ( https://github.com/Haidra-Org/AI-Horde/blob/main/FAQ.md#can-workers-spy-on-my-prompts-or-generations ), there do not seem to be strong safeguards in place here, as far as I can ascertain. Would the parties be open to an audit, and to treating any privacy risks identified as P0 priorities to fix, even if doing so degrades #AiHorde as a service or renders it infeasible? Is this something that #nivenly could invest in? (ACM 1.2, 1.6, 1.7, 2.4)
3/
3. Relatedly: What would the members of #nivenly think of codifying a series of ethical principles around the use of generative AI? Would #Haidra and #AiHorde be willing to abide by them? (ACM 1.2, 3.4, 4.1).
4. Currently Haidra appears to be taking an "I'm a sign, not a cop" approach to the problem of kudos being exchanged for money. Would Haidra and Nivenly be open to reexamining this strategy and determining whether other mechanisms might be more robust, or whether the current one can be secured? (ACM 1.2, 2.5)
2/
Putting some of my thoughts here with respect to #haidra and #nivenly, which I may formalize later into questions for the discussion:
1. Would Haidra be willing to commit to zero use or advertising of models/workers trained on data sourced from copyrighted material without the rights holder's permission, irrespective of legal fair-use qualifiers? (ACM 1.6).
2. Has an analysis been done on the environmental impact of #AiHorde? What would this look like? (ACM 1.1, 1.2)
1/
I am not as much of a hardliner on #generativeAI as many people I know, but I care deeply about data provenance and data stewardship (as you know if you've followed me for any length of time).
Seeing @nivenly's messaging, the (somewhat mealy-mouthed, tbh) response from those involved with #haidra, and the jump directly into such a dubious field without a guiding set of ethical principles is deeply concerning and disheartening to me.
It's a group with a lot of promise, but this is not good.