As can be read in their statement from October 22, 2024, the FSF is considering the development of software focused on machine learning. Within that statement, the FSF calls out the difference between free and non-free implementations. Hyperbola as a project does not recognize this statement as a step forward, and from our perspective the FSF fails to recognize the tremendous consequences of the technology itself.
As we have stated before, we cannot call any kind of software "free" when not all parts of it, including all data, are released under free and permissive licenses. Software described as being used for "machine learning" also relies, by definition, on unethically sourced data. Even when people are asked beforehand whether their data may be used for further evaluation and training within such systems, it is never clear what will ultimately happen with their data. Privacy is a sensitive point, and a necessary delimitation must always be respected: data should only be used for the occasion it was provided for, nothing more and nothing less. Personal data is by definition always private and is not to be touched by anyone except the user who owns it, provides it and is in fact related to it.
Within their statement, the FSF is now trying to create the impression that such a distinction is within reach. In fact, no algorithm falling under the term "machine learning" can or will be considered free and libre if we place the rights of users first and foremost. Users should always be able to decide what happens with their data and information, and what happens with their systems and software. The FSF is failing to follow this, to protect users' rights, and to uphold the stance of informing and teaching users about their possible alternatives.
Hyperbola firmly declines such shortened stances, and therefore also the definition given by the FSF, which from our perspective is not only too generalized but also dangerous: when we hand tools to proven liars, who are thereby able to endanger whole parts of global society and to undermine people's belief in democratic processes with propaganda, lies and fake imagery, we need to rethink immediately. As a project we will never support such further development, nor will we add questionable software merely in the name of "progress". There is no "progress" when we endanger and threaten minorities worldwide, because democracy exists for exactly this reason: to grant everyone who respects it a voice and a chance. We need more competence in recognizing dangers instead, more media literacy instead of more digitalization. When we think of "digitalization" only as part of "progress" instead of as part of "inclusion", we fail in general, and we also fail to see that everything within it is also and always political.

Otherwise trust will be lost, and credibility will soon follow: the FSF does not understand exactly that point, as one of the hallmarks of an institution in crisis is that, far from preparing for the future, it is barely capable of managing the present. They have failed to do this in the past and are now repeating the exact same scheme, undermining the meaning of altruistically oriented software, data and information!

As a whole community around free, libre software and culture, we also need to recognize the problem of lies, fake imagery and false information. On the one hand, many people pretend to be neutral and insist that software projects should be the same. But we are in fact NOT neutral: the moment we enter the debate about data and about the problems on the rise, we are NOT neutral. We cannot be. As a society we do not want to listen, we do not want to learn and recognize the issues. We just want our voices heard, and so any means seem acceptable and welcome. Voices heard on the basis of facts are one fair and clear matter; voices raised only to undermine, demolish and finally abolish democracy are quite different from that and are not a fact-based debate. There is no freedom within machine learning applications, and there will never be a free machine learning application.
Even with the best intentions, we fail as humankind as a whole when tools are misused to endanger the path toward fair living conditions for every being. And yes, we recognize that such tools are released under free, permissive licenses, but the data used to create the later results and information is surely not licensed that way, and most of the time its provenance is not clearly stated either, which generates far more questions than answers. We therefore have no interest in taking part, or even being part, in a fight over who is right or wrong while handing control over information to authoritarians. We have no interest in seeing science-based iteration and any kind of reasonable debate go down using the systems and software we provide. And we do not hand out lighters to people who like to play with fire! Equality, fairness and justice are elementary parts of peaceful living, and we cannot transfer them to any algorithm, program or application. We will never be able to teach software the complexity of social life or solve our social issues with it. We need to solve them on our own, dear FSF. So stop selling illusions to the people and the community.
Further details: