```
$ uptime
10:06:46 up 698 days, 7 min, 0 users, load average: 0.30, 0.24, 0.18
```
Just realized that I've got #Platypush running my cameras on some Raspberry Pi Zeros in my house that haven't been rebooted in two years.
The Platypush process itself has remained up and running in all this time, all while serving tens of camera feed requests per day.
I guess I'll have to restart these machines some time soon, or get stuck with Python 3.7 indefinitely...
Just migrated the #Platypush community page from #Reddit to a new self-hosted #Lemmy instance: https://lemmy.platypush.tech/c/platypush
#Platypush 0.50.1 is finally out!
I started the entities framework refactor in April 2022, hoping that laying the foundations for the new API wouldn't take me more than a couple of weeks. It ended up taking more than a year, and I rewrote half of the codebase along the way. But the codebase really gained a lot in stability and maturity in the process.
There's still a lot more to come. Eventually all the integrations should communicate through the new entities API. Once everything is an entity that can be wrapped into a UI widget, the refactor of the dashboard engine will come next. And then there's this idea of automatically managing the configuration and the dependencies through the web panel itself, reducing most of the remaining entry barriers. And more integrations are on the backlog - among them, XMPP, torrent-csv and PirateWeather.
I'm also looking into support for i18n and a11y. As the project grows into something bigger than a tool for myself and a few other enthusiastic geeks, it's time to support languages other than English and to add proper aria tags.
Stay tuned for more news!
A digital audio processing question for the #audio, #math and #physics geeks out there (and, of course, any intersection between the three). I thought that I understood audio synthesizing (and acoustics in general), but this problem is making me question all of my knowledge on the topic.
Suppose that you have two sounds (say, for the sake of simplicity, two MIDI notes, C4 and G4). They have their own associated fundamental frequencies f1 and f2.
Suppose that you build a simple sine wave for each of them with numpy or whatever, and let's say that each has 1000 samples.
The question is: how do I combine these two waves to give two different effects, at least to the human ear?
- Effect 1: f1 and f2 are "perceived" as one single sound, with a harmonics ratio of 3/2 in the case above, where the frequency perceived as "dominant" is the one with the highest amplitude.
- Effect 2: the sounds associated to f1 and f2 are "perceived" as distinct sounds that just happen to be played simultaneously - like in a chord.
If I take the sum of the two resulting sine waves (or, better, of the two numpy arrays of 1000 samples each) and send the resulting wave to the audio device, I get effect 1 - i.e. a fundamental frequency with some harmonics.
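For concreteness, this is the kind of summing I mean - a minimal numpy sketch (the C4/G4 frequencies are the standard equal-temperament values; sample rate and amplitudes are just placeholders):

```python
import numpy as np

SAMPLE_RATE = 44100
f1, f2 = 261.63, 392.00  # C4 and G4 fundamentals (Hz)

t = np.arange(1000) / SAMPLE_RATE           # 1000 samples, as above
wave1 = np.sin(2 * np.pi * f1 * t)
wave2 = 0.5 * np.sin(2 * np.pi * f2 * t)    # quieter upper note

# Plain superposition - this is the single wave I send to the audio device
mixed = wave1 + wave2
mixed /= np.abs(mixed).max()                # normalize to avoid clipping
```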
In order to achieve effect 2, I have to open two distinct audio streams (read "clients"), and send wave 1 to stream 1 and wave 2 to stream 2.
As I'm currently refactoring (and improving) the audio synthesizer extension of #Platypush, I find the latter solution quite inefficient - you may easily be on a system without Pulseaudio and/or with a limited number of simultaneous sound outputs. Even in a Pulseaudio setup with 32 channels, occupying a channel per note when playing polyphonic stuff is very inefficient.
So I'd like to "stuff" even case 2 (i.e. distinct sine waves played simultaneously) into a case-1-like solution (i.e. massage the sine waves and end up with a sound wave with the combination of them - not one with a new sound with harmonics).
And this made me wonder: from a mathematical and physical point of view, what makes the difference between the two cases? If I pluck two strings on my guitar at the exact same time, I perceive the resulting sound as a combination of two distinct waves each with its own fundamental frequency - not like a single sound with some upper harmonics given by the highest note.
Intuitively, the two sounds combine and make the air molecules "ripple" with a wave that should be (again, intuitively) the sum of the two waves.
So how come when I sum two waves on a computer I only get a single-note sound with harmonics? What makes the difference in the way our ears perceive those two cases? My educated guess says that it may have to do with the phase, but my empirical results tell me that it can't be the difference in phase alone.
#audio #math #physics #platypush
The new #Platypush entities dashboard looks good. It took me months of work, but I'm finally getting to a point where everything can be shown in one place, and both the API and the style of all the entities are consistent. I now have a solid foundation to build features like groups, scenes, dashboards and a UI for creating automation routines (so even those who aren't proficient with Python or YAML can build cool things), all looking and feeling the same across all the integrations.
All the new code is now on the main branch, but I don't feel confident enough to make a new release yet.
My system has now ~1000 identified entities, and the UI starts to get way too slow with such numbers. I've been optimizing things for the past few days (like removing the loading animation for entities altogether so the browser doesn't have to render 1000 GIFs or CSS animations when the page loads), but things aren't as quick as I'd like yet. It still takes >1 minute for everything to load on my phone.
I suspect that the next bottleneck to optimize is the websocket client - every entity update/refresh triggers a new event on the websocket, and the Vue app starts struggling to keep the data model up-to-date when it receives 1000 events within a couple of seconds. My browser is still there processing stuff long after my Raspberry Pis have pushed all the events on the websocket.
I'm open to considering alternatives, but none of those that have come to my mind lately (server-side event throttling, bundling multiple events into batches, lazy loading with all the entity groups initially collapsed until the user clicks on them) really satisfies me.
Any web developers out there who have ideas?
I've just improved the loading performance of the new #Platypush entity dashboard by 200% with a simple fix.
Using a font-awesome CSS class instead of an animated GIF for your loading spinner can make a huge difference, if that loading spinner is supposed to be used by 1000 components on a page.
I've been testing the new main branch of #Platypush (with the whole new entity framework exposed both on the backend and the frontend) for a couple of days, and overall I like what I see - the new homepage also looks good in mobile browsers and inside the existing app!
But I feel that the slowness of the new homepage (especially on mobile) when many entities are saved on the system may become a bottleneck.
On my largest installation, with all the integrations enabled, I've got about 1000 entities (yes, I've got plenty of DIY smart devices, plus Bluetooth beacons sent by all kinds of devices tend to pollute the list after a while).
Loading all of them on Firefox on my laptop takes ~10-15 seconds - quite some time, but something that a user would expect if so many records need to be refreshed and rendered. On my phone, that can get up to 1-2 minutes, and the UI can get quite slow in the meantime.
I guess that I'll have to find some clever way of either caching the results on the browser, or setting some sane defaults to prevent rendering all the entities at once, while avoiding abusing the click-to-expand pattern.
Eventually this may not be too much of a deal breaker.
First, I don't expect many users to have Zigbee, Z-Wave, Bluetooth, Smartthings, Hue, cameras etc., each with tens of devices, all enabled on the same installation at once.
Second, once I implement views, dashboards and groups, users will be able to build dashboards covering only the subset of entities that they want to work with simultaneously, without having to render thousands of items all at once.
But it'd be interesting to know how this issue has been tackled in other projects that also have to load and render a lot of data when the webpage is first loaded.
Almost 1000 changed files, 26300 additions, 10000 deletions, and more than a year later, the time has come to finally merge the largest PR I've ever worked on in my life.
I started naming this PR the "Tool album PR": keep your work on hold for too long before releasing to the public, and the public will have increasingly high expectations of your work once you release it.
Designing a framework in #Platypush that uses the same paradigms to model entities of any type (think of Bluetooth speakers, Zigbee lights, Z-Wave sensors, Smartthings/Hue integrations, CPU temperature sensors, smart TVs and buttons, Arduino/ESP machinery, media plugins, cloud instances etc. all sharing the same backend API and taxonomy, frontend building blocks and UI interface) has taken me through a long wild ride, and almost a total (and still in progress) rewrite of the platform.
I'm quite satisfied with the results so far though. The new index page shows everything in one place, like a Google Home, Smartthings or Home Assistant dashboard, but I've added my own twists to support the things I like - smart dynamic grouping, on-the-fly filtering, and a strongly consistent way of naming things coming from different integrations. This gives me a solid foundation to implement entities as flexible widgets that can be imported anywhere. It also makes it much easier to write reusable event hooks: you should ideally be able to subscribe to `EntityUpdateEvent` events that all look and feel the same regardless of the plugin and the entity type.
There's a lot still on the plate, but instead of keeping this PR open for another year or so I'd rather merge now that things are reasonably stable, get feedback, and build more incrementally from now on.
A lot of wiki documentation (and instructions in blog articles) needs to be updated, but the latest docs at docs.platypush.tech already reference the new interface. I probably need to put together a big CHANGELOG entry to document all the breaking changes (although I've tried to keep them to a minimum). There's also the migration to SQLAlchemy 2.0 looming on the horizon.
And then more integrations that need to be migrated to the new framework. Media entities (music and video players, cameras, Chromecasts, multi-room audio plugins etc.) are next on the roadmap, followed by voice assistants and messaging services.
Then there's the support for groups and scenes, the integration with existing groups and scenes (e.g. on the Hue, Smartthings, Zigbee or Z-Wave integrations), as well as the creation of smart dashboards and views with custom groupings of entities - ideally I'd like to make dashboards easily configurable with custom entities through the UI itself, while the current process still requires getting your hands dirty with some XML templating.
The PR is now fully merged into the main trunk, but I'm happy if someone could test it out before I package a new stable release.
A teaser of the new #Platypush entities UI.
Because one panel per integration is nice, but having all the integrations in one place, with all of their entities speaking the same language, is even nicer.
Cons: if you have many integrations with many very "active" entities, performance may suffer a bit, since the UI may have to load hundreds of entities while processing several messages per second. But this will probably become less relevant once I add support for adding entities to custom groups, scenes or dashboards.
I started this PR almost a year ago.
One year and almost 28k LoC changed later, I feel like it's time to wrap things up, spin off the remaining tasks as separate tickets, and prepare a new big release of #Platypush.
The new big release may come with some breaking changes, even though I've tried to maximize backwards compatibility, and more major changes may come in the upcoming months. But overall I like how this project is growing.
The new PR brings support for general-purpose entities - you can think of Bluetooth devices, Linode/AWS instances, Zigbee lights, Z-Wave sensors, Wi-Fi switches, media players etc. all being backed by the same consistent and documented relational schema. All available in the same UI and exposing the same API. In the future you can create hooks on `EntityUpdateEvent` on top of the existing per-integration custom events, and all the payloads will have the same base format.
A lot more is on the roadmap - proper support for inter-plugin groups and scenes of entities, the possibility of configuring plugins, hooks and procedures directly from the UI through something Node-RED-inspired, and more integrations adopting the new specification (music and media players, Chromecast, smart TVs, cameras and the long tail of custom sensor plugins are next on the list). Oh, and also an official Docker image and configuration/db backup and sync, so the initial learning curve can be much smoother.
I started this project years ago as an effort to put together all of my hacky #Python scripts for #automation under the same roof. I've had plenty of hesitations along the road - mainly when #HomeAssistant became the de-facto FOSS standard for automation. And while it's hard to compete on my own with all the efforts that go into HASS, I feel like Platypush is getting more and more its own purpose.
While HASS is increasingly focused on being a hub that brings together as many proprietary integrations as possible, Platypush still has a strong culture of supporting self-hosted and DIY solutions as first-class citizens - even though it also supports several major proprietary services, like some of Google's cloud services, Alexa or Philips Hue.
And while HASS is increasingly focusing on shipping its own environment on its own devices, I feel like I've succeeded so far in keeping Platypush platform-agnostic and lightweight - it can still run with almost no overhead on an RPi0, you can even run it on Windows or macOS if you want, and on anything that comes with a decent Python interpreter.
Keeping this project as general-purpose, platform-agnostic and lightweight as possible has definitely come with its challenges, but I feel like the results are slowly starting to pay off.
#platypush #python #automation #homeassistant
I have been diving deep into the world of #Bluetooth lately while refactoring #Platypush - I basically want it to be able to detect and communicate with as many devices as possible out of the box, including all BT stuff.
And I've been really puzzled by the (often forgotten) world of BLE beacons.
While building the new UI to show the scanned devices, I have noticed TONS of beacons from devices with random MAC addresses, no name, and no known services besides exposure notification and proximity identifier.
Most of them report Apple or Google as manufacturer IDs, but there are many with no reported manufacturer at all - but they still report service UUIDs like 0xfe9f or 0xfef3, which are registered by Google. Interesting findings:
- There's only one Apple device in my house (my wife's work MacBook), no AirTags or anything, yet about 20-25 Apple Bluetooth devices get scanned in a single hour.
- The longer I leave my app on, the more new devices it detects. Even assuming that there are other devices in my neighbors' apartments, Bluetooth usually doesn't cover distances >10m. So I'd expect to see max ~10-20 devices, taking all the smartphones, laptops, Chromecasts, smart TVs etc. into account, and the number should become stable at some point. Instead, we're talking about ~50-100 devices scanned within 2 hours, and the numbers keep going up the longer I leave the process active. So it seems that some devices keep generating new MAC addresses.
I couldn't find much online when searching for some of those service UUIDs, except that they are used by Apple's and Google's BLE beacon protocols. Of course I have a vague idea of how Google and Apple may be using this technology, but are there more insights on the protocols and what's been exchanged?
And, most importantly, is there a documented way of excluding this beacon spam from my scanner? Filtering on manufacturer doesn't suffice, since many of these devices have no registered manufacturers, and I'd need to have an always up-to-date list of whatever GATT UUIDs they register to be able to reliably exclude them on a service basis.
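For now the best I've come up with is a heuristic blacklist - something like the sketch below. The UUID list is my own assumption (0xFD6F is the Exposure Notification service; 0xFE9F and 0xFEF3 are registered to Google) and would need constant curation, which is exactly the problem:

```python
# Hypothetical spam heuristic: drop advertisements that carry no name and
# expose nothing but well-known beacon/tracking service UUIDs.
BEACON_SERVICE_UUIDS = {
    '0000fd6f-0000-1000-8000-00805f9b34fb',  # Exposure Notification
    '0000fe9f-0000-1000-8000-00805f9b34fb',  # Google
    '0000fef3-0000-1000-8000-00805f9b34fb',  # Google
}

def looks_like_beacon_spam(name, service_uuids):
    """Return True if a scanned device has no advertised name and only
    exposes services from the known beacon blacklist."""
    uuids = {u.lower() for u in service_uuids}
    return not name and bool(uuids) and uuids <= BEACON_SERVICE_UUIDS
```

The obvious weakness: the moment Apple or Google register a new service UUID, the spam leaks through again until the list is updated.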
Platypush's UI has been designed to easily handle ~100 smart devices in a single view, but if Google and Apple flood me with hundreds of spam devices a day, each of them pushing several messages per second, then the performance of the app is badly impacted...
As part of my refactor of #Platypush, I've been planning to publish official #Docker images to the Hub. It sounds like it's a good chance to start taking a look at the #Gitea container registry - and hope that the bandwidth requirements won't kill my server https://blog.alexellis.io/docker-is-deleting-open-source-images/
I've finally managed to iron most of the remaining wrinkles in the core ORM model for #Platypush. What a pain.
I've decided to avoid ORMs if possible in the future, and write my own query templates rather than working around somebody else's workarounds.
#SQLAlchemy is great, they said. It allows you to map your relational model to objects, and do all the mapping magic behind the scenes, they said.
As long as you don't cache the objects that you read - you have to write your own caching layer and keep it synchronized with SQLAlchemy, preferably through events. Everything should preferably happen within a scoped session, or an (ugly and bug-prone) static global session.
As long as you don't access objects with relations outside of a session - you never know what the eager/lazy logic decided to fetch, and, unless you want to deal with tens of pages of documentation, you'd better write your own logic that retrieves and deep copies everything.
As long as you don't access the same objects from different threads - unless you implement your own locking logic, your own algorithm to handle concurrent partial updates, create your own deep copies of the objects before passing them around, create your own JSON serializer/deserializer, and/or create your own producer/consumer architecture to make sure that only one thread processes all the updates.
In other words, an ORM works for you as long as you don't build a complex modern application. If you do, then you'll probably have to write your own ORM around it anyway.
What problem were we trying to solve with ORMs again? And when exactly did we decide that the SQL layer was too ugly and complex to use directly, so we had to wrap it into a big object-oriented model that introduces much more complexity than the original model?
How much energy have we invested over the past decades trying to solve this problem of fitting a circular peg into a square hole?
Was it worth making our lives so complicated with ORMs?
It's been more than a decade since I started toying with Hibernate, and I'm still facing the same issues, whether I work with SQLAlchemy, some Spring-based persistence layer or the Django ORM: detached sessions, random constraint violations, lazy/eager evaluation hell, odd autoflush/autocommit policies - and let's not even get started with caching.
Sure, an ORM can work for you. As long as you always use the objects within the same thread. Preferably within the same session, unless you go the extra mile with some session attachment wizardry. And as long as you don't go too wild with recursive relationships, because you never know which ones will be lazy loaded and break your application as soon as you try to access a nested attribute. And don't even try to use the ORM objects as your application model - you'll end up having to pass session objects around or copying objects every time you have to share them across threads.
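A minimal example of the lazy-loading footgun I mean, sketched with the SQLAlchemy 1.4+/2.0 API (table names and the parent/child model are obviously made up):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship
from sqlalchemy.orm.exc import DetachedInstanceError

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child')  # lazy-loaded by default

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Parent(id=1, children=[Child()]))
    session.commit()

with Session(engine) as session:
    parent = session.get(Parent, 1)

# The session is now closed and `children` was never loaded, so the lazy
# loader fires on a detached instance and blows up.
error = None
try:
    parent.children
except DetachedInstanceError as e:
    error = e
```

Nothing in `parent.children` hints that it may trigger a query - or fail to - depending on where in the session lifecycle you happen to read it.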
In other words, an ORM can work for you as long as you stick to the hello world and don't develop a modern complex application.
Every time I've had to work with an ORM in a complex application, I've ALWAYS somehow had to implement my own ORM on top - to manage deep copies, serialization, concurrent access, caching, the flattening of hierarchies into records, etc.
I've spent most of the past year working on a big refactor for #Platypush to persist all the entities from all the integrations in the same consistent format, on top of a DBMS used "the right way". Most of this time has actually been spent fighting session and caching management errors on SQLAlchemy.
I've really reached the point where I'm asking myself what problem we were even supposed to solve. Hide away the supposed complexity of an underlying DBMS by wrapping tables into objects? Sure, having relational tables completely mapped to classes that you can use as the foundation of your application's data model is a compelling argument. But I have NEVER been in a situation where I could also use the ORM classes as the model of my app without having to introduce some extra framework-specific wizardry (in the best case), or almost write my own ORM on top (in the worst one).
Paradoxically, the best time I've had working with an application that used a DBMS (besides the JDBC nostalgia) was when using frameworks like MyBatis - or even in the old times with Perl's DBI. They don't try to hide the relational schema away: it's there in plain sight. You just write the queries and statements that you want to run in an XML file, give each of them a name and a list of parameters, and invoke the statements by name in your code when you need the records. When you run the MyBatis call, you query the database. When you commit, you commit. Otherwise, nothing talks to the database behind your back. After you've received the response object, you can do whatever you like with it.
No hidden session management. No arbitrary column/attribute mapping conventions. No caching waiting to mess up your data. No lazy/eager evaluation policies to understand in order to keep DetachedInstanceErrors from blowing up in your face. No fear of reusing objects outside of sessions. No autoflush/autocommit policies that put you in a position where you never know what's being persisted.
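In Python terms, the MyBatis philosophy boils down to something like this - sketched with the stdlib's sqlite3 (MyBatis keeps the statements in XML rather than a dict, but the principle is the same):

```python
import sqlite3

# Named statements kept in one place, in plain SQL - no hidden mapping magic.
QUERIES = {
    'create_users': 'CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)',
    'insert_user': 'INSERT INTO users (name) VALUES (?)',
    'get_user_by_name': 'SELECT id, name FROM users WHERE name = ?',
}

def run(conn, name, *params):
    """Invoke a statement by name. When you run it, you query the database;
    nothing else talks to the database behind your back."""
    return conn.execute(QUERIES[name], params)

conn = sqlite3.connect(':memory:')
run(conn, 'create_users')
run(conn, 'insert_user', 'alice')
conn.commit()  # when you commit, you commit
row = run(conn, 'get_user_by_name', 'alice').fetchone()
# `row` is a plain tuple: no session attached, pass it to any thread you like
```

No sessions, no lazy loading, no identity map - the result is just data.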
I feel like we've created a whole new layer to hide the SQL layer, and it's way more complex and likely to break than the original layer we wanted to abstract. A lot of resources have gone into ORM development over the years, tons of projects have had to deal with their oddities, and a lot of engineering effort has been wasted trying to build solid software on top of them. Is it maybe time to admit that we've tried to solve the wrong problem all along? Engineers often call it the "object-relational impedance mismatch" problem; to me it sounds like we've been trying to fit a square peg into a circular hole all this time.
"Let's go and refactor a ~100K LoC project to embed a more general-purpose way for all the plugins to model their entities...it shouldn't take that long, right?"
@tyil it's a bit more nuanced in this case. If you buy their box then yes, by default it comes with cloud processing for speech (that's because on-device processing has historically been an issue for them, given that their previous boxes shipped with something as powerful as a RPi3).
But the project itself is open and modular, and it supports multiple backends both for speech-to-text (https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/stt/__init__.py) and text-to-speech (https://github.com/MycroftAI/mycroft-core/blob/e7ddd5125650ff9dfcee849a396fe5ebb3eeebb5/mycroft/tts/tts.py). So if you want to tinker a bit nothing prevents you from using Mycroft to deploy a fully self-hosted solution with on-device processing that uses e.g. Mozilla DeepSpeech for STT and their local Mimic model for TTS.
I like its modular approach because it also makes it possible to cherry-pick which modules you want to use in other projects. For example, I've built a TTS integration for #Platypush that uses their on-device Mimic3 engine, but it doesn't require installing a full-blown Mycroft environment.
A question to the (open-source) web developers out there: suppose that you have a monorepo where you have both a #Python #Flask-based web server (under myapp) and a #Vue frontend (under myapp/something/webapp/src).
Running the usual npm commands in the directory of the webapp builds all the static files under dist, as usual.
How do you package and distribute your app in a way that is both user-friendly (the user shouldn't do an npm install / npm run build unless they really want to install from sources), clean for the codebase, and easy to maintain?
I'm asking because this is exactly the situation where #Platypush currently is.
As of now, the frontend's dist directory is part of the repo, so I build it / commit it whenever there's a change on the frontend.
This is nice for the user because just typing a `pip install platypush` does everything, with no extra requirements for node/npm and no extra time spent building the frontend files.
However, it's very ugly for the codebase. I basically have a big folder of uglified JS and static resources stored permanently in my repo, and that has to be rebuilt and committed every time I change the FE (which makes commits way less readable).
I've thought of several other solutions, but I'm not really happy with any of them:
1. `pip install` also does `npm install`. Of course, this would keep the codebase way cleaner, but it means that the user now also needs node+npm (besides the Python dependencies), and the build/install time will take ~1 minute more.
2. Split the FE into a separate project and import it via git submodule. Cleaner, and it would also clearly separate the BE from the FE, but the FE still needs to be built at some point - so I suppose that, before a new package is released, there should be a pipeline that builds the dist files for the FE and puts the dist folder into the released Python package. Also, this would only work if the user installs the latest stable version via `pip install`. Installing from the cloned repository would still mean building the FE.
3. Have a pipeline in place that, after every commit to main, also builds the FE files and uploads them somewhere, tagged either with the release number or the SHA hash of the associated commit. Upon installation, the script just downloads and unpacks the version of the FE dist bundle associated with the target application version. Neat from a user perspective, easy to scale (and even to wrap in a Dockerfile), but quite expensive in terms of computation (an npm install/build needs to run on every commit, and my webapp is quite large) and storage (my dist folder is ~20 MB, and storing a dist archive per commit can easily get into the domain of several GBs).
Is there any solution you prefer? Or maybe a better way of packaging and distributing "mixed" Flask+Vue apps that I haven't considered?
#python #flask #vue #platypush
A great Python tool to create reposting bots from TT, RSS...
Great for onboarding new people to the Fediverse.
#tips #tools #mastoadmin #platypush
I tend to avoid boycotts because I don't believe that, in most of the cases, individual actions solve anything at scale.
But, in the case of #RaspberryPi, I regretfully feel compelled to move towards that stance. Today's spycop case has definitely been the last nail in the coffin - not so much for the fact itself as for their unprofessional reaction: I was really shocked to read words as strong and arguments as weak as those of an enraged teenager coming from the official account of such a respected company. Blocking, mocking and bullying those who disagree or criticize in a civilized way goes against everything our community believes in.
But I'm not dropping RPi just because of this. Unfortunately this wasn't just a random incident. It's been the continuation of a disturbing trend in the way RPi manages their communications. I've seen them mock the makers' community as "people who only build useless gadgets for their homes", implying that they are mostly deaf to our stances because we are not the ones generating a stable revenue stream for them.
This is the toxic alpha/technobro arrogance that I would have expected from Steve Ballmer or Elon Musk - not from someone who built their own success on inclusiveness, fairness, and a vast community of enthusiastic makers.
Therefore I won't buy any more of their products, I'll slowly remove all the references to RPi in the documentation of #Platypush, and I'll stop recommending their products. And I'm saying this as someone who has ~20 RPis in his home, has been using their products since the first batch of the RPi1 was released, and has written a book and countless articles on them. I'm not just some random hater.
Since many people are asking for alternatives in these hours, I'll put together my little personal list of alternative brands (solely based on products that I use or have tried). Take two things into account though:
- Most of the products on this list assume that you aren't scared of installing and configuring a Linux distro, or of dealing with sparse documentation or buggy software. Unfortunately, it's hard to match the level of "newbie-readiness" that RPi has built over the years (thanks mostly to the community of makers and volunteers that they are now deriding).
- When it comes to pricing, again, it's hard to compete with RPi. Even if the RPi4 sells at >$100 nowadays, that's still considerably cheaper than the next cheapest board.
Coming to the list:
ARM-based:
- BananaPi (these are personally my favourite RPi alternative products)
- OrangePi (the default software may not be very stable though)
- Odroid (most of their products are actually good enough to run even Android or Windows)
- Asus Tinkerboard (a RPi clone that for me represented one of the best drop-in alternatives to RPi)
- Beagleboard (I'm impressed that they have managed to nail an AI-oriented board with a bunch of cores for <$200)
x86-based:
- DeskMini (a producer of miniPC for all the budgets)
- LattePanda (one of the coolest x86 boards out there)
- Intel NUC (probably the most flexible option, and many other miniPCs are based on it)
- Anything by System76 or Tuxedo (amazing products, solid Linux support, but pricey tag)
I'm personally waiting for the moment when RISC-V can really take over, but unfortunately at this stage we've still only got some PoC boards and unimpressive software support.
And I personally think that we need to build a new user-friendly alternative to the RPi: there's plenty of products out there for many of the geeks among us who know how to get Linux installed the manual way. But not many that either come with a good maintained OS pre-installed, or have something as simple as the RPiOS installer.
I'm sick of adapting the code of my #Platypush zwave module just because the guys at zwavejs2mqtt (or Z-WaveJS UI, or whatever it is called now) think that #HomeAssistant is their only client, and as long as HA is onboard they can introduce all the breaking changes they like.
I've had to adapt against their breaking changes three times in the past year alone, and the latest one is forcing me to completely rewrite the way the module generates and processes events. This is exactly the opposite of how a protocol library should work.
I've reached the point that I'm considering dropping this sh*t for good and writing my own zwave library.
The time that FOSS developers spend fixing something that used to work before, and was broken by a breaking change, is wasted time.
And it enrages me when somebody tells me "couldn't you just check the changelog?". Holy crap, today's software relies on tens (if not hundreds, looking at you JS) of dependencies: are we expected to entirely give up our lives and productive time to checking hundreds of changelogs per week?
Just use your brain and DON'T INTRODUCE FUCKING BREAKING CHANGES!! Unless it's really really required! You never know who uses your software downstream, you never know how many things you're going to break, you shouldn't expect tens or hundreds of downstream developers to be on top of every single change you make and spend days of their free time to adapt to them!
Platypush is built to support multiple versions of multiple dependencies and frameworks, and the policy is to always avoid breaking changes. Same for other projects that I maintain, like mopidy-tidal, which works with multiple versions of the Tidal API. Sure, it takes me more work and sometimes it makes the code uglier, but minimizing the time that END USERS spend debugging, maintaining and fixing issues should ALWAYS be the priority of a well-disciplined developer! I don't give a fuck if a couple of months ago you thought that hexId would have been a good name for an attribute and now you like nodeId more! Don't change your mind and force hundreds to change it with you!