Narrator: It did not "just work."
Everyone who has ever worked on a #DistributedSystem is already banging their head on their desk, I know. But we never even got that far, because of two challenges:
1. There was no settled orchestration layer for #Docker at that time.
2. Old habits die hard. If you keep the same terrible patterns _inside of docker_, you haven't changed your underlying situation much, and there were discussions on things like "how do we ssh in" and how to update the version of Tomcat.
Is it possible to have #python #celery tasks be delivered and processed in (guaranteed) order using #redis ? #distributedsystem #messagequeues
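Answering my own question: as far as I know, the only way to get guaranteed ordering out of Celery is to funnel everything through a single queue into a single worker started with `--concurrency=1` and `worker_prefetch_multiplier=1`; any parallelism or prefetching can reorder tasks. A minimal stdlib sketch of that single-consumer principle (no broker required, names are mine):

```python
import queue
import threading

# Sketch of the single-consumer ordering guarantee: one FIFO queue
# drained by exactly one worker preserves enqueue order. This mirrors
# a Celery worker with concurrency=1 and prefetch_multiplier=1.

task_queue: "queue.Queue[int | None]" = queue.Queue()
processed: list[int] = []

def worker() -> None:
    # Single consumer: tasks are handled strictly in enqueue order.
    while True:
        item = task_queue.get()
        if item is None:  # sentinel: stop the worker
            break
        processed.append(item)

t = threading.Thread(target=worker)
t.start()

for i in range(5):
    task_queue.put(i)  # producer enqueues tasks 0..4 in order
task_queue.put(None)
t.join()

print(processed)  # FIFO order preserved: [0, 1, 2, 3, 4]
```

The moment you add a second consumer (or raise concurrency), two tasks can complete out of order, which is exactly why ordering and throughput trade off here.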
I've seen a lot of confusion on Twitter regarding the risk for it going down permanently.
Thing is: #Twitter is a huge #DistributedSystem. Systems like these have a near infinite number of possible states they can be in - some of them well understood and stable but most unknown and varying degrees of broken.
People are sometimes afraid of #p2p's complexity because it's a #distributedsystem.
But client-server is a many-to-one relationship, so servers are naturally bottlenecks. Scaling them quickly becomes a distributed system of shared state with complexity equal to or greater than any p2p design.
My favorite distributed systems essay, Fred Hebert's "Queues Don't Fix Overload": https://ferd.ca/queues-don-t-fix-overload.html When working on a #DistributedSystem team, I usually link this a couple of times a week. The key ideas:
1. Your system has a bottleneck somewhere.
2. Putting queues in front of the bottleneck means you crash less often, but harder.
3. The bottleneck needs to be able to slow down system input. "Back-pressure" and load shedding are your friends!
Find that bottleneck.
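Point 3 is easy to sketch: a bounded buffer that refuses new work when full, so producers feel the overload instead of an unbounded queue silently absorbing it until everything crashes at once. A minimal stdlib illustration (the names are mine, not from the essay):

```python
import queue

# Load shedding with a bounded queue: cap the buffer in front of the
# bottleneck and reject work once it is full, so callers see the
# back-pressure immediately instead of an ever-growing backlog.

incoming: "queue.Queue[str]" = queue.Queue(maxsize=3)  # the bottleneck's buffer

def submit(task: str) -> bool:
    """Try to enqueue; shed load (return False) when the buffer is full."""
    try:
        incoming.put_nowait(task)
        return True
    except queue.Full:
        return False  # caller can retry later, degrade, or drop the request

accepted = [t for t in ("a", "b", "c", "d", "e") if submit(t)]
print(accepted)  # only the first 3 fit: ['a', 'b', 'c']
```

The point of `put_nowait` here is that rejection is instant and explicit; swapping it for a blocking `put` would turn the same buffer into back-pressure (slow the producer down) instead of load shedding (drop the work).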