@siderea I think what can be done locally would be a good start to reducing things like "don't show me the same boost more than once".
I'm uncertain whether what can be done locally with reasonable cache sizes would meaningfully change the timeline for anyone whose follows produce more than a couple hundred posts + boosts in a day.
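As a rough illustration of why cache size is the sticking point, here is a back-of-envelope estimate. Every figure below is an assumption for illustration, not a measured number:

```python
# Back-of-envelope local-cache estimate; all figures are illustrative assumptions.
follows = 500                        # "more than a couple hundred" follows
statuses_per_follow_per_day = 100    # posts + boosts per follow per day
bytes_per_status = 5_000             # ~5 KB of status JSON, media excluded
days_of_history = 7                  # history needed to rank and de-duplicate

cache_bytes = (follows * statuses_per_follow_per_day
               * bytes_per_status * days_of_history)
print(f"{cache_bytes / 1e9:.2f} GB")  # 1.75 GB
```

Even with media excluded and modest per-status sizes, a week of history for a few hundred active follows lands in the gigabyte range.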
And the complexity escalates quickly if we need to carefully manage local cache sizes to support even simple models. Probably the most that's feasible (while still complex) would be thresholds or weighting for "number of posts + boosts to see from each follow" combined with "rank by reaction velocity since posting", which lets a client recognize when it needs to request more from the server to fill that queue. But again, you can see how for more than a couple hundred follows this rapidly hits gigabytes of local caching and requires list preparation ON the server.
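A minimal sketch of that "per-follow cap + rank by reaction velocity" model. The `Post` fields, the velocity metric, and all names here are assumptions for illustration, not part of any Mastodon API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str        # account the post or boost came from
    created_at: float  # Unix timestamp when it was posted
    reactions: int     # favourites + boosts seen so far

def reaction_velocity(post: Post, now: float) -> float:
    """Reactions per hour since posting (hypothetical ranking metric)."""
    age_hours = max((now - post.created_at) / 3600.0, 0.01)
    return post.reactions / age_hours

def rank_timeline(posts: list[Post], per_follow_cap: int, now: float) -> list[Post]:
    """Keep at most `per_follow_cap` items per follow, ranked by velocity."""
    ranked = sorted(posts, key=lambda p: reaction_velocity(p, now), reverse=True)
    seen: dict[str, int] = {}
    out: list[Post] = []
    for p in ranked:
        if seen.get(p.author, 0) < per_follow_cap:
            seen[p.author] = seen.get(p.author, 0) + 1
            out.append(p)
    return out
```

A client could compare `len(out)` against its target queue size and request more statuses from the server whenever it falls short; the hard part is that ranking honestly requires all candidate posts, and their reaction counts, to be cached locally first.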
For comparison: few would consider Netflix recommendations to be toxic, and yet the list of rows and each horizontal row of movies must be prepared and cached on Netflix servers, ready for a client to request it. That's on the order of 10k titles, each with a short list of tags: much less data than Mastodon handles.
Does that give you an idea of the technical challenges?
#MastodonRanking
#AlgorithmDrivenTimelines
#ChronologicalTimelines
#MastodonDesign
#RecommendationEngines
Welcome 🐦 refugees to the best timeline.
#ChronologicalTimelines of #Mastodon
#AlgorithmFree #Web3Timelines was peak #Web2 #ToxicSocialMedia divestiture #tippingpoints
Be sure to #tip your instances ppl! 🍻