The one thing that I think the RDF & semantic web folks have really missed the boat on is simple but intelligent authoring tools for contexts and schemas.
Like, I know the @w3c publishes a lot of contexts/prefixes, but I've no idea how they're authored. Guides for this always just seem to end at "you can."
Basically at its core I suspect we need to reframe the problem from:
"#JsonLD is a format that allows you to write whatever you want and the other party can read it."
to
"#JsonLD is a format that allows you to format your data in the way that the other party _wants_ it" (using published rules that, if you don't follow them, your message may be discarded).
18/18
I think (probably) so, if for nothing else than for interop, but you lose right out of the gate a lot of the flexibility that #JsonLD advocates want everyone to have.
But what you _will_ have is the ability to parse both G1 Michigan's and V.II Snail's contract requests in the same interface.
Mostly. At least until they betray you.
17/17
Because this makes keying cheaper, it encourages exactly that kind of key reuse.
But.
You are still going to lose a lot of power, flexibility, and features of #JsonLD. It isn't going to play nice with RDF-specific tooling in some cases, and it is going to be harder to work with some specs—if they can even be used.
So then the question becomes: is it worth it? Is what we are getting by going through this process even still valuable?
16/
If you do these things (and probably a few more steps that I'm forgetting, it's after 1 AM here) you end up with a few things that make parsing this _much_ easier.
Basically, you _disallow those parts of JSON-LD that make parsing complicated_
Now:
1. You know that if a context is included, nothing it says will change the other contexts you have. This makes it trivial to cache and reuse
2. We've removed most of the #JsonLD specific conceits, so most JSON tools now work out of the box
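As a sketch of what that buys you (the pinned context URL and the "discard on mismatch" rule here are just illustrative, not any real spec):

```python
import json

# Hypothetical frozen context: the only @context this protocol accepts,
# pinned by URL. Anything else is rejected rather than fetched and merged.
ALLOWED_CONTEXT = "https://www.w3.org/ns/activitystreams"

def parse_restricted(raw: str) -> dict:
    """Parse a JSON-LD document under the 'frozen context' restriction."""
    doc = json.loads(raw)
    if doc.get("@context") != ALLOWED_CONTEXT:
        raise ValueError("unknown @context; discarding message")
    # With the context pinned, plain JSON tooling is enough from here on.
    return doc

msg = parse_restricted(
    '{"@context": "https://www.w3.org/ns/activitystreams",'
    ' "type": "Note", "content": "hello"}'
)
print(msg["type"])  # Note
```

The point being: once the context can't mutate under you, the "JSON-LD" part collapses into ordinary JSON validation.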
14/
The trick with (3) is that as you decide to go in that direction you also lose a lot of the _functionality_ of #JsonLD, which a lot of these specs then depend on for building any sort of interoperability. So now you have a headache dealing with that world as well.
10/
Back to the algorithms.
The #JsonLD algorithms have, uh, let's just call it essentially an unbounded stack memory requirement as well as some pretty ugly computational complexity.
There's also a _lot_ of boilerplate that needs to be parsed each and every time when dealing with an online streaming system. The cost of this adds up relatively quickly.
Oh, and this isn't even getting into the security quagmire.
These are Serious Challenges and should not be dismissed.
8/
I have at most about a few minutes of processing time before people will expect to see these messages and messages come in a variety of formats oh and Commander Michigan can't seem to use the same one twice and…
So now I have not just a blob of data I may or may not be able to extract meaning from, I have a _lot_ of data from a _lot_ of different sources.
So we get to the #JsonLD algorithms.
6/
Note that these both contain _mostly_ the same information but in very different formats.
This is a big part of the problem that #JsonLD is supposed to help with.
So #JsonLD provides a bunch of algorithms that let you figure out what all of these things map to and then compare them against each other.
Basically each party provides a _context_ that shows a mapping of the various keys to IRIs.
So now they become:
ac://AllMind/mercenary/designation/g13…
ac://AllMind/…/callsign/Raven
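Rough sketch of that mapping step in Python (the full IRIs and the second party's key names are invented for the example):

```python
# A minimal sketch of what a context does: each party publishes a mapping
# from their local keys to shared IRIs. (These IRIs are made up for the
# #ArmoredCore6 example; real contexts map to real vocabularies.)
hw_context = {
    "designation": "ac://AllMind/mercenary/designation",
    "callsign": "ac://AllMind/mercenary/callsign",
}
redguns_context = {
    "merc_id": "ac://AllMind/mercenary/designation",
    "handle": "ac://AllMind/mercenary/callsign",
}

def expand(doc: dict, context: dict) -> dict:
    """Rewrite local keys to the IRIs the context maps them to."""
    return {context.get(key, key): value for key, value in doc.items()}

a = expand({"designation": "g13", "callsign": "Raven"}, hw_context)
b = expand({"merc_id": "g13", "handle": "Raven"}, redguns_context)
print(a == b)  # True
```

Two differently-shaped documents, one comparable result — that's the pitch.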
3/
In which I think out loud for a bit about #JsonLD, its place, and what compromises need to be made:
First, the vision of JSON-LD is essentially this (and yes I will use entirely #ArmoredCore6 references, this is me talking to myself after all :p ):
A request comes in to Handler Walter (HW) for a mission for C4-621 (Raven) to assault a dam and blow up some generators along with G4 and G5, troops under G1 Michigan, part of a squad called the "Red Guns"
G1 gives you the "lucky" callsign G13
1/
On the other hand this doesn't mean that a good type system is useless (*cough* talking to the javascript programmers here, or at least the ones that write specs *cough*).
Like you can remove a huge amount of the error handling—and thus cyclomatic complexity and thus unit testing—in #JsonLD by just… introducing an IRI type. Even just as an intermediate type that you get rid of in later stages.
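Something like this, as a sketch (using a loose scheme check as a stand-in for real IRI validation):

```python
from urllib.parse import urlparse

class IRI(str):
    """Validate once at the boundary; downstream code never re-checks."""
    def __new__(cls, value: str) -> "IRI":
        # Loose check: a real implementation would validate per RFC 3987.
        if not urlparse(value).scheme:
            raise ValueError(f"not an IRI: {value!r}")
        return super().__new__(cls, value)

def resolve(term: IRI) -> str:
    # No defensive error handling needed here: the type guarantees validity.
    return term

print(resolve(IRI("http://joinmastodon.org/ns#Emoji")))
```

Every function that takes an `IRI` instead of a bare string is one fewer branch to write, and one fewer branch to unit test.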
With a robust type system you can get _very_ advanced. But it doesn't eliminate the need for unit testing.
I've been hearing people talk about #jsonld a lot, so I finally Googled it. I must be missing something because I don't see the point. Best I can tell it's just data with links to schema.org to tell you what type of data it is. It's just a less powerful Swagger doc? #webDev #activityPub
There's an ongoing debate about this. #JSONLD and its elusive benefits for the #ActivityPub protocol.
But moreover there's a whole range of issues with AS/AP that were never tackled well, while projects delivered solutions in ways that ever increased tech debt.
#SocialHub, #SocialCG and @fedidevs are where improvements are discussed. And the #FEP:
https://codeberg.org/fediverse/fep
On Matrix, talk is now about a "minimum AP" and more clarity in the specs.
#jsonld #activitypub #socialhub #socialcg #FEP
W3C’s dark mode style leaves something to be desired.
(As an aside to anyone who knows #jsonld: is framing what I want for extracting #activitypub data into a Python dict with predictable keys?)
Learning a bit about #activitypub and #jsonld for fun, and a better understanding of #mastodon .
#activitypub #jsonld #mastodon
This is part of why concepts like #JsonLD Framing ( https://www.w3.org/TR/json-ld11-framing/ ) exist, but these sorts of tools come with substantive performance penalties and increases in complexity.
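For a feel of what framing buys you, here's a toy frame-matcher; real JSON-LD Framing is far richer (and correspondingly more expensive):

```python
# A toy version of what Framing does: given a flattened set of nodes, pull
# out the ones matching a frame, giving you predictable keys to work with.
nodes = [
    {"@id": "_:a", "@type": "Note", "content": "hello"},
    {"@id": "_:b", "@type": "Person", "name": "Raven"},
]

def frame(nodes: list, match: dict) -> list:
    """Return nodes whose properties include everything in `match`."""
    return [n for n in nodes if all(n.get(k) == v for k, v in match.items())]

print(frame(nodes, {"@type": "Note"}))
# [{'@id': '_:a', '@type': 'Note', 'content': 'hello'}]
```

The real algorithm also handles embedding, defaults, wildcards, and so on — which is exactly where the performance and complexity costs come from.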
But we can absolutely 100% address this in something like a #FEP.
I don't even think it would be especially hard.
Something I'm chewing on.
When a protocol supports multiple presentations it is usually because there is a fundamentally different requirement that they are trying to hit.
For example, supporting JSON and protobuf.
Technically #JsonLD supports this same form of varied presentation, but it _also_ supports a functionally infinite number of possible encodings that come out to the exact same thing.
If your context includes something like "toot": "http://joinmastodon.org/ns#", it could just as easily substitute "foo" for "toot".
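In miniature (hand-rolled prefix expansion, not a real JSON-LD processor):

```python
# Two contexts that spell the prefix differently but expand identically —
# the "functionally infinite encodings" problem in one small example.
def expand(doc: dict, context: dict) -> dict:
    """Expand 'prefix:rest' keys using the context's prefix mappings."""
    out = {}
    for key, value in doc.items():
        prefix, sep, rest = key.partition(":")
        if sep and prefix in context:
            key = context[prefix] + rest
        out[key] = value
    return out

ctx_toot = {"toot": "http://joinmastodon.org/ns#"}
ctx_foo = {"foo": "http://joinmastodon.org/ns#"}

a = expand({"toot:Emoji": True}, ctx_toot)
b = expand({"foo:Emoji": True}, ctx_foo)
print(a == b)  # True
```

Two wire formats, byte-for-byte different, semantically identical — and your parser has to treat them as the same thing.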
Doing some writing on #JsonLD and realizing that JSON-LD is really doing two things.
1. It is providing a mechanism for converting back and forth from RDF.
2. It provides a way of describing a syntactic presentation of the data that is distinct from the semantic interpretation. Technically it provides _several different_ syntactic presentations that lead to the same semantic interpretation.
But for most protocols (1) isn't important and (2) is usually standardized on a single presentation.
Side note: I don't actually think this is strictly worthwhile. To me this is an academic exercise that may yield fruit, but it's also a low-stakes game for me.
But there are people who are true believers. If that's you then a lot of people would love to see serious work in improving the performance and safety of these.