Cory Doctorow @pluralistic "on how a poisoned #ML system could be abused in ways that evade detection": https://pluralistic.net/2022/10/21/let-me-summarize/#i-read-the-abstract #LLM #seq2seq #metaBackdoor #machineLearning #ai #backdoors #modelSpinning #dataGovernance @dataGovernance #AIEthics #ethicalAI #retrieval #dataMining #dataDon #infoSec
This is not to say, however, that I think these models are useless. I think the interesting question is how to integrate these models into systems that express a particular meaning, à la data-to-text #NaturalLanguageGeneration. Whether this involves #PromptEngineering, integrating them into the decoder of #seq2seq models, or some other, cleverer application remains to be seen. I am looking forward to seeing how #LLMs get used for #NLG going forward.
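As a minimal sketch of the prompt-engineering route: one way to steer a model toward a particular meaning is to serialize a structured record into the prompt and instruct the model to verbalize exactly those facts. The record fields and prompt wording below are illustrative assumptions, not any real system's API.

```python
# Hypothetical sketch: constraining generation toward a specific meaning
# by embedding structured data in the prompt (data-to-text via prompting).

def build_data_to_text_prompt(record):
    """Serialize an attribute-value record into a prompt asking the model
    to state exactly these facts and nothing else."""
    facts = "\n".join(f"- {key}: {value}" for key, value in record.items())
    return (
        "Write one sentence that states all of the following facts, "
        "adding no information that is not listed:\n"
        f"{facts}\n"
        "Sentence:"
    )

# Toy record, purely for illustration.
record = {"team": "Red Sox", "score": "5-3", "opponent": "Yankees"}
prompt = build_data_to_text_prompt(record)
print(prompt)
```

The interesting open question is whether such constraints are reliable enough, or whether tighter integration with the decoder is needed.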
Around 2015 and 2016 we saw sequence-to-sequence (#seq2seq) models applied to data-to-text #NLG for the first time. These models were trained end-to-end and were very exciting because they raised the prospect of reducing the amount of hand-crafted #GrammarEngineering one would have to do to create a #NaturalLanguageGeneration system.
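The end-to-end framing boils down to casting the structured input as a token sequence so a single encoder-decoder model can map it to text, with no hand-written grammar in between. A minimal sketch of that linearization step (the field names, delimiter tokens, and example record are illustrative assumptions):

```python
# Sketch of the data-to-text-as-seq2seq framing: flatten an
# attribute-value record into a source token sequence; the reference
# text becomes the target sequence for an encoder-decoder model.

def linearize(record):
    """Flatten an attribute-value record into source tokens,
    marking each field with a delimiter token like <name>."""
    tokens = []
    for attr, value in record.items():
        tokens.append(f"<{attr}>")
        tokens.extend(str(value).split())
    return tokens

# Toy training pair, purely for illustration.
record = {"name": "Loch Fyne", "food": "seafood", "area": "riverside"}
source = linearize(record)
target = "Loch Fyne serves seafood by the riverside .".split()

print(source)
print(target)
```

From here, any standard encoder-decoder (RNN with attention in the 2015-2016 era, Transformer now) can be trained on (source, target) pairs directly.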