2/ Since users who turn to #CodeGeneration for a particular API are usually relatively inexperienced with that API, these inaccuracies can have grave consequences for the robustness and reliability of the resulting software.
(How would #CodeLlama fare?)
#codegeneration #codellama #generativeAI #papers #nlp #nlproc #softwaredevelopment
Does anyone have any good #resources on learning #codegeneration?
I feel like everywhere I look for #compiler building resources it's just the trivial parsing and lexing parts
#resources #codegeneration #compiler #programming #programminglanguages
Have been doing some code generation recently and discovered that Java limits a single method to 64 KiB of bytecode. It’s been a while since I’ve seen a 16-bit-derived limit in the wild.
#Java #CodeGeneration
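A quick sketch of what trips it (my own toy generator; `make_big_method` is a made-up helper, not a real API): the class-file format effectively caps a method's code at 2^16 - 1 = 65535 bytes, so a generator that inlines too much makes javac bail out with "error: code too large".

```python
# Toy generator (illustration only): emit a Java class whose single
# method would exceed the JVM's 65535-byte per-method bytecode cap.
# Each assignment compiles to a handful of bytes, so ~20k statements
# is comfortably past the limit.
def make_big_method(n_statements: int) -> str:
    body = "\n".join(f"        x += {i};" for i in range(n_statements))
    return (
        "public class Big {\n"
        "    static long x;\n"
        "    static void big() {\n"
        f"{body}\n"
        "    }\n"
        "}\n"
    )

print(make_big_method(20_000).count("+="))  # 20000 generated statements
print(2 ** 16 - 1)                          # the 16-bit cap: 65535
```

The usual workaround for generated code is to split the output across several smaller methods.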
Five Reasons Why You Should be Monitoring These Four Artificial Intelligence Cases
The GitHub case looks to be settling - the other three are steaming right along.
I'll link the three cases in the comments.
#Law #LawFedi #AI #LegalTech #Copyright #MachineLearning #Litigation #ImageGeneration #CodeGeneration #ThompsonReuters
10/end
[4] https://www.linkedin.com/posts/benjaminhan_reasoning-gpt-gpt4-activity-7060428182910373888-JnGQ
[5] Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab, and Bernhard Schölkopf. 2023. Can Large Language Models Infer Causation from Correlation? http://arxiv.org/abs/2306.05836
#Paper #NLP #NLProc #CodeGeneration #Causation #CausalReasoning #reasoning #research
9/
[2] Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, and Shay B. Cohen. 2023. The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python. http://arxiv.org/abs/2305.15507
[3] Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan. 2023. Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. http://arxiv.org/abs/2305.00050
#Paper #NLP #NLProc #CodeGeneration #Causation #CausalReasoning #reasoning #research
8/
[1] Xiaojuan Tang, Zilong Zheng, Jiaqi Li, Fanxu Meng, Song-Chun Zhu, Yitao Liang, and Muhan Zhang. 2023. Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners. http://arxiv.org/abs/2305.14825
#Paper #NLP #NLProc #CodeGeneration #Causation #CausalReasoning #reasoning #research
7/
The results? Both #GPT4 and #Alpaca perform worse than BART fine-tuned on MNLI, and not much better than a uniform random baseline (screenshot).
#Paper #NLP #NLProc #CodeGeneration #Causation #CausalReasoning #reasoning #research
4/
The same tendency is borne out by another paper that tests code-generating LLMs when function names are *swapped* in the input [2] (screenshot 1). Not only did almost all models fail completely, but most of them also exhibited an “inverse scaling” effect: the larger the model, the worse it gets (screenshot 2).
#Paper #NLP #NLProc #CodeGeneration #Causation #CausalReasoning #reasoning #research
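The probe is easy to reproduce in miniature (my own toy version, not the paper's code):

```python
# Swap two builtins up front, then see which convention later code follows.
len, print = print, len  # from here on, the names are swapped

assert print([1, 2, 3]) == 3    # `print` is now the length function
assert len("hello") is None     # `len` now writes to stdout, returns None

# An LLM asked to complete code under a prompt like this typically
# ignores the declared swap and uses the names with their usual
# meanings -- and, per [2], larger models do so more often.
```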
3/
The end result? LLMs perform much worse on *symbolic* reasoning (screenshot), suggesting they lean heavily on the semantics of the words involved rather than truly understanding and following reasoning patterns.
#Paper #NLP #NLProc #CodeGeneration #Causation #CausalReasoning #reasoning #research
2/
They use a symbolic dataset and a semantic dataset to test models’ abilities in memorization and reasoning (screenshot 1). For each dataset they create a corresponding one in the other modality, e.g., replacing the natural-language labels for relations and entities with abstract symbols to create a symbolic version of a semantic dataset (screenshot 2).
#Paper #NLP #NLProc #CodeGeneration #Causation #CausalReasoning #reasoning #research
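In miniature, that symbolization step looks something like this (my own toy sketch, not the paper's code):

```python
# Replace natural-language entity and relation names with abstract
# symbols (e0, e1, ...) so a model can't lean on word semantics.
symbols = {}

def symbolize(token: str) -> str:
    if token in {"if", "then", "X", "Y"}:  # keep keywords and variables
        return token
    if token not in symbols:
        symbols[token] = f"e{len(symbols)}"
    return symbols[token]

fact = ("Alice", "is_mother_of", "Bob")
rule = "if X is_mother_of Y then X is_parent_of Y"

print(tuple(symbolize(t) for t in fact))             # ('e0', 'e1', 'e2')
print(" ".join(symbolize(t) for t in rule.split()))  # if X e1 Y then X e3 Y
```

The mapping is consistent across facts and rules, so the logical structure survives while the lexical cues disappear.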
1/
When performing reasoning or generating code, do #LLMs really understand what they’re doing, or do they just memorize? Several new results seem to have painted a not-so-rosy picture.
The authors in [1] are interested in testing LLMs on “semantic” vs. “symbolic” reasoning: the former involves reasoning with language-like input, and the latter is reasoning with abstract symbols.
#Paper #NLP #NLProc #CodeGeneration #Causation #CausalReasoning #reasoning #research
I think this is interesting... 🧐 What do you think about maximizing code generation with ChatGPT and navigating its limitations?
💥 Unleashing the power of ChatGPT in code generation! 💻✨ Discover how to verify its output, break tasks into smaller steps, and provide feedback to enhance accuracy. 🚀💡
Join the conversation and share your thoughts! 🗣️💭
🍃 #ChatGPT #CodeGeneration #LLMs
🌐 Source: https://go.digitalengineer.io/FX
Huh. #ReplIt just announced that they’ve open sourced their #LLM for code completion under the #CreativeCommons license. That’s a pretty big deal
https://twitter.com/pirroh/status/1653586734641471490
Might be the first model that I actually download and use locally.
#replit #llm #creativecommons #ai #codegeneration
Wow! #GoogleBard is amazing! #CodeGeneration and #Debugging made easy with a few simple commands. #GoogleColab is a great platform to export code and make coding even easier. Big shout out to #Google for making programming accessible to everyone! #CodingLifeMadeEasier #ProgrammingForEveryone http://www.techmeme.com/230421/p7#a230421p7
Have learned a lot about Python's abstract syntax trees lately. Wish I'd heard about them earlier, they're so useful for code generation.
If everything goes well, my node editor may "soon" be able to generate not only basic panda3d-related scripts but any kind of Python script, without actually writing a single line of code!
#python #ast #panda3d #codegeneration #foss #opensource
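For anyone curious, the standard-library `ast` module is enough to get started; a minimal sketch (my own toy example, nothing to do with the node editor above):

```python
import ast

# Parse a small template, rewrite it programmatically, then turn the
# modified tree back into source -- the core loop of AST-based codegen.
tree = ast.parse('def greet(name):\n    return "Hello, " + name\n')

class Renamer(ast.NodeTransformer):
    """Rename every function definition -- a minimal transformation."""
    def visit_FunctionDef(self, node):
        node.name = "salute"
        return node

tree = ast.fix_missing_locations(Renamer().visit(tree))

print(ast.unparse(tree))  # ast.unparse needs Python 3.9+
namespace = {}
exec(compile(tree, "<generated>", "exec"), namespace)
print(namespace["salute"]("world"))  # Hello, world
```

The same pattern scales from renaming a function to assembling whole scripts node by node.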
Now available in the #VisualStudioMarketplace #CodeFactory for #VisualStudio is a free open-source tool allowing users to author custom #DotNet #CodeGeneration commands.
https://marketplace.visualstudio.com/items?itemName=CodeFactoryLLC.CFRTVS2022
From #API specifications to code with OpenAPI: generating client and server source code.
Last 'piece' of the year diving into what I presented at #apidays Paris. The role of #OpenAPI, the power of #CodeGeneration, the way to manage SDKs and deliver valuable #DeveloperExperience.
https://medium.com/geekculture/from-api-specifications-to-code-with-openapi-76d13c12203b
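For a concrete taste, here's a minimal, made-up OpenAPI 3.0 fragment of the kind such generators consume (every name in it is invented for illustration):

```yaml
# minimal-petstore.yaml -- hypothetical spec fragment for illustration
openapi: "3.0.3"
info:
  title: Tiny Pet API
  version: "1.0.0"
paths:
  /pets/{petId}:
    get:
      operationId: getPetById        # becomes the generated method name
      parameters:
        - name: petId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: A single pet
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  name: { type: string }
```

From here, something like `openapi-generator-cli generate -i minimal-petstore.yaml -g python -o client/` emits a typed client whose method names come from `operationId`, and the same spec can drive server stubs in other languages.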