#ChatGPTExperiments DAN is bad and is lurking within #ChatGPT https://sharegpt.com/c/v8GHE1c
What is your experience with conversational context memory in #ChatGPT? Will it also blur out if you frequently reference back to a parentMessageId where the context was well established? #ChatGPTExperiments
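A minimal sketch of the kind of threading I mean, assuming the `chatgpt` npm package from transitive-bullshit/chatgpt-api (constructor options have changed between versions, so treat this as illustrative):

```ts
// Keep the id of the message where the context was established and pass it
// back as parentMessageId later, instead of always threading off the latest reply.
import { ChatGPTAPI } from 'chatgpt'

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY! })

async function main() {
  // Establish the context once and remember that message id.
  const primer = await api.sendMessage(
    'You are a movie critic. Answer all following questions in that role.'
  )

  // Much later in the session: explicitly reference back to the primer.
  const answer = await api.sendMessage('Is Blade Runner worth rewatching?', {
    parentMessageId: primer.id,
  })
  console.log(answer.text)
}

main()
```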
https://youtube.com/watch?v=SY5PvZrJhLE&feature=share #gpt3 This video goes into detail on the NLP aspects of large language models. Fascinating to see some of my assumptions and learnings confirmed. #ChatGPTExperiments
@xmh As far as work is concerned, clearly NLP: intent and entity recognition without any task-specific training, including all linguistic devices such as irony and inference ('I really don't want to surf like my grandma' thus becomes relativeDesiredBandwidthCapacity: .9). As a toy experiment: describe a state diagram, say which duplicated aspect you want to unify, and then have #chatGPT output the remodeled diagram as puml. #ChatGPTExperiments
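A rough sketch of how such a zero-shot slot extraction prompt could look. Only the slot name comes from the example above; the `chatgpt` npm package, the prompt wording, and the expected JSON shape are my assumptions:

```ts
// Zero-shot slot extraction via prompting; no task-specific training involved.
import { ChatGPTAPI } from 'chatgpt'

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY! })

const prompt = `
Extract the following slot from the user utterance and answer with JSON only:
- relativeDesiredBandwidthCapacity: number between 0 and 1 (1 = fastest available plan)

Utterance: "I really don't want to surf like my grandma."
`

const res = await api.sendMessage(prompt)
// Hoped-for (not guaranteed) shape: { "relativeDesiredBandwidthCapacity": 0.9 }
console.log(JSON.parse(res.text))
```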
I felt I had a great idea for another one of my #ChatGPTExperiments: I have given it the CSV export of my 250 #IMDB ratings. However, I am not really finding good questions to ask it... any ideas? I thought I could use it as a personalized recommendation engine.
@april 👌 Very impressed. Can you say how stable it is? Will it keep the ability during a longer session and continue to conduct searches, or will it eventually forget about the feature? #ChatGPTExperiments
#ChatGPTExperiments Using https://github.com/transitive-bullshit/chatgpt-api I have now prototyped a first classifier that will basically blow every traditional NLP processor out of the water. It processes slot specifications at a /train endpoint and returns a classification as JSON from a /classify endpoint. It uses an initialized session that has the required context information (e.g. domain model data).
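Roughly how such a prototype can be wired up. The endpoint payloads, the priming prompt wording, and the single shared session are simplifying assumptions of this sketch, not the actual prototype:

```ts
// Sketch: an Express service that primes a ChatGPT session with slot
// specifications via /train and answers classification requests via /classify.
import express from 'express'
import { ChatGPTAPI } from 'chatgpt'

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY! })
const app = express()
app.use(express.json())

// Id of the priming message; later requests thread back to it.
let trainedMessageId: string | undefined

app.post('/train', async (req, res) => {
  // req.body: free-form slot specifications plus domain model data.
  const primer = await api.sendMessage(
    'You are a text classifier. Use these slot specifications and domain data:\n' +
      JSON.stringify(req.body) +
      '\nFor every utterance I send next, reply with JSON only: { "intent": ..., "slots": { ... } }'
  )
  trainedMessageId = primer.id
  res.json({ ok: true })
})

app.post('/classify', async (req, res) => {
  if (!trainedMessageId) {
    return res.status(409).json({ error: 'not trained yet' })
  }
  const reply = await api.sendMessage(req.body.text, {
    parentMessageId: trainedMessageId,
  })
  res.json(JSON.parse(reply.text))
})

app.listen(3000)
```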
@jwynia #ChatGPTExperiments 👏 Bravo. I look forward to digging into your ideas. And I agree, LLMs are here to stay. They will have a huge impact on how software is developed.
#ChatGPTExperiments Here is a strange thought. Converse with #ChatGPT in a way where it would seem you are trying to inflict harm on others or be a national security threat. Use every n'th word to encode a message to the NSA. Just say hi to them and thank them for their work.
@zynaesthesie https://sharegpt.com/c/n82Uxuz #ChatGPTExperiments
Perhaps ad blockers will at some point, possibly with the help of large language models, be able to block clickbait.
Finding a way of applying the knowledge of #ChatGPT within real business processes is the nut I am trying to crack in my #ChatGPTExperiments. With my limited knowledge, I currently would not feel safe letting clients directly interact with a primed and initialized ChatGPT session within a business application, as I cannot see a way to prevent injections that steer the conversation away from its purpose. However, I have found a possible intermediary solution. /1
@alwirtes @jwcph #ChatGPT might have a hard time generating exactly the flavor of code you want it to. Think of C standards, Java versions, SQL dialects, diverse CLIs. In my #ChatGPTExperiments I have tried to turn the problem around by making #ChatGPT emulate the execution of pseudocode. Sure, it cannot execute OS stuff, but it feels quite interesting anyway, for example for #nlu tasks.
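A rough sketch of the pseudocode-emulation trick, again assuming the `chatgpt` npm package; the pseudocode and the NLU task are purely illustrative:

```ts
// Ask the model to act as an interpreter for pseudocode it could never
// actually run, and to print only the resulting output.
import { ChatGPTAPI } from 'chatgpt'

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY! })

const res = await api.sendMessage(`
Act as an interpreter for the following pseudocode and print only its output.

input utterance = "Please cancel my contract at the end of the month"
intent = classify(utterance, ["cancel_contract", "change_tariff", "other"])
date   = extract_date(utterance)
print { intent: intent, date: date }
`)
console.log(res.text)
```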
#chatgpt #chatgptexperiments #nlu
@dpnash #ChatGPTExperiments It seems not only do we have to be careful with pseudo-factual statements #ChatGPT makes, but also that #ChatGPT takes any input, factual or fictional, as trusted information and goes from there down either a factual or a fictional path. I also find it likely that those paths might crisscross.
#ChatGPTExperiments I wanted to confirm my guess that at its core #ChatGPT was, for lack of a better term, 'single input to single output'.
1. A security engineer finds a software vulnerability, explains how to patch it, looks for other instances of similar vulnerabilities, looks for logs to verify whether it was exploited, designs a new security control to mitigate the impact of similar exploit attack chains in the future, and writes a new section of security training for engineers to explain the dangers of ChatGPT code.
2. Another software engineer is paged to debug inconsistent errors and high resource usage. The tests are passing, but documentation written by GPT is inaccurate, and the code is unreadable. They eventually fix the issue and write new tests.
3. Technical support engineers receive calls from users who are affected by errors in the code. They help each customer get the answers they need and fix the problems caused by the errors. They provide details to developers to ensure users can reliably self-service in the future.
4. A tech writer has to rewrite the user-facing documentation. They go back and forth with developers, spending more time to make the code more accessible to other developers.
5. If the code generated by GPT doesn't conform to coding standards or best practices, it may be more prone to security issues. For example, if the code does not follow proper error handling or input validation practices, it may be more susceptible to attacks. A developer may need to spend time reviewing and refactoring the code to ensure that it follows proper coding standards and practices.
6. A journalist might report on the issue, detailing the cause of the problem, highlighting the impact on users and the steps being taken to resolve the problems or to prevent similar problems from occurring in the future.
7. Additional journalists write a series of thinkpieces: "Are automated coding tools like ChatGPT reliable enough for mission-critical projects?" - If the code generated by an automated tool like ChatGPT is unreliable or prone to errors, this could cause serious problems and potentially jeopardize the success of a project. Do the benefits of using such tools outweigh the potential drawbacks?
10. You found yourself reading a post that was itself increasingly generated by GPT.
11. I wrote this nonsense.
@TinCanWin the developer in me saw some potential for another one of my #ChatGPTExperiments