These are the rules we agreed on:
1. Understand the Topic: Before using an LLM to generate content, make sure you have a good understanding of the subject matter so you can determine whether the LLM-generated answer is correct.
2. Review and Edit Before Posting: Always read through LLM-generated content carefully to check for accuracy. Make any necessary corrections before posting.
3. LLM Declaration: Declare that the answer is LLM-generated.
These rules can be discussed in this thread and can be changed. They are not "admin-rules" but just something we agreed on as a community. Feel free to suggest changes.
-----------------------
This is how I started the thread:
I just answered a question and thought that, before writing an article about apache2 mod_php vs php-fpm, I could just let my LLM do that, manually verify its output, and then copy-paste it. I just reviewed the forum terms, and they do not say that LLMs may not be used to answer posts.
I do understand that, from a logical perspective, most users could just ask an LLM themselves and not post in a forum. I also noticed that there is an Ask tux bot in this forum, but it's archived and doesn't seem operational at the moment.
There are Linux questions (that I sometimes have) that my LLM or ChatGPT answers incorrectly (as in, it generates fantasy nonsense), but those are mostly very technical, and it takes a lot of Linux knowledge to see that the answers are incorrect or cannot be true.
By using an LLM I can help a user and save personal time as well. Sounds like a win-win to me.
@admins, what are the rules on this? Can I save time by using an LLM to answer questions here? I think it is fair, as some users might be unaware of LLMs at this point, or unable to verify the output (the answer I just posted, I read and verified beforehand that it is correct), or the user might not have a ChatGPT account and might lack the knowledge to install an open-source LLM on their OS.