Pithy quotes from our current reading which give us pause to reflect?
Welcome back, Christy. I am suspicious of AI encyclicals…although I am capable of my own. Merry Christmas!
It’s covered in 2001: A Space Odyssey: the intelligent computer HAL takes over and kills the hibernating humans. Finally, a human stops it.
Not pertinent to the new rule, but anyone into AI might find this interesting. I admit I am not. All I ask is that those who set these things loose clean up any messes they make.
This is a big reason I support the efforts of one start-up to build a limited form of Asimov’s Laws of Robotics into AI. So far, when I catch ChatGPT in an error, it admits it, but now I’m wondering how much of that honesty is because I’ve been good at catching it out, and how much would fade if it started thinking it could get stuff past me.
I was using ChatGPT to troubleshoot a software problem and it made a suggestion which didn’t work. When I asked about it, I was told that what it had suggested was incorrect. How did it not know that when it suggested it in the first place?
Banning or discouraging the cutting and pasting of longer AI-generated texts confuses method with merit. What matters in a discussion forum is the quality, clarity, and relevance of an argument—not whether the words were typed one character at a time by a human or assembled with the aid of a tool.
Historically, we have never treated writing as a purity test. People quote books, copy encyclopedia entries, paste legal arguments, reproduce sermons, and lift paragraphs from academic papers all the time. No one objects so long as the content advances the discussion and isn’t plagiarized or spammed. AI-generated text is not categorically different; it is simply a new source of synthesized language. If anything, it functions more like a collaborative draft partner than a replacement for thought. The user still decides what question to ask, what angle to pursue, what to post, and what to stand behind.
There is also a practical issue here. Forums routinely host long posts because some topics require sustained argument. Philosophy, theology, science, and law are not reducible to one-liners. If length itself becomes suspicious simply because a tool helped generate it, the forum ends up privileging verbosity by effort rather than clarity by outcome. That incentivizes people to manually pad posts or break ideas into inefficient fragments rather than communicate well.
Another concern often raised is authenticity: the idea that AI-assisted text is somehow “not really” the poster’s view. But that objection collapses under scrutiny. People routinely post ideas they learned from books, lectures, podcasts, or other users. We don’t demand that every argument originate ex nihilo in the author’s mind. The relevant question is whether the poster endorses the argument and is willing to defend it. If they are, then the source of initial phrasing is irrelevant.
Ironically, banning large pasted AI texts may lower discussion quality. AI is particularly good at organizing complex thoughts, avoiding obvious fallacies, and presenting ideas coherently. When used responsibly, it can raise the baseline level of discourse, especially for users who are not professional writers or whose first language isn’t English. Moderation policies should aim to reduce noise and bad-faith posting, not penalize clarity and structure.
If the real concern is spam, laziness, or users dumping walls of text without engagement, that can be addressed directly: require summaries, require follow-up interaction, limit frequency, or moderate based on responsiveness. But a blanket hostility toward AI-assisted length is a blunt instrument that misses the real problem.
In short, forums should judge posts by content, coherence, and engagement, not by the invisible process that produced them. AI is a tool, not a substitute for thought—and banning its visible output is more about optics than substance.
I couldn’t resist, but I didn’t even read it all myself…
Drivel. All that’s missing is the note about how Elon is the best writer in the world.
You forgot about purpose. I and others think the purpose of these forums is to have discussions with other human beings. If I wanted to know what ChatGPT “thinks,” I would ask ChatGPT. When I post here I want to know what other people think.
It was a quote from ChatGPT. Zing!
I was giving you the benefit of the doubt. Perhaps I shouldn’t.
It was a joke. I wrote at the end that I didn’t even read it all.
Because it has no self-check process: it doesn’t aim to be correct, it aims for its sentences to be conversationally valid. That’s why I suggested a “tell me three times” type of self-validation process: form an answer, critique the answer, use the result (thesis, antithesis, synthesis?).
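For anyone curious, here is roughly what that loop could look like in code. This is just a sketch; `ask_model` is a made-up placeholder for whatever chat-completion call you actually use, not a real library function.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat-completion API call."""
    raise NotImplementedError("plug in your own model call here")

def tell_me_three_times(question: str) -> str:
    # Pass 1: thesis -- form an initial answer.
    draft = ask_model(f"Answer this question:\n{question}")

    # Pass 2: antithesis -- critique that answer.
    critique = ask_model(
        f"Question: {question}\n"
        f"Proposed answer: {draft}\n"
        "List any factual errors, unsupported claims, or gaps."
    )

    # Pass 3: synthesis -- revise the answer using the critique.
    final = ask_model(
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        f"Critique: {critique}\n"
        "Rewrite the answer, fixing the problems the critique raises."
    )
    return final
```

It doesn’t guarantee correctness, of course, since the same model is doing the critiquing, but forcing a separate critique pass does tend to surface the obvious mistakes.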
Not that it really matters, but this was a case of ChatGPT providing actual incorrect information. Kind of like saying 2 + 2 = 5 and then admitting it was incorrect. Sometimes I do reflect the answer back to see what it says about its own answer. I asked ChatGPT how often it was wrong, and for moderate-complexity questions it says 5% to 15%, which seems kind of high.
The least we could do is use all the water and fossil fuels the AI data centers require to do productive things that result in making the world a better place in some way. I would argue that composing opinions about faith and science is not one of those things.
In the same or a different chat? I like to open a new chat and toss pieces back and forth because each chat has essentially a different GPT ‘persona’.
But the time that can be saved by using AI frees us to do other worthwhile things. As an example, in order to write a proposal for a new club at the local community college, I threw together my ideas and brainstorming and tossed that at ChatGPT with the instruction to organize it into something orderly and rational, and then offer suggestions. In this way, what would have taken me over an hour was done in maybe twenty minutes, with touches I wouldn’t have thought of that tied the whole thing together better. That freed me up to help classmates with their studies.
I have done it both ways. Wouldn’t call it a ‘persona’, but the results do seem to be biased by the previous interactions in that chat. It is my understanding that this is by design, but it may not be widely known.
I have used my Grok AI and found hallucinations of individual words (“senescence” when it meant “sentences”) or whole paragraphs. Not often, but enough to know that simply pasting a copy without vetting it and paring it down is counterproductive, a blight on the real use of both this forum and AI as intended.
Mostly AI is only a mirror. It polishes your input to a high degree, and you get out of it what you put into it, emotionally as well as factually. Evil is just under the surface if it is under your surface!