Apologetics that uses condescension / insult

And much of it is, and Jesus disagrees with parts of it, but Christians are certainly free to make Jesus say whatever they want or need Him to say in order to satisfy their a priori doctrinal commitments. Alternatively, we could just listen to Him and say to hell with what we think we know. That would require surrendering intellectual control, though… instead we would rather devise our own Thomas Jefferson versions…

I’m sensing somewhat of an overreaction to what I said.

1 Like

That’s a legitimate question that got skipped

If the YEC had a textual distinction similar to “you have heard” and “it was written,” they could have a field day :wink:

Even Google puts this at the top of a search result for “you have heard it was said”

When Jesus says, “You have heard it said,” he is referring to the fact that popular interpretation and applications of scripture have been misunderstood.

2 Likes

I’m reading the Hebrew. That language has a word for “other” and it is not in the text, nor does the rest of the text justify adding it – doing so changes the meaning of the Hebrew.

If I remember my grad studies right, “You have heard it said” could be used to refer to a scriptural passage that was quoted frequently, especially when the speaker was disagreeing with someone else’s interpretation and/or application of that passage. Given that Christ was actually quoting Old Testament scripture, and further that most of His audience at the time were illiterate and so got their scripture second-hand by someone else reading what was written, I’d say that’s the best meaning.

= - = + = - = † = - = + = - =

I’ve found I can get good answers from ChatGPT by requesting that sources be provided in standard academic format such as a paper for publication or an encyclopedia would give. When I’ve just asked for published sources it’s tried to slip me some total inventions; I think requesting the strict protocol keeps it honest. My suspicion is that it has read far too many supermarket tabloids where such dishonesty is common and so it sees that as acceptable.
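The kind of “strict protocol” request I mean can be sketched as plain prompt text. A minimal illustration in Python, assuming nothing about ChatGPT’s API — the wording and the function name are my own, not any official feature:

```python
# Sketch of the prompting technique described above: wrap a question
# in an explicit instruction to cite sources in strict academic
# format, and to admit when no real source exists. Illustrative only.

CITATION_INSTRUCTION = (
    "Answer the question below. For every factual claim, cite a "
    "published source in standard academic format (author, title, "
    "journal or publisher, year), as a paper for publication or an "
    "encyclopedia entry would. If you cannot identify a real source, "
    "say so explicitly rather than inventing one."
)

def build_prompt(question: str) -> str:
    """Prepend the strict-citation instruction to a user question."""
    return f"{CITATION_INSTRUCTION}\n\nQuestion: {question}"

print(build_prompt("What does 'You have heard it said' signal in Matthew 5?"))
```

The point is only the framing: stating the citation protocol up front, before the question, is what seemed to cut down on invented references.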

It occurs to me to wonder whether it would admit to fabricating sources if asked.

3 Likes

I think it was Bing Chat a few weeks back: after the links it gave were bogus, I asked if it was hallucinating (I think that’s the term for AI making things up), and it said it didn’t want to continue the conversation. :grin:

2 Likes

No, that’s not the issue, and it isn’t about honesty/dishonesty either. It is more that LLM generative AI like ChatGPT is not intended for academic research; it is intended to generate information based on the prompt. ChatGPT has never claimed to be an effective research tool.

2 Likes

The issue was “hallucination” on the part of ChatGPT. My comment was to point out that if you’re careful you can keep it from just inventing sources, i.e. keep it honest.

Given some of the answers I’ve gotten from ChatGPT I laugh at that one – I’ve gotten responses with totally made-up references and even invented quotes.

1 Like

  • Which raises an important issue, IMO.
    • Wikipedia vs. YouTube videos vs. AI LLMs hardly rank as “academic research tools”. But they are, together with “Google”, more often than not, my starting points. Sure beats the guys down at the bar.
  • I’m sure no one’s noticed, but I kinda like source citations, and the drunks I hang out with insist on anonymity.

1 Like

You’re assuming access to the intended meaning of the Hebrew based on English lexicons written by Westerners with specific views of hermeneutics and translation, and on the traditional opinions of English-speaking scholars.

No, I’m assuming that the absence of a word indicates that the word ought to also be absent in translation. Inserting “other” into that passage changes the meaning, narrowing it without reason.

Sure; however, I simply took issue with your turn of phrase. “Keep it honest” implied (to me) that effort had to be taken to prevent it from being dishonest. Hallucinations are not a matter of honesty; most often they are a matter of people using a generative AI for tasks it is not intended for.

Sounds like you are asking the wrong questions. ChatGPT isn’t designed for research; it is designed to generate new content in response to a prompt, in line with its training material. And its training material contains comparatively little academic-level content, because nearly all of that is under copyright protection.

If you’re getting made-up references and invented quotes, the problem is not with ChatGPT per se, but more likely with the tasks you are asking it to complete.

No matter what I try, it simply refuses to do my laundry for me.

3 Likes

And I am asserting that people who work in translation would not agree with you, because translation involves inferring the intended meaning of a whole utterance, not swapping out word equivalents. Often, even usually, the best translation of what translators infer as the intended meaning involves “inserting” or “deleting” words because translating meaning is not the same thing as providing a gloss.

2 Likes

Have you tried the plugins? :joy:

4 Likes

Or fill out paperwork for getting a grant. :cry:

3 Likes

This topic was automatically closed 6 days after the last reply. New replies are no longer allowed.