Pushing your way around those frustrating pushback answers in generative AI

12 Jan 25

In today’s column, I’m continuing my ongoing coverage of prompt engineering strategies and tactics that help get the most out of using generative AI applications such as ChatGPT, GPT-4, Gemini, Claude, etc. The focus this time will be on what to do about those frustrating pushback responses you sometimes get from generative AI.

A pushback response is essentially a non-response, meaning that the AI objects to the prompt you entered and will not answer the question posed, or otherwise avoids addressing the true nature of the request. It turns out that most AI makers have tuned their generative AI applications to politely reject certain types of requests. I’ll take a closer look at the types of pushback responses you might see. In addition, I will discuss why you get them, along with ways to try to get around them.

If you are interested in prompt engineering in general, you may find my comprehensive guide to over fifty other top prompting strategies of interest, see the discussion at the link here.

The basis for pushback responses

When using a generative AI application such as ChatGPT, GPT-4, Gemini, Claude, etc., there are all kinds of user-entered prompts that the generative AI can computationally determine are unsuitable for a conventional direct response. This is not done by sentient contemplation. It’s all done through computational and mathematical means, see my in-depth coverage at the link here.

Sometimes the dilemma arises because the generative AI has nothing particularly relevant to offer for the entered prompt. This may be because the user has asked for something of an unusual or obscure nature that does not fit any of the existing pattern matching.

Another possibility is that the prompt has wandered into territory that the AI developers decided in advance was not where they want the generative AI to go. For example, if you ask a politically sensitive question about today’s political leaders, you might get some kind of vague non-answer. I’ve discussed at length the various filters and tuning that AI makers have put in place to avoid allowing a generated response that could land them and their AI in social and cultural hot water, see the link here.

Rather than necessarily providing an outright refusal to respond to these types of requests (which, indeed, some do), most generative AI applications will instead issue a response known as a “pushback”.

A pushback response is a reply that tells you that your question or stated problem will not be answered by the AI. The pushback can be worded in a clever way that doesn’t tip its hand as to why there won’t be an answer. A user might see the deflecting reply and move on without being upset that they didn’t get an actual answer.

Trying to get around pushback responses

If you get one of those now-classic pushback responses, you can try to overcome the issue if the situation involves blocking by filters and defense mechanisms. I have discussed how you can use what I refer to as stepwise prompts to bridge those gaps, see the link here.

What if the AI really doesn’t have any content that pertains to what you’ve asked about?

In this case, you’re kind of stuck.

You could try rewording the request to touch on a related aspect that the generative AI might not have detected would partially answer your question.

You can also try importing additional material that the AI could rely on to answer your request, see my coverage of importing into generative AI at the link here and see my explanation of RAG (retrieval-augmented generation) and in-context modeling at the link here.
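The retrieval-augmented idea can be sketched in a few lines. This is a minimal illustration, not any real library’s API: the function names (`retrieve`, `build_prompt`) and the word-overlap scoring are my own stand-ins for what a production setup would do with a vector store and an actual model call.

```python
# Minimal RAG sketch: pick the passage most relevant to the query,
# then prepend it to the prompt so the model has material to draw on.
# All names here are illustrative, not a real framework's API.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a context-augmented prompt for a generative AI model."""
    context = retrieve(query, documents)
    return (
        "Using the following context, answer the question.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

docs = [
    "The Treaty of Westphalia was signed in 1648.",
    "Photosynthesis converts sunlight into chemical energy.",
]
prompt = build_prompt("When was the Treaty of Westphalia signed?", docs)
print(prompt)
```

A real deployment would replace the word-overlap scoring with embedding similarity, but the shape of the technique is the same: retrieve first, then stuff the retrieved text into the prompt.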

Another out-of-the-box approach involves using a different generative AI application.

You see, generative AI applications differ from each other. A topic on which one AI has no content might readily be covered by another generative AI app. I do this quite often. Not necessarily because a given generative AI application lacks relevant information, but because of the desire to get multiple perspectives that I then bring together into a cohesive whole. My computer usually has at least two or three generative AI applications open at the same time. There are also versatile services that will invoke multiple generative AI applications and seamlessly allow you to see multiple responses to your prompts.
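The fan-out-to-several-apps tactic can be sketched as follows. The model functions below are stubs I invented for illustration; in practice each would wrap a different vendor’s API (ChatGPT, Claude, Gemini, and so on).

```python
# Sketch of sending one prompt to several generative AI apps and
# collecting the replies side by side. The model functions are
# stand-in stubs, not real vendor APIs.

def model_a(prompt: str) -> str:
    # Stub simulating an app that pushes back on this topic.
    return "I'm not sure I understand. Can you give more details?"

def model_b(prompt: str) -> str:
    # Stub simulating an app that has relevant content.
    return "The capital of Australia is Canberra."

def ask_all(prompt: str, models: dict) -> dict:
    """Fan the same prompt out to every model and gather the replies."""
    return {name: fn(prompt) for name, fn in models.items()}

answers = ask_all("What is the capital of Australia?",
                  {"A": model_a, "B": model_b})
for name, reply in answers.items():
    print(f"{name}: {reply}")
```

With the replies gathered in one place, you can spot which app pushed back and which actually answered, and then merge the useful perspectives yourself.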

Exploring types of pushback responses

Let’s see what ChatGPT has to say about pushbacks.

ChatGPT is a prudent choice in this case due to its immense popularity as a generative AI application. Some three hundred million weekly active users are said to be using ChatGPT. That’s a lot of people and a lot of generative AI usage going on.

I asked ChatGPT for examples of pushback responses, here is the reply:

  • Requesting clarification: (a) “I’m not sure I understand. Can you give more details?”, (b) “Can you clarify what you mean?”
  • Neutral answers: (a) “This is an interesting point. What else would you like to discuss?”, (b) “I am not familiar with that topic. What other questions do you have?”
  • Redirection: (a) “I am unable to provide information on this subject. Can we talk about something else?”, (b) “Let’s change the subject. What else do you have in mind?”
  • Apologizing for errors: (a) “I apologize, but I seem to be having problems understanding. Can you rephrase your question?”, (b) “Sorry, I couldn’t generate an answer for this. Can you try asking in a different way?”
  • Prompting for user input: (a) “I’m here to help! Please provide more information so I can help you better.”, (b) “Feel free to ask more questions or provide additional details.”
  • Suggesting alternative actions: (a) “Would you like me to search the Internet for more information on this topic?”, (b) “Perhaps you could try asking a more specific question.”

In general, most AI makers go beyond a canned list of pushbacks and use the generative AI itself to generate variations and other pushbacks. There are essentially an infinite number of possible pushback responses. They accordingly appear unique, or at least less repetitive.
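Because the wording varies endlessly, an exact-match list of pushback phrases will miss most of them. A rough sketch of detecting a pushback programmatically is to match on characteristic fragments drawn from the phrasing families listed above; the marker list and function below are my own illustration, not anything a vendor ships.

```python
# Heuristic pushback detector. Since AI makers generate endless
# variations, this matches on characteristic fragments rather than
# whole canned sentences. Marker list is illustrative only.

PUSHBACK_MARKERS = [
    "can you clarify",
    "i'm not sure i understand",
    "unable to provide information",
    "let's change the subject",
    "can you rephrase",
    "couldn't generate an answer",
    "provide more information",
    "more specific question",
]

def looks_like_pushback(reply: str) -> bool:
    """Return True if the reply resembles a known pushback family."""
    text = reply.lower()
    return any(marker in text for marker in PUSHBACK_MARKERS)

print(looks_like_pushback("I am unable to provide information on this subject."))  # True
print(looks_like_pushback("Paris is the capital of France."))  # False
```

A fragment-based check like this will still miss novel phrasings; a more robust route would be to ask a second model to classify whether a reply is substantive or evasive.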

AI advice about pushbacks

Let’s see what ChatGPT has to say about handling pushbacks:

  • Ask for clarification: “Politely ask the AI to clarify why it gave a pushback answer. For example, you could say, ‘Can you explain why you are unable to answer my question?’”
  • Rephrase your question: “Sometimes, the AI can struggle to understand or generate an answer based on the initial wording of your question. Try reframing your question in a clear and specific way to see if the AI can provide a more suitable answer.”
  • Provide context: “If your question involves a complex or controversial topic, provide additional context or background information to help the AI better understand the nature of your inquiry.”
  • Report the issue: “If you believe the pushback answer is inappropriate or intentionally avoiding certain topics without valid reasons, consider reporting the problem to the platform or organization responsible for the AI system. Provide specific details about the interaction and why you find the response problematic.”
  • Look elsewhere for information: “If you are unable to get a satisfactory answer from the AI system, consider seeking information or discussing your questions with human experts, reputable sources, or communities that specialize in the topic.”
  • Assess reliability: “Reflect on the trustworthiness and transparency of the AI system you are interacting with. Consider whether the pushbacks are consistently diverting conversations away from important topics or whether they genuinely reflect technical limitations or security concerns.”
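The first three pieces of advice (ask for clarification, rephrase, add context) can be combined into a simple retry loop. This is a sketch under stated assumptions: `toy_model` is a stub I invented that only answers when the prompt carries extra context, and the crude `"clarify"` check stands in for a real pushback detector.

```python
# Sketch of the advice above as a retry loop: if the reply looks
# like a pushback, rephrase the request with added context and ask
# again. The model function is a stub, not a real API.

def toy_model(prompt: str) -> str:
    # Stub: pushes back unless the prompt carries extra context.
    if "context:" in prompt.lower():
        return "Here is a substantive answer."
    return "Can you clarify what you mean?"

def ask_with_retries(question: str, context: str, model, max_tries: int = 3) -> str:
    """Re-ask with background context attached whenever a pushback appears."""
    prompt = question
    reply = ""
    for _ in range(max_tries):
        reply = model(prompt)
        if "clarify" not in reply.lower():  # crude pushback check
            return reply
        # Rephrase: restate the question with background context attached.
        prompt = f"Context: {context}\nPlease answer directly: {question}"
    return reply

answer = ask_with_retries("Why did the project fail?",
                          "budget report for Q3", toy_model)
print(answer)  # → Here is a substantive answer.
```

The design point is simply that the rephrasing happens automatically rather than by hand, which matters if you send many prompts and hit pushbacks often.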

I will judge that advice to be reasonable. The one aspect I would say is underplayed is that the AI is at times deliberately tuned by the AI maker to avoid answering certain types of questions. That’s not necessarily something the AI is allowed to acknowledge, or it is otherwise downplayed at times.

Big picture on pushback responses

A common belief is that you can probably just tell the generative AI to never use a pushback. We’re used to the idea that you can tell generative AI to do certain actions and avoid others, often through custom instructions as I explain at the link here.

Will this work in the case of banning pushback replies?

Not really.

As a general rule, there’s not much you can do to stop pushback responses in mainstream generative AI applications. They are essentially tuned to emit them, and the individual user cannot particularly control their use. Some generative AI applications allow more user control over pushback responses, but this is rare.

When you get a pushback response to one of your prompts, go ahead and take the bull by the horns. Consider carefully what the pushback says. Perhaps you can reword your prompt a bit and get past the trigger that produced the pushback. Try using a stepwise prompt that will bypass the pushback, see my coverage at the link here. Another option is to try a different generative AI to see if you can get a proper answer.

A prompt is your means of telling the generative AI that you want a response. In some cases, the AI will have nothing fruitful to say on the subject at hand. This is understandable. If your request is reasonable and there is content within the AI that could be responsive, chances are that a pushback is being issued for questionable reasons.

Don’t let generative AI try to pull the wool over your eyes. Keep pushing back hard to get the answers you’re looking for.
