While studying for a course, I presented ChatGPT with a scenario and four corresponding approaches, asking it to assign a score to each one. The scores ChatGPT was allowed to hand out were 0, 1, 3, or 5 points. What ChatGPT did well was explain the score for each approach clearly... However, it provided the following distribution:

Which, of course, is fundamentally wrong at a high level. I pointed this out and made it clear to ChatGPT, and it claimed to understand... Next scenario... ChatGPT then came back with this result:

You see, the entire structure of 0, 1, 3, or 5 points per question is completely gone. Not only that, it even invented its own scoring system based on something I never asked for.

So, what's the moral of the story? AI is fun... it's a tool, and it has already helped me solve problems in test automation more than once. In fact, I once asked a question where the answer made it feel like I was talking to someone with an IQ of 160... fantastic! But it's nowhere near the point where we can rely on AI completely and blindly accept everything it says. When you see some of the mistakes ChatGPT makes, you sometimes wonder whether it will ever get to that point... AI is far from flawless.

Great related article: the ChatGPT Voldemort syndrome

-=-=-=-=-=-=-=-=-=-=-=-=- (this is kinda funny if you get it) -=-=-=-=-=-=-
I asked ChatGPT another question, and it answered:

The error you describe, where someone incorrectly assigns scores in a scenario-based ICT exam, can be referred to using the English terms "misallocation error" or "scoring bias." These are specific examples of mistakes stemming from cognitive processes or misinterpretation of the task. Here are some possible causes:

Misinterpretation of Instructions: the candidate misunderstood the instructions, for instance failing to realize that each score can only be used once.
Think that's nice? It gets even BETTER: If a Large Language Model (LLM) were to make such a mistake, it might be attributed to what could be described as a "context misalignment syndrome" or "instruction misinterpretation syndrome." These terms highlight the underlying causes of the error in terms of how LLMs process information and execute tasks. Here are some possible nuances to this "syndrome":

1. Context Misalignment Syndrome: the LLM fails to properly align its interpretation of the input context (scenario, instructions, or scoring rules) with the expected output.
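For the testers among us, here is a minimal sketch of the check that ChatGPT kept failing: the rule from the scenario above is that the four approaches must be scored with 0, 1, 3, and 5 points, each value used exactly once. The approach labels A-D below are placeholders of my own; the actual scenario text and ChatGPT's distributions aren't reproduced here.

```python
# Minimal sketch (my own illustration, not from the course material or the chat above).
# Rule: the four approaches are scored with 0, 1, 3 and 5 points, each used exactly once.

VALID_SCORES = {0, 1, 3, 5}

def is_valid_assignment(scores: dict[str, int]) -> bool:
    """True only if all four approaches got a score and the values 0, 1, 3, 5
    were each used exactly once (i.e. the values are a permutation of the scale)."""
    return len(scores) == len(VALID_SCORES) and set(scores.values()) == VALID_SCORES

# Hypothetical approach labels A-D:
print(is_valid_assignment({"A": 5, "B": 3, "C": 1, "D": 0}))  # True  - each score used once
print(is_valid_assignment({"A": 5, "B": 5, "C": 3, "D": 1}))  # False - 5 reused, 0 missing
```

Two lines of validation logic, and yet the "distribution" I got back would not have passed it.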
What do software testing and the Giza pyramid have in common? Well, more than you might think!! First of all, when a software tester goes through a thinking process, their perspectives and insights often shift from left to right, or up, or down. Let me ask you this: do you think the Giza pyramid was a tomb for a Pharaoh? If you do, I'd suggest you watch this video first. Especially mind the moment at 6 minutes and 5 seconds, where the creator of the video states: IF A PIECE OF IT DOESN'T WORK, IT DOESN'T WORK. This is very often the case when developing software. Give the video a try and see if your mindset and views change while watching... If that happens, remember this moment the next time you're testing a functionality or module of a software information system...
View it on Prezi itself for the FULL view... (better)