A week has passed since the initial mishap that caused a stir across the country. Frankly, no one would’ve expected two more similar incidents to crop up within that timeframe, but here we are. While all three happened separately, they share the same point of controversy: an incorrect depiction of Malaysia’s national flag, the Jalur Gemilang.
The third, which made headlines just recently, is perhaps the most contentious yet, as Malaysia’s own Ministry of Education was involved in the blunder thanks to an image it used in the latest Sijil Pelajaran Malaysia results analysis report released today (shown above). As with the prior two incidents, the picture shows what is supposed to be the Jalur Gemilang, but depicted incorrectly – and in the most jarring way possible.
The Ministry of Education has since apologised for the error and recalled printed copies of the report. Meanwhile, Prime Minister Anwar Ibrahim has expressed concern over the recent incidents and stressed that AI cannot replace human editorial judgement and quality control. His statement was conveyed by his office not long after reports of the Ministry of Education’s blunder began circulating publicly.

And much like the previous two episodes, which also involved the misrepresentation of the Jalur Gemilang, it was never specifically confirmed that generative AI was involved. But let’s be honest – what are the actual chances of a graphic designer or artist not using any reference when illustrating the national flag?
Let’s say it was indeed AI all along; how, then, did it get the distinct features of the Malaysian flag wrong? One plausible explanation is that whichever model was involved wasn’t trained on enough material depicting the Jalur Gemilang, and it’s probably safe to say that not many requests involving the flag had been made until recently. With so little to go on, it’s unsurprising that the model resorted to hallucination to complete the picture.
Even if generative AI was indeed the culprit, the issue doesn’t end there. As we highlighted in our feature article from last week, someone still needs to approve these artworks before they’re published, printed, or shared. Yet that doesn’t seem to be happening, as this mistake has occurred not once, but three times already – separately, no less.


Was it oversight or just carelessness? It’s hard to say for sure. But one thing is clear: some people are still placing blind faith in generative AI – whether it’s for images, video, or text – while abandoning responsibility and integrity without a second thought. You’d think the first blunder would’ve served as a cautionary tale. And yet, history repeats itself.
AI isn’t supposed to replace human professionals. However, those who misuse the technology are effectively enabling and encouraging that very replacement, whether they realise it or not.