Well, we can all see that the AI images are impressive but fundamentally wrong, because we're familiar with the range of 'correct' outputs. However, most of us wouldn't immediately recognise an AI's protein-folding suggestion as wrong - or right. The problem is so complex that the predicted structure presumably has to be taken on trust - at least for now.
AI doesn't (yet) have understanding, so it doesn't recognise its own mistakes. The models employed are far too complex for humans to inspect, so it's often difficult to work out how, why or where a system went wrong. The danger comes when humans believe AI-generated results and apply them in areas where error can't be tolerated. Especially in novel areas of activity, it may be impossible for a human to recognise that the AI output is garbage, and equally impossible to 'trouble-shoot' the system to prevent similar errors in future. I'm in no hurry to be driven by an AI-controlled vehicle, where anticipation and experienced judgement are vital and can save lives.