As is known, Artificial General Intelligence (AGI) does not exist. The introduction in 2020 of generative pre-trained transformer (GPT) models enabled communication with computers in human language, revolutionized call centers and started a boom in smoothly written texts and routinely composed graphics. Mass investment followed, with businesses and corporations promoting GPT tools in expectation of returns. The field, however, is already overinvested relative to the expected applications, and the AI bubble is expected to burst with stock-exchange losses. I wonder why science is jumping on the bandwagon, especially since wider use of large language models (LLMs) may have long-term educational implications, including changes in brain connectivity (e.g. arXiv:2506.08872 (2025)). In science, intelligence without understanding can be dangerous.
Recently I needed the polynomial form of the multiple (triple and quintuple) self-convolutions of the rectangular step function centered around zero. To check the correctness of my computations, I asked www.perplexity.ai for the results. Below is the conversation:
The final proposed form of the function is also wrong!
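For reference, and assuming the rectangle in question is the unit-width, unit-height box centered at zero, the correct answer is classical: the n-fold self-convolution of that box is the cardinal B-spline of order n, a piecewise polynomial of degree n-1 supported on [-n/2, n/2],
\[
B_n(x) \;=\; \frac{1}{(n-1)!}\sum_{k=0}^{n}(-1)^k\binom{n}{k}\left(x+\tfrac{n}{2}-k\right)_+^{\,n-1},
\qquad (y)_+ := \max(y,0).
\]
In particular, the triple self-convolution (n = 3) is piecewise quadratic, B_3(x) = 3/4 - x^2 for |x| <= 1/2 and (3/2 - |x|)^2 / 2 for 1/2 <= |x| <= 3/2, while the quintuple one (n = 5) is piecewise quartic. A rectangle of different width or height only rescales this result.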
It is accepted that on some matters AI can hallucinate and that the user takes full responsibility for the results. But when it comes to relatively simple mathematics, this should not happen. If AI tools in science are to be used as assistants, the value of such an assistant is questionable. I would not waste my time on a dumb, non-understanding assistant that cheats without blinking an eye (as it has none).
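Checking such an answer does not require an AI assistant at all; a few lines of code suffice. Below is a minimal sketch, assuming the unit-width, unit-height rectangle centered at zero, that compares a direct numerical triple self-convolution with the quadratic B-spline closed form quoted above; any closed form proposed by a chatbot can be tested the same way.

```python
import numpy as np

# Unit-width, unit-height rectangle centered at zero -- an assumption about
# the intended step function; adjust width and height if yours differ.
def rect(x):
    return np.where(np.abs(x) < 0.5, 1.0, 0.0)

# Reference closed form of the triple self-convolution (the centered
# quadratic B-spline); this is the expression being verified.
def b3(x):
    ax = np.abs(x)
    return np.where(ax <= 0.5, 0.75 - ax**2,
                    np.where(ax <= 1.5, 0.5 * (1.5 - ax)**2, 0.0))

# Symmetric grid so the discrete and continuous convolutions stay aligned.
x = np.linspace(-3.0, 3.0, 6001)
dx = x[1] - x[0]
r = rect(x)

# Discrete approximation of rect * rect * rect; each convolution carries a
# factor dx from the Riemann sum.
numerical = np.convolve(np.convolve(r, r, mode='same'), r, mode='same') * dx**2
print(np.max(np.abs(numerical - b3(x))))  # small, of the order of dx
```

The same comparison, with the quartic B-spline in place of b3, applies to the quintuple self-convolution.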
