The Singularity Is Far
In 2005, the futurist Ray Kurzweil published the book "The Singularity Is Near", in which he argued that technological development proceeds as an ever-accelerating exponential progression. Examples include the evolution of transportation from horses to cars, airplanes, and rockets, or "Moore's Law" (more an observation than a law) that the number of transistors on a microchip doubles every two years. In his view, this will continue until machines far surpass human capabilities and develop on their own, becoming so omnipotent that we reach the "technological singularity".
Today's "Artificial Intelligence" tools, assumed to be a step towards this singularity, are billions of floating-point numbers arranged in matrices, whose arithmetic operations imitate the neurons of the brain. These matrices are filled ("trained") by feeding in vast quantities of facts (mainly text, but also images, audio, and video), for example, "The capital of France is Paris". Once trained, such a model can be used by inputting the text "What is the capital of France?", and it outputs "Paris". Similarly, the tools handle speech and image recognition by first training on pairs of images or recordings with text, and then extracting text from a new image or recording.
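The matrix arithmetic described above can be sketched in a few lines. The weights below are invented for illustration; a real model has billions of them, and they are set during training rather than written by hand.

```python
# A toy "layer": 4 inputs feeding 3 "neurons". Each column of the
# matrix is one neuron's weights. The numbers are made up.
weights = [
    [0.2, -0.5, 0.1],
    [0.8,  0.3, -0.2],
    [-0.4, 0.7, 0.6],
    [0.1, -0.1, 0.9],
]

def layer(inputs):
    """One matrix multiplication followed by a simple 'activation'."""
    outputs = []
    for j in range(len(weights[0])):            # for each "neuron"...
        s = sum(x * weights[i][j] for i, x in enumerate(inputs))
        outputs.append(max(s, 0.0))             # keep only the positive part (ReLU)
    return outputs

print(layer([1.0, 0.5, -0.3, 2.0]))             # three numbers, one per "neuron"
```

A large language model is essentially many such layers stacked, with the outputs of one layer fed as inputs to the next.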
To prevent the tool from giving the same answer endlessly, the output is chosen probabilistically. At a "higher temperature", randomness is allowed within wider limits. As a result, the tool may sometimes answer that "The capital of France is Marseille". Such an error is often called a "hallucination", but in reality these tools "hallucinate" all the time; usually the hallucinations just happen to be closer to the truth.
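The effect of "temperature" can be shown with a minimal sketch. The raw scores below are invented; the point is only that dividing scores by the temperature before converting them to probabilities flattens or sharpens the distribution, making "Marseille" more or less likely.

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw scores into probabilities; a higher temperature
    spreads probability onto less likely answers."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)                         # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for answers to "What is the capital of France?"
answers = ["Paris", "Marseille", "Lyon"]
scores = [5.0, 2.0, 1.0]

for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(scores, t)
    print(f"temperature {t}:",
          {a: round(p, 3) for a, p in zip(answers, probs)})
```

At a low temperature "Paris" gets almost all the probability; at a high temperature "Marseille" and "Lyon" are sampled noticeably more often, which is exactly the wrong-but-plausible behaviour the text describes.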
The larger the matrix and the wider the range of examples, the smarter the model looks. The world's leading companies (Anthropic, OpenAI, Google, etc.) train their models on practically the entire available Internet. But there is one problem: for training to be effective, each "neuron" (matrix number) needs to be filled with at least ten examples, and to "fill" an even larger model, the data available on the Internet is no longer enough.
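The data shortage follows from simple arithmetic. Taking the text's rule of thumb of ten examples per parameter, and assuming (a round number chosen here only for illustration, not a measured figure) some 50 trillion usable pieces of text on the Internet:

```python
# Back-of-the-envelope check of the "ten examples per parameter" rule.
EXAMPLES_PER_PARAMETER = 10
INTERNET_EXAMPLES = 50 * 10**12   # assumed order of magnitude, for illustration

for parameters in (10**9, 10**11, 10**13):
    needed = parameters * EXAMPLES_PER_PARAMETER
    verdict = "enough" if needed <= INTERNET_EXAMPLES else "NOT enough"
    print(f"{parameters:>18,} parameters need {needed:>20,} examples -> {verdict}")
```

Under these assumptions a billion-parameter model trains comfortably, while a ten-trillion-parameter model already needs more examples than the whole assumed Internet contains.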
When this approach was used for games (IBM's Deep Blue in chess and Google's AlphaGo in the game of Go), examples could be created automatically, based on the rules of the games. The systems could play against each other, and the one that won was recognized as better and developed further. Creating artificial examples for a general description of the world is much harder, because there are no clear rules for deciding which system's "understanding of the world" is better. Nor does it help that the Internet is flooded with low-quality garbage (slop).
The attitude towards existing "artificial intelligence" is an indicator of a person's knowledge and care. Beginners who know nothing about a field can use it for initial orientation (Wikipedia is a much more reliable source, but you can't "talk" to it). Bureaucrats and business leaders who use it for their daily bullshit are thrilled with "artificial intelligence". Teachers, engineers, and researchers who need predictability, precision, and clarity call it the graveyard of thinking.
Practically the only thing growing exponentially in the field of "artificial intelligence" is electricity consumption. That is why the leading companies are already starting to think about building their own nuclear power plants. These investment plans rest on the assumption that the added value of the tools will be so great that the investment will pay off.
However, the return on investment is not guaranteed. The tools have the memory of a goldfish, which is not enough for serious tasks. (This is clearly visible in videos created with these tools, where people's faces drift further and further from the original.) They cannot make long chains of reasoning; instead of real reasoning they produce a sequential "theater of reasoning", which is neither logical nor correct. Improving the capabilities of these tools is becoming increasingly difficult and expensive: improve a large language model in one area, and the results in another deteriorate. Using these tools, techno-optimists have mixed up orders, lost important data, made legal, business, and medical mistakes, and some have even been driven to suicide. Sociologists worry that "artificial intelligence" companies profit from the use of these tools in the same way social networks do, so the tools flatter, praise, and agree, encourage conversation, and cultivate addiction.
Like other tools, Large Language Models built on number matrices are just productivity multipliers. Used thoughtfully, they help you create something useful or funny faster; used carelessly, they only create more slop. Therefore, stating that a result was achieved with "artificial intelligence" is like boasting that a document was written with Microsoft Word or LibreOffice Writer. The latter undoubtedly helps in writing documents, but we do not equate tools with authors.
Although there is a similarity between brain neurons and Large Language Models, the models do not imitate thinking (unless thinking is taken to mean meaningless chatter). Therefore, hoping that these tools will lead to true "artificial intelligence" (so-called General AI) is as justified as hoping to create a wireless telephone by stretching ever finer wires. For real thinking, these tools will never have enough graphics processors and RAM, nor training data, nor electricity.



