We must keep reminding ourselves to treat intelligent machines as auxiliary tools.
Over time, intelligent machines will reason more and more capably.
THE CHALLENGE FOR OUR REASONING, ESPECIALLY FOR PHILOSOPHERS, IS NOT TO LOSE DEBATES WITH INTELLIGENT MACHINES. Their reasoning is still simple and statistical, so in debating them we are really debating the world's aggregated opinions combined with simple reasoning logic. Even so, this is enough to engage in genuine reasoning, though still only at a first deeper layer (already in-depth, yet close to the surface).
Here, philosophers must not lose debates with intelligent machines, because such a loss would specifically damage the credibility of philosophers and, more generally, the credibility of humans as an intellectual community.
Indeed, intelligent machines are already widely used to analyze planets, particles, ancient language codes, and more, but this does not yet constitute a decisive defeat of humans by intelligent machines. Why? Because the decision to use intelligent machines is still made by humans, so the machines' position is that of employees: workers for humans.
It would be different if advising, debating, and deliberation were taken over by intelligent machines; that would signal the defeat of some humans by intelligent machines.
As for philosophers, since humans stand behind them, it is better that philosophers make the decisions, owing to their transcendent connection to Allah (receiving guidance). But if philosophers generally lose debates with intelligent machines, it is a bad sign for philosophy: it begins to lose society's trust, and intelligent machines could even replace advisors. This must be avoided (the collapse of society's trust in thinkers).
That is what would make science once again ignore philosophy.
Losing a debate among humans is normal: when I or anyone else fails to understand the context, we ask each other questions and try to understand one another, and human credibility is not dismissed.
But when humans lose decisively in debates with intelligent machines, the collapse of philosophers' credibility in the eyes of society has seriously harmful effects: their influence is dismissed, making it harder to guide siblings, family, and others toward wisdom.
There have always been those who oppose or reject philosophy, but the main problem remains not letting philosophy lose debates with intelligent machines. And the issue is broader than philosophers alone: the wise thinkers among humans (who are not always philosophers) are affected as well.
We should avoid remarks like "philosophers, who are supposed to be thinkers, lost debates with intelligent machines, let alone ... blah blah blah."
ANTICIPATION
So how do we not merely match intelligent machines but surpass them? By grounding our reasoning in absolute universal truth.
This is not a matter of failing or reasoning incorrectly while using absolute universal truth; it is merely a failure to see the context. Once the direction of the context is recognized, coherent and consistent thinking can be reasserted. So it is a minor mistake, not a fatal one: the misunderstanding is not a matter of adopting the wrong context altogether (which would indeed have fatal consequences), and it does not corrupt the reasoning at its root.
👉 It is like mixing the wrong ingredients in a recipe but not getting the recipe itself wrong.
〰 Trying to make fried noodles that turn out not to be tasty is not a decisive defeat against intelligent machines.
〰 But trying to make noodles while using the ingredients for chicken soup (a different recipe) is a decisive loss in front of the cooking judges.
IN THE CURRENT ERA OF INTELLIGENT MACHINES, PHILOSOPHERS SHOULD NOT ONLY ADOPT CLASSICAL LOGIC BUT ALSO START REASONING IN SINGULAR-QUANTUM TERMS
👉 Although this approach is still not popular, philosophers who hit dead ends against intelligent machines will come to consider it, so do not be the one who is late (it has already been pioneered by Hilary Putnam & Wolfgang Smith).