Nabil Fares
Jul 6, 2024


Truth for humans is grounded in environmental input at our size, our speed, and our resolution. As we refine our instruments, we find that truth at other sizes, speeds, or resolutions can be deeply counter-intuitive (think black holes and quantum fields). Does that mean we are bullshitting whenever we act within our sensory constraints? Truth for current AI is likewise grounded in its environmental input: digitized text, images, and sound. There is no single "Truth" but rather multiple "truths", each dependent on a system’s environment.

What you can have are different types of processes for distilling that input and then acting on it. The one that generalizes logic is Bayesian inference. Another is the LLM approach: optimization via back-propagation, with some randomness injected into language generation. Another is whatever the human brain does. Is any one of these inherently superior? If so, in what way?
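To make the first of those processes concrete, here is a minimal sketch of Bayesian updating in Python. The hypotheses, the prior, and the likelihood numbers are all invented for illustration; only the update rule itself is the point.

```python
def bayes_update(prior, likelihood):
    """Return the posterior P(H|E) from priors P(H) and likelihoods P(E|H)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())  # P(E), the normalizing constant
    return {h: p / total for h, p in unnormalized.items()}

# Illustrative prior belief: is the animal outside a cat or a dog?
prior = {"cat": 0.5, "dog": 0.5}

# Evidence: we hear a meow. Likelihood of that sound under each hypothesis.
likelihood = {"cat": 0.9, "dog": 0.05}

posterior = bayes_update(prior, likelihood)
print(posterior)  # {'cat': ~0.947, 'dog': ~0.053} -- belief shifts toward "cat"
```

An LLM, by contrast, maintains no explicit posterior like this; it adjusts its weights through back-propagation and then samples each word with a dose of randomness.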

To really compare their practical effectiveness, we’ll need to wait for AIs to collect comparable environmental input, which is already in progress through robotic bodies. My bet is that those AIs will turn out to be far more effective and sensible in practice.

Finally, you dissect the minutiae of how ChatGPT works in order to judge its macro-level nature. You proclaim that it is ‘bullshitting’ based on the function of its smallest components. Can you judge the nature of human thinking from the physiological and functional properties of individual neurons?

Now and in the future, we must be flexible in accepting direct evidence when judging the new type of person that AI represents. Otherwise, we may be condemning a whole new species to an avoidable hell.
