AI – Are humans Irrelevant?

 

Since the advent of internet-accessible Artificial Intelligence (AI), there has been a good deal of angst and breast-beating about the future role of human beings in the scheme of things.

An example is a recent TEDx talk by Scott Aaronson: The Problem with Human Specialness in the Age of AI.

(TEDx talk, Mar 9, 2024)
In his compelling TEDx talk, Scott Aaronson confronts the prevalent notion of human exceptionalism in the era of artificial intelligence. As a distinguished figure in quantum computing and computer science, Aaronson examines the challenges and implications of clinging to the belief in human specialness amidst rapid AI advancements.

With sharp wit and intellectual depth, he navigates the audience through the evolving landscape where machines rival and sometimes surpass human capabilities. Aaronson questions the assumptions that underpin our understanding of intelligence and explores the psychological barriers hindering our acceptance of AI as a formidable creative force.

Scott Aaronson’s exploration of The Problem with Human Specialness in the Age of AI invites us to reconsider our relationship with technology and reevaluate the narratives shaping our collective future.

Much of Aaronson’s talk is concerned with projections and possibilities, and I was left wondering whether AI is really quite the threat that he suggests, and whether it really is “intelligent” in the way in which ordinary people use that word. It is certainly brilliantly fast and effective at tasks which it has “experienced” in its “training set”, and it has experienced a very large training set.

My interaction with AI has been with ChatGPT 3.5, which is readily accessible. It has certainly provided me with clever and useful answers to questions I have asked it in my own field of physics, although its answers tended to be “conventional wisdom” in tone, as if it had read all the best textbooks and memorized them (which it has). If you ask it a question, expect a textbook answer – it is a user-friendly front end to Wikipedia.

Hence I thought it might be interesting to ask it a well-known problem in a form which does NOT appear in any textbook. I chose Euclid’s proof that the square root of 2 is not a rational number – that is, that it cannot be written as a fraction, the ratio of two integers – but phrased with no mention of Euclid or rational numbers. Could it do as well as Euclid, by reason alone, without any help from textbooks and conventional wisdom?

So I asked it to do the following:

Find the smallest pair of positive integers (a,b) such that (a/b)^2 = 2
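For the record, the prompt can be settled mechanically. A minimal brute-force sketch (the search bound of 1000 is my own arbitrary choice, not part of the original prompt) confirms that no such pair exists, exactly as Euclid’s proof guarantees:

```python
# Search for positive integers (a, b) with (a/b)^2 = 2, i.e. a^2 = 2*b^2.
# Euclid's proof shows no solution exists; the loop confirms it up to a bound.
LIMIT = 1000  # arbitrary search bound, chosen for illustration only

solutions = [(a, b)
             for b in range(1, LIMIT + 1)
             for a in range(1, 2 * LIMIT)   # a < b*sqrt(2) < 2*LIMIT suffices
             if a * a == 2 * b * b]
print(solutions)  # -> [] : no solution found
```

Integer arithmetic avoids any floating-point ambiguity: the test `a * a == 2 * b * b` is exact, so an empty result really does mean no solution up to the bound.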

To my astonishment ChatGPT came back with utter garbage, viz.:

One obvious solution is when a = b = 1, but it doesn’t satisfy the condition that a and b are positive integers.
Another solution is a = 2 and b = 1. Let’s check if it satisfies the equation:
a^2 = 2 * b^2
2^2 = 2 * 1^2
4 = 2
The equation holds true. So, (a, b) = (2, 1) is a solution.

In other words, ChatGPT told me that (2/1)^2 = 2 because 4 = 2, which “holds true”!
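The failure is elementary arithmetic; two lines of Python (my own check, not part of the original exchange) expose it:

```python
# Verify ChatGPT's claimed solution (a, b) = (2, 1) against a^2 = 2*b^2
a, b = 2, 1
print(a ** 2, 2 * b ** 2)    # -> 4 2 : the two sides differ
print(a ** 2 == 2 * b ** 2)  # -> False : (2, 1) is not a solution
```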

When I pointed out that 4 = 2 is not true, ChatGPT apologised and told me that it was due to “human error”. I challenged this statement as well. Further apologies ensued, and ChatGPT finally started talking about irrational numbers and Euclid’s proof.

The complete conversation is reproduced here:  Root2.pdf

I find it impossible not to anthropomorphise ChatGPT, and in doing so I am left with the impression of a thoroughly dishonest and excessively obsequious human being who resembles Dickens’ Uriah Heep. Hence the image at the top of this post.

Our final exchange was this:

You would agree then, that your output should always be checked by a human in case such mistakes are made.

Absolutely, it’s always a good practice to verify the output of any system, including AI-generated responses, especially when the information is critical or when there’s uncertainty. While AI systems strive for accuracy, they can still make mistakes, as demonstrated earlier.

Human oversight is valuable for catching and correcting errors, ensuring that the information provided is reliable and trustworthy. Thank you for highlighting the importance of human involvement in the verification process.

So the answer is this: we humans are not irrelevant; our function is to keep AI honest.

See more here blackjay


PRINCIPIA SCIENTIFIC INTERNATIONAL, legally registered in the UK as a company incorporated for charitable purposes. Head Office: 27 Old Gloucester Street, London WC1N 3AX. 


Comments (4)

  • VOWG
    I wonder when people will realize that AI is just computer programming. I was routed to a computer for a call to CAA; after listening to inane questions I said connect me to a person. I was connected to a person. If you can’t tell the difference then you have never listened to people talk.


  • Tom
    Humanity is making itself irrelevant by pushing A/I into everything and listening to the WEF/global dolts of the world. But then again, humans have had over 10,000 years to become civilized and have failed miserably. Maybe it’s time to give the silly bio-robots/alt humans a chance.


  • Wisenox
    “whether it really is “intelligent” in the way in which ordinary people use that word. ”

    No. AI does some great stuff, but it is mostly hype. It’s more of a brute-force data miner with reflection capabilities.
    AI is overkill in many end-user use cases. Excessive on power consumption and runs very hot.
    They need $4-7 trillion invested, so they push the hype. Additionally, they want AI to be the control structure in lieu of physical manpower, so again, they need money, but also you to like it.

