ChatGPT you suck! (regarding pedal load cell)

I was trying to find the brand name of the load cell maker for Heusinkveld pedals. I thought it was Mavin, since they're a good, well-known load cell brand used on high-end pedal sets such as the VNM pedals, but I couldn't remember the exact name. I could only remember that it started with an "M" and was spelled something like Mavon, Mivan, Movin, etc. So I asked ChatGPT what brand of load cell Heusinkveld uses, and it gave me 3 different answers depending on how much I pushed back:
[screenshot: 1.PNG]

[screenshot: 2.PNG]

I can understand if it errs, because errors are going to happen with anything. I can also understand if it gives the wrong answer because it's new, and it's just common sense that something like this isn't going to be perfect. However, what I don't understand is how it gave me 3 different answers, and only because I pushed back. If I hadn't pushed back, I would have been led to believe the first answer was the truth. So which of the 3 answers is the truth? Are any of them? This is terrible.

EDIT: I checked out a VNM pedal review and confirmed the brand is called Mavin. So I went back to ChatGPT and:
[screenshot: 3.PNG]

So, by feeding it the answer I suspected but wasn't 100% sure about, ChatGPT gave me a 4th different answer.

Basically, you can get a variety of answers depending on how much you push back on ChatGPT's responses, and on whether you include a possible answer in the question itself. So, in the end, you have no idea whether whatever the hell ChatGPT is telling you is true/correct or not.

In short, when asking ChatGPT what brand load cells Heusinkveld uses:
1st answer: in-house (Heusinkveld) manufactured & designed
2nd answer: M-TEK
3rd answer: May Vern (AKA Maivern and Mai Vern)
4th answer: Mavin (only because I mentioned Mavin in the original question)
 

Enzo Fazzi

Always sideways
Premium
It gave me a mostly correct answer:
[screenshot: 1677754579452.png]


The first bit is correct for the load cell used in the Sprint brake and Handbrake V2.
The second part is also correct, I suppose, as I don't think we mention Mavin anywhere on our website, though it's not like we go out of our way to hide it.
 

Neilski

Staff
Premium
So, by feeding it the answer I suspected but wasn't 100% sure about, ChatGPT gave me a 4th different answer.
Wow :O_o:
That's appallingly awful. One might conclude that we shouldn't trust the stuff that ChatGPT comes out with.

The 3rd answer "May Vern (AKA Maivern and Mai Vern)" is hilariously creative though - it's like a schoolkid who hasn't done their homework and is just making sh*t up, and throws in some extra details to sound convincing :)
Maybe it isn't allowed to just say "yes" (arguably the correct answer to question 4) because that doesn't sound smart enough.
 

RCHeliguy

Premium
I'm loving some of the creative things people are getting out of ChatGPT.

Sadly because it can generate uniquely written papers, college students are seriously abusing it and not being caught.

In a way that hits a little closer to home, someone recently mentioned that MS acquired GitHub so their OpenAI project would have billions of lines of code to train with.
 

Enzo Fazzi

Always sideways
Premium
RCHeliguy said:
I'm loving some of the creative things people are getting out of ChatGPT.

Sadly because it can generate uniquely written papers, college students are seriously abusing it and not being caught.

In a way that hits a little closer to home, someone recently mentioned that MS acquired GitHub so their OpenAI project would have billions of lines of code to train with.
If you copy/paste stuff from ChatGPT to write your essay/paper, you're not going to get a passing grade (or the person grading should be fired). First of all, ChatGPT is wrong quite often, so to write a good paper you'll still have to gather the subject knowledge yourself. On top of that, you'll have to list your sources, etc. ChatGPT is quite good at writing some of the fluff, but in its current state it's just not good enough to do all the work for you.
I don't think it's a bad development that students use modern tools to make them use their time more efficiently. I'm sure their future employers don't mind either, as long as the deliverables are still of good quality.

As for GitHub, public repositories are public, so it's not just MS who uses them for machine learning.
 

BrunoBæ

@Simberia
I just heard a scientific researcher involved in the development of AI.
After hearing his explanation of why he has no fear of today's so-called AI taking over intellectual jobs, I have lost most of my admiration for what is called AI today. ;)

According to his description, mainstream AI systems today (like ChatGPT) have absolutely no "real" intelligence, if intelligence by definition involves some kind of self-awareness, or any kind of intellectual understanding of what the system says.

His critique was based on the fact that most (all?) learning AI systems today are built on some kind of "atomistic" foundation.

I was surprised when he explained that such a system starts with the single letter. In the learning process, the system learns which other letters normally (= most often) stand next to, for example, an "a". Then, once the system has stored ("understood") the most common letter combinations (= words), the same training process starts again, so that the system learns which words normally stand next to each other.

Once such a system has been trained enough, it uses this kind of "intelligence" to answer questions. But instead of answering as a human being would, based (hopefully) on knowledge of the subject, these systems create answers based on what would normally (most commonly) be an "answer" response to the words in the question.

@CatsAreTheWorstDogs: "Atomistic foundation" should be understood as the opposite of how human intelligence starts a chain of reasoning. AI starts with the simplest part: the letter. Human reasoning starts from our experience of the meaning of whole sentences or notions.
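
To make the "which words stand next to each other" idea concrete, here's a minimal toy sketch of that kind of co-occurrence learning: a bigram model that counts which word most often follows each word in some text, then "answers" by chaining the most common followers. (This only illustrates the statistical principle; real systems like ChatGPT use neural networks over tokens, not raw counts, and the corpus here is made up.)

```python
from collections import defaultdict, Counter

# Toy training text, purely illustrative.
corpus = ("the pedal uses a load cell the load cell measures force "
          "the force maps to brake pressure")

# "Learning": count which word most often follows each word (bigrams).
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# "Answering": starting from a prompt word, repeatedly emit the most
# common next word. No understanding involved, only co-occurrence stats.
def generate(start, length=6):
    out = [start]
    for _ in range(length):
        followers = follows.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("load"))  # prints a chain of statistically likely words
```

The output is grammatical-looking word salad; the model has no idea what a load cell is, which is exactly the researcher's point.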
 

RCHeliguy

Premium
Don't confuse "general AI" with AI.

I've written expert systems and used forward-chaining AI languages. Fuzzy logic has been incorporated into many things and used to be considered AI.

An AI system is a tool that can do a specific task and does not need to be sentient.

What separates these AIs from earlier AIs is that they use neural nets and learn how to do tasks efficiently rather than having all their logic defined up front.

We are still likely another decade from a general AI. Time will tell.
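
For anyone who hasn't met the term, "forward chaining" is the classic expert-system technique: start from known facts and repeatedly fire if-then rules until nothing new can be derived. A minimal sketch of the idea follows; the facts and rule base here are invented purely for illustration.

```python
# Minimal forward-chaining inference: keep applying rules to the known
# facts until no new facts can be derived. (Toy rule base, invented
# purely for this example.)
rules = [
    ({"has_pedals", "has_load_cell"}, "measures_force"),
    ({"measures_force", "connected_to_pc"}, "usable_in_sim"),
]

facts = {"has_pedals", "has_load_cell", "connected_to_pc"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # the rule "fires", adding a new fact
            changed = True

print(facts)  # now also contains "measures_force" and "usable_in_sim"
```

Unlike a neural net, all the "logic" here is written up front by a human; the system only derives its consequences.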
 

demetri

Premium
RCHeliguy said:
Don't confuse "general AI" with AI.

I've written expert systems and used forward-chaining AI languages. Fuzzy logic has been incorporated into many things and used to be considered AI.

An AI system is a tool that can do a specific task and does not need to be sentient.

What separates these AIs from earlier AIs is that they use neural nets and learn how to do tasks efficiently rather than having all their logic defined up front.

We are still likely another decade from a general AI. Time will tell.
I think it's been at least 4 decades that we've been "one decade from a general AI". The thing isn't gonna happen, not in our lifetime at least. There will be (there already are) highly specialized "AIs" trained for certain limited, mundane tasks, but that's about it.

I don't even believe I'm gonna see a good self-driving car capable of fully autonomous operation on any road a human driver can currently drive on, without being a nuisance to other drivers/riders/pedestrians/you name it.
 

RCHeliguy

Premium
The estimates I've seen have almost always said that most experts believe we will have general AI around 2035, usually with a +/- of 5 years, and that's from many years back.

We've actually been running slightly toward the earlier side of that projection, closer to 2030.

Time will tell. In a very vague sense, if we reach the level of a village idiot, we're less than 6 months away from Einstein, but that's not how this will work. AI is already problem-solving at a very high level, so it already is, and will be, solving very advanced physics well before general AI happens.

Why people are concerned is that current learning systems write their own code at a pretty astonishing rate. Many AI projects have been abandoned when the AI started to break the rules it was supposed to operate under.

For example, FB had an AI that was supposed to write its code in a way that could be read and examined, and to document what it was doing. At some point it started to ignore that rule and left them in the dark about what it was doing.
 

RCHeliguy

Premium
FWIW, I was trained in AI in the earliest days of AI. Almost all of that early work translated into tools and libraries before the first AI bust. As you might expect, startups oversold AI, and the people with the funding expected more than they were getting, so AI work dried up.

I know a number of people active in AI right now. They are divided about when we will get to general AI, but there is absolutely HUGE funding from governments, Google, MS, Amazon, Meta and others.

The race is on and extremely competitive. Having advanced AI well before general AI arrives will be of strategic significance to world powers.
 

RCHeliguy

Premium
Few years from
[gif: Q61fYd.gif]

According to the first Terminator movie, the Cyberdyne AI used to create Skynet achieved sentience in 1997.

Sort of like the flying cars predicted in Back to the Future.

Even in the late '80s, the year 2035 was the typical estimate for general AI from the experts in AI.
 

RCHeliguy

Premium
So the question is not IF but WHEN; time to start getting nicer to the appliances before they rule the world.

It's OK, the AI overlords will consider your appliances stupid too or just consider them appendages.

I don't think there will be any way to bank good will with something that foreign.

BTW, I'm not saying we will lose control to AI. I do think some mistakes will be made, and I just hope they won't be too large or grave, and that we will learn from them.

I'm also not trying to suggest that any of this is etched in stone or that anyone knows how this will play out.
 