🇺🇸 “Baadel a waader” 🇺🇸
✅ Math is hard
❌ This math is hard
And again…
You’ve just copied my arguments yet again.
Seek help, your projections are concerning.
You don’t really have one lol. You’ve read too many pop-sci articles from AI proponents and haven’t understood any of the underlying tech.
All your retorts boil down to copying my arguments because you seem to be incapable of original thought. Therefore it’s not surprising you believe neural networks are approaching sentience and consider imitation to be the same as intelligence.
You seem to think there’s something mystical about neural networks but there is not, just layers of complexity that are difficult for humans to unpick.
You argue like a religious zealot or Trump supporter because at this point it seems you don’t understand basic logic or how the scientific method works.
Once again not offering any sort of valid retort, just claiming anyone that disagrees with you doesn’t understand the field.
I suggest you take a cursory look at how to argue in good faith, learn some maths and maybe look into how neural networks are developed. Then study some neuroscience and how much we actually comprehend about the brain, and maybe then we can resume the discussion.
You obviously have hate issues
Says the person who starts chucking out insults the second they get downvoted.
From what I gather, anyone that disagrees with you is a tech bro with issues, which is quite pathetic to the point that it barely warrants a response but here goes…
I think I understand your viewpoint. You like playing around with AI models and have bought into the hype so much that you’ve completely failed to consider their limitations.
People do understand how they work; it’s clever mathematics. The tech is amazing and will no doubt bring numerous positive applications for humanity, but there’s no need to go around making outlandish claims like they understand or reason in the same way living beings do.
You consider intelligence to be nothing more than parroting which is, quite frankly, dangerous thinking and says a lot about your reductionist worldview.
You may redefine the word “understanding” and attribute it to an algorithm if you wish, but myself and others are allowed to disagree. No rigorous evidence currently exists that we can replicate any aspect of consciousness using a neural network alone.
You say pessimistic, I say realistic.
We know Google Translate gets things wrong sometimes so I was just wondering if Russia means “Special” Military Operation in the same way the Americans mean “Special” Olympics?
Possible, yes. It’s also entirely possible there are interactions we have yet to discover.
I wouldn’t claim it’s unknowable. Just that there’s little evidence so far to suggest any form of sentience could arise from current machine learning models.
That hypothesis is not verifiable at present as we don’t know the ins and outs of how consciousness arises.
Then it would logically follow that all the other functions of a human brain are similarly “possible” if we train it right and add enough computing power and memory. Without ever knowing the secrets of the human brain. I’d expect the truth somewhere in the middle of those two perspectives.
Lots of things are possible; we use the scientific method to test them, not speculative logical arguments.
Functions of the brain
These would need to be defined.
But that means it should also be reproducible by similar means.
Can’t be sure of this… For example, what if quantum interactions are involved in brain activity? How does the grey matter in the brain affect the functioning of neurons? How do the heart/gut affect things? Do cells which aren’t neurons provide any input? Does some aspect of consciousness arise from the very material the brain is made of?
As far as I know all the above are open questions and I’m sure there are many more. But the point is we can’t suggest there is actually rudimentary consciousness in neural networks until we have pinned it down in living things first.
You say maybe there’s not much to understand about the brain, but I entirely disagree: it’s the most complex object in the known universe and we haven’t discovered all of its secrets yet.
Generating pictures from a vast database of training material is nowhere near comparable.
…or you might not.
It’s fun to think about, but we don’t understand the brain well enough to extrapolate AIs in their current form to sentience. Even the “parts” of the mind you mention are not clearly defined.
There are so many potential hidden variables. Sometimes I think people need reminding that the brain is the most complex thing in the universe; we don’t fully understand it yet, and neural networks are only loosely based on the structure of neurons, not an exact replica.
I’d appreciate it if you could share evidence to support these claims.
Which claims? I am making no claims other than AIs in their current form do not fully represent what most humans would define as a conscious experience of the world. They therefore do not understand concepts as most humans know it. My evidence for this is that the hard problem of consciousness is yet to be solved and we don’t fully understand how living brains work. As stated previously, the burden of proof for anything further lies with yourself.
What definitions? Cite them.
The definition of how a conscious being experiences the world. Defining it is half the problem. There are no useful citations as you have entered the realm of philosophical debate which has no real answers, just debates about definitions.
Explain how I’m oversimplifying, don’t simply state that I’m doing it.
I already provided a precise example of your reductionist arguing methods. Are you even taking the time to read my responses or just arguing for the sake of not being wrong?
I’ve already provided my proof. I apologize if I missed it, but I haven’t seen your proof yet. Show me the default scientific position.
You haven’t provided any proof whatsoever because you can’t. To convince me you’d have to provide compelling evidence of how consciousness arises within the mind and then demonstrate how that can be replicated in a neural network. If that existed it would be all over the news and the Nobel Prizes would be in the post.
If you have evidence to support your claims, I’d be happy to consider it. However, without any, I won’t be returning to this discussion.
Again, I don’t need evidence for my standpoint as it’s the default scientific position and the burden of proof lies with yourself. It’s like asking me to prove you didn’t see a unicorn.
Have you ever considered you might be, you know, wrong?
No sorry you’re definitely 100% correct. You hold a well-reasoned, evidenced scientific opinion, you just haven’t found the right node yet.
Perhaps a mental gymnastics node would suit sir better? One without all us laymen and tech bros clogging up the place.
Or you could create your own instance populated by AIs where you can debate them about the origins of consciousness until androids dream of electric sheep?
Bringing physically or mentally disabled people into the discussion does not add or prove anything, I think we both agree they understand and experience the world as they are conscious beings.
This has, as usual, descended into a discussion about the word “understanding”. We differ in that I actually do consider it mystical to some degree as it is poorly defined and implies some aspect of consciousness to myself and others.
Your definitions are remarkably vague and lack clear boundaries.
That’s language for you I’m afraid; it’s a tool to convey concepts that can easily be misinterpreted. As I’ve previously alluded to, this comes down to definitions, and you can’t really argue your point without reducing the complexity of how living things experience the world.
I’m not overstating anything (it’s difficult to overstate the complexities of the mind), but I can see how it could be interpreted that way given your propensity to oversimplify all aspects of a conscious being.
This is an argument from incredulity, repeatedly asserting that neural networks lack “true” understanding without any explanation or evidence. This is a personal belief disguised as a logical or philosophical claim. If a neural network can reliably connect images with their meanings, even for unseen examples, it demonstrates a level of understanding on its own terms.
The burden of proof here rests on your shoulders, and my view is certainly not just a personal belief; it’s the default scientific position. Repeating my point about the definition of “understanding”, which you failed to counter, does not make it an argument from incredulity.
If you offer your definition of the word “understanding” I might be able to agree as long as it does not evoke human or even animal conscious experience. There’s literally no evidence for that and as we know, extraordinary claims require extraordinary evidence.
I agree, there is no formal definition for AGI, so it’s a bit silly to discuss that really. Funnily enough, I inadvertently wrote a nearest neighbour algorithm to model swarming behaviour back when I was an undergrad and didn’t even consider it rudimentary AI.
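(For the curious: a nearest-neighbour swarming model of the kind mentioned above can be sketched in a few lines. This is purely illustrative — the original undergrad code isn’t shown here, and the specific update rule, where each agent copies the heading of its single nearest neighbour and moves forward, is just one common simple variant.)

```python
import math

def nearest_neighbour_step(positions, headings, speed=1.0):
    """One update of a toy swarming model: each agent adopts the
    heading of its nearest neighbour, then moves one step forward."""
    new_headings = []
    for i, (x, y) in enumerate(positions):
        # Find the nearest *other* agent by squared Euclidean distance.
        nearest = min(
            (j for j in range(len(positions)) if j != i),
            key=lambda j: (positions[j][0] - x) ** 2 + (positions[j][1] - y) ** 2,
        )
        new_headings.append(headings[nearest])
    # Move every agent forward along its new heading.
    new_positions = [
        (x + speed * math.cos(h), y + speed * math.sin(h))
        for (x, y), h in zip(positions, new_headings)
    ]
    return new_positions, new_headings

# Three agents on a line: the two close together align with each other,
# and the distant one aligns with the middle agent.
pos = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
hdg = [0.0, math.pi / 2, math.pi]
pos2, hdg2 = nearest_neighbour_step(pos, hdg)
```

Iterating this rule tends to align headings locally, which is roughly how flocking-style behaviour emerges from purely local interactions.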
Can I ask what your take is on the possibility of neural networks understanding what they are doing?
No one is moving goalposts, there is just a deeper meaning behind the word “understanding” than perhaps you recognise.
The concept of understanding is poorly defined which is where the confusion arises, but it is definitely not a direct synonym for pattern matching.
That last sentence you wrote exemplifies the reductionism I mentioned:
“It does, by showing it can learn associations with just limited time from a human’s perspective, it clearly experienced the world.”
Nope that does not mean it experienced the world, that’s the reductionist view. It’s reductionist because you said it learnt from a human perspective, which it didn’t. A human’s perspective is much more than a camera and a microphone in a cot. And experience is much more than being able to link words to pictures.
In general, you (and others with a similar view) reduce the complexity of words used to describe consciousness, like “understanding”, “experience” and “perspective”, so they no longer carry the weight they were intended to have. At that point you attribute them to neural networks, which are just categorisation algorithms.
I don’t think being alive is necessarily essential for understanding, I just can’t think of any examples of non-living things that understand at present. I’d posit that there is something more we are yet to discover about consciousness and the inner workings of living brains that cannot be fully captured in the mathematics of neural networks as yet. Otherwise we’d have already solved the hard problem of consciousness.
I’m not trying to shift the goalposts, it’s just difficult to convey concisely without writing a wall of text. Neither of the links you provided are actual evidence for your view because this isn’t really a discussion that evidence can be provided for. It’s really a philosophical one about the nature of understanding.
Yes you do unless you have a really reductionist view of the word “experience”.
Besides, that article doesn’t really support your statement, it just shows that a neural network can link words to pictures, which we know.
Well it was a fun ruse while it lasted.
Cavity protection ain’t gonna cut it where they’re going.