Philosopher born before modern neuroscience starts going on about "the mind"

>philosopher born before modern neuroscience starts going on about "the mind"

Attached: istockphoto-153017150-1024x1024.jpg (683x1024, 265K)

External reality and thus the scientific method are just hypotheses in your mind

Modern neuroscience is just old philosophical speculative statements about the way the mind works, reworked into contemporary mumbo-jumbo popular in neurology/psychiatry circles about the parts of the brain, serotonin-dopamine, and stuff like that.
Anybody who claims that it's impossible to embrace both a philosophical view of the mind and scientific materialism is limited in terms of perspective and excessively dualistic.
Most of Yea Forums is autistic, so I wouldn't be surprised if others didn't agree with me.

>Most of Yea Forums is autistic
don't know if projection or misuse of memespeak

The mumbo jumbo only exists because we don't have computers and algorithms that are efficient enough to solve the problem.
And while we don't have them yet, at least we understand that mind is a product of the material world.

How do we know that(asking out of curiosity)?

People asking questions is gay as fuck?

People asking questions is fine, but motherfuckers like the Brahmin who wrote the Upanishads or Spinoza trying to guess stuff about the way the mind or the universe works without a shred of evidence, and then claiming it to be irrefutable truth, sound contemptible.

Yeah don't you just hate people who jump to conclusions about the material without reading it first? Beneath contempt, I'd say.

>we understand that mind is a product of the material world.

Attached: 1563744183554.gif (390x285, 1.74M)

My conclusion is I like you.

Human beings established belief systems to explain phenomena thousands of years before the technologies existed to (in)validate them? Whoopidy fucking 23 skidoo.

I've read dozens of books on philosophy, and I find no more of a reason to believe Spinoza than I would to believe fantasy fiction.
Humans had eyes, ears, and hands for hundreds of thousands of years before philosophical induction or deduction were first thought of.

>I've read dozens of books on philosophy, and I find no more of a reason to believe Spinoza than I would to believe fantasy fiction.
Pity he squares with neuroscience then.

The material world is a product of the mind.

>evidence
Part of the mind. Something addressed and explored when talking about the mind. Bro how do you fail to into basic philosophy. Brain doesn't map to mind. Mind doesn't map to brain.

>believe

>on
>philosophy

So how is Spinoza wrong about the mind? You've read him, right?

>itt brainlets

The worship of STEM and science is modern peasantry

> Bruhmin who wrote the Upanishads or Spinoza tryin to guess stuff about the way the mind or the universe works without a shred of evidence
The Upanishads are revealed texts, not the product of guesswork

neuroscience suck a fat one tell me about the way out of here or fuck my ass

>we don't understand how consciousness and free will works
>meanwhile, machines are already making autonomous decisions
The sad part is we still don't know how it works, not even in the machines we build. Philosophy asks good questions, but its answers are ridiculous, especially the p-zombie argument ending the question with "machines can't think".

GRUG SAYS DESTROY CIRCLE WHEEL NOW, SQUARE WHEEL WORK SAME, SQUARE WHEEL EASY TO MAKE WITH PURE REASON

>he thinks machines are sentient


hahahahah oh no no no

Why can't machines be sentient? Aren't nematode worms? Aren't chimps? Aren't women?

>b-but sentience is capacity for reason and self-awareness

Machines cannot produce art at a level that suggests sentience.

There's a critical difference between sensation and mere input... A true intelligence needs to be able to feel some way about its experience -- something that might only be possible via biological processes (I'm not certain of that, but I suspect you need the rich, dynamic playground of organic chemistry for things like sensation to emerge).

Nor can women, or even most STEMlords for that matter. I'm using the philosophical definition of sentience - qualia, i.e. the ability to experience.

What is it like to be a bat? What is it like to be a nematode worm? What is it like to be a Go-playing AI, surviving for billions of generations pitted against itself? What is it like to be a woman?

>organic chemistry
Problem is, this is already computable for small lifeforms. Why should internal feedback be specific to DNA life? LSTM RNNs are built on internal feedback too. In brains, this is mediated by ion channel cascades - as far as biology goes, we have a fairly good understanding of how it works at the very low, nuts-and-bolts level - it's the high-level stuff we don't understand. It's the same as with DNA - we know how the very low-level encoding works, but we don't know how the insanely large regulatory networks expressed there fit together.
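For anons unfamiliar with the feedback point: a minimal sketch of a vanilla RNN step (the LSTM adds gating on top of the same idea) - the hidden state h is the internal feedback loop, folded back in at every step. All weights and sizes here are illustrative toy values:

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh, b):
    """One step of a vanilla RNN: the hidden state h is the internal
    feedback loop - each new state depends on all past inputs."""
    return np.tanh(x @ W_xh + h @ W_hh + b)

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(3, 4)) * 0.1   # input -> hidden weights (toy)
W_hh = rng.normal(size=(4, 4)) * 0.1   # hidden -> hidden feedback weights
b = np.zeros(4)

h = np.zeros(4)                        # initial state: no history yet
for t in range(5):
    x = rng.normal(size=3)             # external stimulus at time t
    h = rnn_step(x, h, W_xh, W_hh, b)  # new state folds in the old one
```

The hidden-to-hidden matrix W_hh is the whole trick: without it, each step would be a memoryless function of its input.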

Human artists in every field will be obsolete within the next 30 years

It's not just a matter of computation... In the end a simulation is just a simulation, or it would be the real thing. It's a question of whether a non-biological substrate can undergo the actual kinds of dynamic changes necessary to produce actual sensation (even just simple sensation, like the nematode's); if this were possible (I have yet to see anyone propose how), it would be the key to engineering sentience.

I'll believe it when I see it.

>women don't process qualia

You're reaching. Besides, you made my point for me: machines can only simulate the "art" at its most algorithmic. Think Thomas Kinkade.

Consciousness is substrate-dependent. You can't simulate metacognition, if you could, it would be metacognition.

What is this from

To solve what problem? As far as I know, we don't know enough about the brain to adequately pose the problem. Algorithms and computers don't mean diddly shit without a problem statement.

heh,...
why don’t you read kant and tell me more tomorrow

>we still don't know how it works, not even in machines we build

Lmao, no. The best AI currently is literally just linear algebra, probability and calculus mah dude.

>algebra, probability and calculus
And brains are just chemistry. It's not about the LA, but the resulting model we don't understand. Intuitively it shouldn't work, but it does.
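The "just linear algebra, probability and calculus" claim is easy to make literal. A minimal sketch of a single logistic neuron learning AND by gradient descent - the forward pass is matrix products, the update rule is a derivative, nothing else (toy data, all values illustrative):

```python
import numpy as np

# Tiny logistic "neuron" learning AND: the forward pass is linear
# algebra, the update rule is calculus (gradient of the log-loss).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)
b = 0.0
lr = 0.5

for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid(Xw + b)
    grad_w = X.T @ (p - y) / len(y)         # dLoss/dw
    grad_b = np.mean(p - y)                 # dLoss/db
    w -= lr * grad_w
    b -= lr * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

The interesting part is exactly what the other anon says: the arithmetic is trivial, the learned weights are where interpretation gets hard at scale.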

Organizational invariance is indeed at the heart of the contention. Why are simulated atoms suddenly unconscious, just because they live in a separate (simulated) sub-universe?

Furthermore, even in our reality, shouldn't you be able to experience every atom of yourself? You clearly don't, you filthy p-zombie. What we experience is external stimuli and only the extremely high-level functions of our body, with zero experience of the actual substrate - the actual chemistry machine "running" us.

The onus is on you: why should 1s and 0s simulating the behavior of an atom in a virtual environment be equivalent to the things themselves?

They aren't - one is real, one is virtual. But their behavior is equivalent. Is consciousness a thing, or a behavior (in either case, including self-perception via in-universe feedback)? Reminder that this argument isn't about what is real and what is virtualised (the universe it lives in), but whether consciousness can even exist as a virtual entity.

Because computers don't do philosophers any favors - virtual universes exist where entities resembling primitive organisms can be simulated and behave the *exact* same as the real thing. They aren't zombies with "preprogrammed" responses, they're copies of the real thing (granted, we can barely manage the level of a single bacterial cell, but still).

It is behavior + substrate, reducible to neither.

Spinoza literally got the mind right though and is basically consistent with neuroscience. Suck my dick his answer to the mind body problem is the only one that makes sense.

If you knew anything about philosophy of mind you'd know that Kant gets btfo.

The surest sign of an internet dilettante is when they invoke old philosophers like they're still any sort of authority.

Ok. Why is organic chemistry a superior substrate to transistors? Why should a robot need feedback for every transistor gate? Why wouldn't its physical arms, eyes and senses be enough - just as they are for you? Why don't you have feedback for every atom of yourself?

I don't know, it's something inhering in neurons themselves, and without getting into my occult reasons, a machine is more programmed, more determined, than an organism. An organism waits on nature for instructions, a machine on an organism. Besides, like I said, no machine has ever produced something of a quality that's convinced me there is a real, living awareness inside it.

Well, since you're not a dilettante, how does Kant get btfo?

Attached: kant.png (641x350, 64K)

> machine is more programmed, more determined
Machine intelligence is not deterministic, it's a black box model; humans design only the substrate (hyperparameters) for it, but have no means of knowing how it arrives at its intuitions in the end. The local minima constraints are strange attractors of the corpus/solution space. It's not programmed, it's emergent.

>no machine has ever produced something of a quality that's convinced me there is a real, living awareness inside it
You're talking about reason. Consciousness is much below it. In terms of reason, machines can't indeed do much yet.

>The sad part we still don't know how it works, not even in machines we build.
Your lack of knowledge is embarrassing. Of course we know how the AI systems we build work.

>Machine intelligence is not deterministic
Bullshit. You couldn't derive the gradients if it weren't deterministic, and gradients are the basis of current AI research
>it's a black box model, humans design only the substrate (hyperparameters) for
The design of the parameter space lies implicit in the choice of training data and network design
>but have no means of knowing how it makes its intuitions in the end.
What is latent space analysis - the post
>The local minima constraints are strange attractors of the corpus/solution space.
Back to mumbo jumbo we go
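The determinism claim is trivially checkable, for what it's worth: with fixed inputs and weights, the same loss-and-gradient computation returns bit-identical results every time - the "black box" part is interpreting the trained model, not predicting the arithmetic. A minimal sketch with made-up toy data:

```python
import numpy as np

def loss_and_grad(w, X, y):
    """Squared-error loss of a linear model and its exact gradient."""
    err = X @ w - y
    return float(err @ err), 2.0 * X.T @ err

# Fixed seed -> fixed data and weights on every run.
rng = np.random.default_rng(42)
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = rng.normal(size=3)

l1, g1 = loss_and_grad(w, X, y)
l2, g2 = loss_and_grad(w, X, y)  # identical inputs, identical outputs
```

Same function, same floats, same bits - nondeterminism in practice comes from things like unseeded initialization or data shuffling, not from the gradient math itself.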

ASS

>You're talking about reason. Consciousness is much below it. In terms of reason, machines can't indeed do much yet.

Not even that: even that essay-bot generator can't produce something that you can tell came from a mind that knew what it was writing. It might wow the plebs, but not people who have done their homework. Try it for yourself: insert a philosophical prompt, even a line from a real work, and watch it go.

Is the joke that your image demonstrates the end of any evidence of the mind after its physical destruction?

>training data and network design
Existence of an initial state doesn't imply it's predictable.
>The design of the parameter space lays implicit in the choice of training data and network design
Have you ever tried it? The choice is more akin to voodoo. If the system were fully white box, we wouldn't do that. We'd simply use the optimal choice.
>What is latent space analysis - the post
It's used to reduce dimensionality to fit the topic. It doesn't tell you *why* certain vectors emerged in the model in the first place, only which ones are important.

>Existence of initial state doesn't imply it's predictable.
Funny that you don't address my argument that deriving gradients requires determinism, which refutes exactly this
>Have you ever tried it? The choice is more akin to voodoo. If the system was fully white box, we wouldn't do that. We'd simply use the most optimal choice.
Yes, I have tried that, in the context of transfer learning. Speaking of voodoo shows your lack of knowledge, user; this is Wired-tier language for describing the training of networks.
>Is used to reduce dimensionality to fit topic. It doesn't tell *why* certain vectors emerged in the model in the first place, it only determines those which are important.
It is not only used for dimensionality reduction but for understanding how and why a network works in a specific way. Did you just google latent space analysis and skim the results? Because it clearly sounds like that.
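For reference, the simplest thing "latent space analysis" can mean is PCA: on toy data that really lives on one hidden axis, the top principal component recovers that axis, so dimensionality reduction and interpretation are the same computation viewed two ways. A sketch with invented data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 points that really live on ONE latent axis, embedded in 3-D
# along a known direction, plus a little noise.
latent = rng.normal(size=(200, 1))
direction = np.array([[3.0, 2.0, 1.0]])
X = latent @ direction + 0.05 * rng.normal(size=(200, 3))

# PCA via SVD of the centered data: principal components are the
# rows of Vt, singular values give the variance along each one.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)  # fraction of variance per component
```

Here explained[0] is close to 1 and Vt[0] points (up to sign) along the planted direction - the "latent" axis is read straight out of the model, which is the interpretability use the anon above is describing.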

>the mind is a product of the material world
This is your brain on america

IRREDUCIBLE MIND

There is no equivalence. If one could simulate the processes of a brain down to the smallest detail, it would still be a simulation of those processes and not the processes themselves. You can't separate behaviour from processes -- without them, the behaviour itself is simulated. That is the essence of virtuality... It's fundamentally detached -- an abstraction of what is modelled. If we could engineer a feeling being, it wouldn't be virtual, as its substrate (whatever that was) would be an integral part of the processes. Computation alone can't feel itself.

Our experience is an emergent phenomenon... Emergence is dependent upon substrate. I don't see how lack of granular awareness poses any problem there.

Carbon can form a crazy myriad of stable chains -- stable enough, yet with incredible dynamic potential. This is why all life is likely carbon based, and by extension why sensation depends on this chemistry as well. Computer substrates -- while much more rapid in operation than biological ones -- do not have the same potential for a huge variety of molecules and interactions between them, which I think is key to emergent phenomena like sensation.

>We still can't solve it
>But I'm gonna give you the answer anyway because that is science
Fuck off

When will they even learn?

to effectively work with the mind you have to reach into the realms of art and aesthetics.
to describe and dissect it like a dead thing is all well and good but it's wholly a destructive and reductive activity

CAN YOU GOOGLE WHO DAVID FUCKING CHALMERS IS BEFORE POSTING YOUR SCIENCIST NONSENSE HERE
CAN YOU REALIZE THAT A GUY WHO TAUGHT THAT EVERYTHING WAS MADE OUT OF WATER WAS ONE OF THE LEADING ANCIENT PHILOSOPHERS

>low IQ philosopher
dude lmao neuroscience doesn't matter you can't understand consciousness lolololol

>high IQ philosopher
Neuroscience reveals more effective strategies to live our values, and explains why we value certain things, but philosophical inquiry is still required to provide a structure around our values that makes them applicable to the world in a consistent way and in a way consistent with our values regarding procedural fairness etc.

Neuroscience does not replace philosophy, but rather focuses and enhances its potential to affect people's behaviour. Neuroscience can give people the power to overcome their own flaws and live their values in a way they have never been able to before. It stands to reason that now more than ever we need philosophers to build rigorous structures to support a new world of empowered individuals newly capable and desirous of acting ethically.

This isn't just limited to neuroscience, either. We have the technology now to keep track of our food so that we can know the conditions in which it was produced and then judge whether that method of food production is consistent with our values. People are capable of doing that now, and they're interested in doing so. The new role of philosophers is to connect the dots to construct value frameworks that people can actually live.