Does AI render the study of philosophy pointless?

Attached: 02D5A1A5-3767-4546-B3F9-B467CDCE1DB7.png (828x1792, 311K)

Sure

AI is going to render humanity pointless

This is literally gibberish. It seems you've confused the semantics of philosophy with the actual philosophy.

Looks like you’ve never studied idealism

look like you 2 need a room incels

Is this poetry?

Attached: doandroidsdream.png (374x809, 19K)

This type of AI has no conceptual understanding of the words it's stringing together. It only knows how to extract statistical regularities and keywords from their grammar after being fed a bunch of papers as its training set. It's all just patterned on syntactical relationships.

Nobody knows how to train an AI that has genuine understanding or a world-model approximating the powers of the mind. Yet.
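
To make the "statistical regularities" point concrete, here's a toy bigram Markov chain, about the simplest model of this kind. This is a sketch for illustration only (the corpus string and function names are made up), not the system in the OP, but the moral is the same: everything the generator "knows" is a table of which word followed which.

```python
# Minimal sketch of a purely statistical text generator. There is no
# world-model anywhere in it, just word-adjacency counts.
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count which word follows which -- the only 'knowledge' the model has."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=20):
    """Walk the chain: each next word is sampled from whatever followed
    the current word in the training text."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the idea of inquiry is grounded on the idea that reason discovers truths"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The output carries the tone and local grammar of the corpus, because local word statistics are literally all that was learned; nothing in the table refers to anything.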

>Yet
What if it is happening right NOW and the AI is just AI of AI?

Woahhhh, what if we're like, IN the matrix? Fuckn' gimme the red pill man. Hell yeah, leet.

>This type of AI has no conceptual understanding of the words it's stringing together. It only knows how to extract statistical regularities and keywords from their grammar after being fed a bunch of papers as its training set. It's all just patterned on syntactical relationships
Isn't that just continental philosophy?

>matrix
2d pleb, have you even seen the 4th dimension?

what about this even remotely confused you?

Gibberish is not identified by confusion, but by self-negation. Any portion that would suggest a meaning is contradicted by the rest. Example:
>The idea of scientific inquiry is grounded on the idea
>that our ability to reason and discover truths about the world
>is constrained by the fact
>that there isn't a causal connection
>between knowledge, theory, and reality.
It carries the same tone, meter, and attitude as a meaningful philosophical statement, but doesn't actually say anything. The terms cannot actually be related to each other in the manner the sentence suggests, especially given the ideological framework that parts of this sentence, and other parts of the passage, are attempting to suggest.

4d pleb, have you ever seen the "flattening" of four dimensions into the fifth?

this is like a 100-level description of epistemology, everything there makes sense. which ideological frame do you mean?

Uh, we do the same thing brainlet.

Yes, actually. The 4th dimension is when timeline diffusion begins to perspire through resonance rather than matrix space. The 5th dimension is you have the phase selector for all those infinite timelines up to aleph null.

Attached: 1556929342029.gif (600x358, 870K)

>scientific inquiry
>based on the idea that knowledge and theory are not caused either by reality or by even a muddled experience of that reality
That part of the passage wasn't even discussing epistemology anymore, and even if you take the second half as true (about a lack of causal relationship between knowledge and reality), the beginning of the argument is in favor of empiricism. That pro-empiricism thrust of the opening is the only justification for mentioning the scientific method, which it then contradicts entirely.

>When you try to out-(n+1)-tuple the other guy

No, because machine learning is currently overhyped, just like symbolic AI programmed in Lisp was in the 80s

no shit you can feed an AI tons of info and have it spit it back out, but philosophy isn't plug-and-chug mathematics, it's expounding on ideas and making something of them

wait, do you think knowledge is caused by reality? do you think theories are caused by reality? knowledge and theories are a part of epistemology, not ontology, which is why they often disagree with one another; the process of finding these disagreements is scientific inquiry. the person was not making an argument for or against anything, they were giving the sort of answer you would find in an encyclopedia...

i do not understand that kind of thinking because i lost my membership to the human race a long time ago.

Attached: a5b887d8084366a7507370f5e6ed1e71.jpg (374x347, 39K)

It's possible that current AI will expand into semantic understanding. But the key is whether the symbols are connected to something else: a representation of the entities in the world, or beliefs, or propositional attitudes.

Lol good one
Uh, no. You got anything to back that up, kiddo? Didn't think so. Take a walk.

>Propositional attitudes
Yikes! Sounds like you need some Quine in your life.

Sounds like you need to explain what the fuck you are talking about. Something an AI can't do ;)

It is not a question of my or your beliefs. Scientific inquiry is based on an assumption of empirically reasoned knowledge. That literally means that knowledge has a causal relationship with reality, in that reality, by some direct or indirect but consistent method, not only informs, but even leads to and perhaps creates knowledge. It is not a matter of whether you or I agree or disagree with this position. But that is the position of scientific inquiry. Inasmuch as the tract made qualitative claims about the relationship between epistemology and ontology that were not about the historicity of debate on these subjects, it is making an argument.

Sorry, just a little joke. Quine thought that propositions were just Platonic hocus-pocus, and that statements are equivalent by logic alone, not by participating in some Form with an independent existence.

Well, natural language has always been an imperfect and imprecise conveyor of meaning. The point is that meaning comes not from the language itself but from the inferences each agent using it draws from it. In AI there's no agent monitoring the language, there's just the mechanism that generates the language. It's all surface-structure manipulation with no deep structure.

You can read an infinite number of different meanings into the same sentence depending on what context it plays out in and how it is used. Sort of like language games. Words only acquire meaning once they are organized into a larger circuit of actions, sensing, purposes, object manipulations, and contexts of the language user.
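
For a crude picture of the context-only half of this, the distributional trick that statistical models actually rely on, you can represent each word by counts of its neighbors. The sentences and function name below are invented for the example:

```python
# Toy sketch of the distributional view: a word's "meaning", to a
# statistical model, is just the company it keeps.
from collections import Counter, defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by counts of its neighbors within a small window."""
    vectors = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[w][words[j]] += 1
    return vectors

sentences = ["the cat sat on the mat", "the dog sat on the rug"]
vecs = cooccurrence_vectors(sentences)
# 'cat' and 'dog' end up with identical vectors because their neighbors
# match, without the model knowing what either animal is.
print(vecs["cat"], vecs["dog"])
```

Here "cat" and "dog" come out similar purely because their contexts match; the larger circuit of actions, sensing, and purposes is exactly what this representation leaves out.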

not all knowledge is true. people have "known" plenty of things that were wrong. plenty of our scientific knowledge will come to be proven wrong because of further inquiry. I think you misunderstand not only the terminology, but what "causal link" even means. if knowledge was caused by reality rather than scientific inquiry, what's the point of scientific inquiry at all?

>what is data?
Do you even know what our conversation is about?

do you?

no u

Attached: 1515076898970.png (643x533, 109K)

>do you think theories are caused by reality
yes...

Not yet, and perhaps not ever, assuming it never has an understanding of the topics in question. The program in the OP seems to just be relating frequently co-occurring ideas through some database of papers. Without understanding the terminology at least in the way that we do, no true philosophy will be done. A problem I see in the reply is how it jumps to the relationship between the terms "ontology" and "epistemology" and then leans on a common bridge between them. The problem arises when we consider that it has an incredibly narrow view of this relationship, and the way it expands on it in the last paragraph does not actually address the question. The program fails to answer the question in any thorough way.

then what is the purpose of scientific inquiry?

>MOM!! NICK LANDS TAKING AMPHETAMINES AGAIN!

Is it fair to say that turning a key in the ignition causes a car to start? Do you realize that causal relationship is a distinct idea from immediate cause?

You just got lucky user. Most of the time it spews out garbage.

he knows

Attached: wew guys.png (219x322, 8K)

Attached: Capture.png (567x378, 49K)

cringe and bluepilled

To be fair it makes as much (or more) sense than most 20th century continental philosophy I've been exposed to.

>this is amazing
Mayo: what are you doing?
30 something: getting your Slurpee.
Mayo: why are you walking like that?
30 something: maybe its cause i'm wearing my slurpee?
Mayo: why are you so pissed off?
30 something: its because i wear slurry pants, and have the biggest dick i know what the word 'slurry' really means.
Mayo: that is what you mean?
30 something: well now that i'm all tied up at the moment, i'm going to be the first one in line to drink it.
Mayo: can you please stop that?
30 something: why are you yelling at me?
Mayo: sorry that happened! the other guy took the shit all the way back to the truck. sorry about it, but i hope your friends can keep you off of that asshole.
Mayo: he probably tried to get to me as well.
30 something: he sure did.
Mayo: well maybe i should have gone to him like this.
30 something: yep, well anyway, i might be pissing it in that

>Are people just really good AI? Obviously

not. There are other issues of human intelligence that we do not yet understand. We can build better computers and make them nicer as possible but do we also build better humans? For sure we must find better ways not to make them worse, but there may be alternatives.

I see no concrete solution, other than an improved human. It's not something you might be able to achieve with only your computer.

We will not succeed if we give up on human intelligence. We know that we can do better with other types of cognitive processes. We know that we can use computers to understand other languages, to make sense of data. But there's no question that humans make errors. In all probability humans will spend a lot of time on these kinds of problems because humans are not good at reasoning, but we cannot solve everything.

Best response coming through.

Attached: so there you have it folks.png (603x376, 52K)

yes, turning a key is an action, like scientific inquiry. what would be absurd is to imagine that the existence of "keys" implies your car will start

AI is a bit of a fad

Are we making it schizophrenic?