Philosophy Hurts Your Head

The blog of a cranky Philosophy PhD Student from Newcastle, Australia.

It’s Udder madness

Posted by Dr Samuel Douglas on July 28, 2005

No, there is nothing about cows in this post. Or is there? Although “May contain Cows” is a little more problematic than “May contain Nuts”, you can never say for sure. Something for the vegetarians to think about, eh?

Not a lot happening news-wise in the Singularity stakes. Just the same old usual collection of people, most desperately trying to get there first, and thus ignoring other projects that interlink with their own. One example of this is the Cyc program, which is trying to create an A.I. with ‘common sense’ while completely ignoring projects such as The Semantic Web. Why is this important? Because Cyc’s designers have claimed that one day it will be able to harvest information from the web in an autonomous manner. The Semantic Web, which aims to make as much of the information on the web accessible and understandable to machines as possible, clearly should be of interest to Cyc. But from their website, you would think that XML never happened.

And so it goes on. Many philosophers of Language and Mind ignore the attempted developments in AI. All philosophers seem to ignore the rise of ‘The Singularity’ as a, or even the, Telos of humanity. Critique of all of these activities seems very thin on the ground. Are we doing the right research? Are the assumptions that this research is based upon sound, or even plausible? Is it the right thing to do? (Something that many researchers just assume.) Could we stop it even if we wanted to? Or is it that the social/economic conditions under which we exist will never allow it to happen? Will capitalism be compatible with these events unfolding? If not, what will that mean?

No one is asking these questions. And no one is trying to answer them.

To some extent I sympathise with those who argue that many questions like these are simply not important compared to actually trying to create human-level AI. But unless the foundations of the science involved are sound, you are not going to be building shit! If this is half as significant as some think it might be, do we really want a bunch of people such as the ASF at the helm? Or is this all ethnocentric navel gazing; a bunch of white people arrogantly trying to reincarnate God into the Machine while the rest of the world goes to hell around them?

Enough of this ranting and wallowing. I have important things to do, like learning set theory so I can use it to prove St Anselm wrong. I hope it works. The key is this: No Universal Set.
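For the curious, the “No Universal Set” result is the standard Russell-style theorem of ZF set theory; whether it does the intended work against Anselm is another matter, but the theorem itself goes like this (assuming only the Axiom of Separation):

```latex
% Suppose, for contradiction, that a universal set $V$ exists:
%   \forall x \, (x \in V).
% By the Axiom of Separation we may form the subset
\[
  R = \{\, x \in V : x \notin x \,\}.
\]
% Now ask whether $R$ is a member of itself. Since $R \in V$ holds
% automatically (everything is in $V$), the definition of $R$ gives
\[
  R \in R \iff (R \in V \wedge R \notin R) \iff R \notin R,
\]
% a contradiction. Hence no universal set exists in ZF.
```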

4 Responses to “It’s Udder madness”

  1. I agree with you that people aren’t giving AI and its likely consequences enough attention.

    I’m not sure what to make of the Singularity, but we don’t even have to look that far ahead. I think it’s plausible that, in 15 years, a $100,000 machine will be able to do what the average worker can do today. At that point, many employers won’t want us humans working in their establishments at any wage level greater than zero. We’re going to have to think about alternatives to traditional employment.

    At the same time, people need some sort of structure in their lives. Perhaps it’s lifelong learning. Maybe it’s a social credit system. I don’t know what the answer is. That’s to be expected, but someone should know.

    One day, if we survive long enough, we will integrate ourselves with machines. It is inevitable. We will not be able to resist the mental and physical attraction of technological enhancements. Who wouldn’t want to simultaneously understand every aspect of genetic engineering, high-energy physics, and the secret nuances hidden across the collected works of William Shakespeare? (Well, maybe not the Shakespeare…)

    The question is, what happens first? Do pure machines forever outpace us, or do cyborgs eventually dominate?

    And what would a cyborg future look like? When 99% of thinking is bean counting (thought-directed, automated lawn mowers, interplanetary terraforming machines, etc.), what will become of our emotional lives? After all, we can’t go around crushing planets just because we got up on the wrong side of the bed. That wouldn’t be Cricket. Maybe civilization will be like V’Ger, but with a spatially-small, almost irrelevant nugget of human utopia safely bottled up where it can’t hurt anyone.

    Okay, I’ll shut up now.

  2. Thanks for the comments. I just can’t help but feel that this is an area that is almost conspicuously free of critique from philosophical (and most other) disciplines. Is this a viable and likely future? What are the dangers if it happens? What are the dangers if it doesn’t? Are we going to piss away our environment in the hope that the AI will fix it for us? (Something that might not happen.)

    Too many questions and not enough people trying to answer them.

  3. michael said

    hello Nemo,
    I would just like to point to your “udder”ness and comment that you tell me off for such outlandishness.

    on singularity I think it would be damned practical if everything was placed in a unity. But if it were, and I quite agree, we would seem to be reproducing God, along with the problems of universal sets. It’s just that there’s only one of me, so I would like to feel like I can unify everything that I’m interested in. It seems however that my dream to be entirely me will never come to pass. I leave you feeling dejected (how ambiguous).

  4. Aha, I see: “No one is asking these questions. And no one is trying to answer them.”

    I am, hence I am “no one”, hence I am “nemo”.

    Michael, you have an unhealthy preoccupation with thinking ;-P

Leave a comment