rpuchalsky said:
“Dragons help you map the emptiness”. I wonder if that’s going to become part of “What Is Hitherby Dragons” later, or if it is only legend-true? I had been wondering in a previous comment about whether Martin might be a dragon, and I wasn’t quite sure how to distinguish dragons from other Hitherby entities. This phrase really does distinguish them from merins, say — dragons don’t help make sense of the world, they help to extend the world.
I really liked the rest of the story about Ink, too, but I’m going to have to think about this part for a while before I can think about the rest.
Egarwaen said:
I really like the Ink stories, partially because they’re so cool and surreal, and partially because Ink “deserves special mention, but won’t get any”, and thus might be important to the plot.
Also, I think that, so far, all the strange entities from legends have wound up in the canonical list.
Though I’m still not 100% sure what the purpose of the footsoldiers was.
Martin as a dragon does seem to make more sense than most other hypotheses about what Martin might be. But I don’t think he’s really helping to map the emptiness. Martin’s primary functions seem to be understanding and making the isn’ts ises – making the unreal real, the unintelligible intelligible. Not mapping the emptiness so much as giving it form – creating something from nothing. (Which, as you know if you’ve read philosophy, gave the Greeks quite a lot of headaches.)
Martin seems, by his very nature, to defy classification. He doesn’t seem to be a God or a Mortal – rather, he seems to be a God who was a Mortal but has defied his Godly nature, which is why the Monster does not appear to have power over him. (Monsters appear to have power over everything that can’t answer the question they pose – i.e., all Gods and Mortals.)
Metal Fatigue said:
Ink! Yay!
(not feeling very analytical today)
HedgeMouse said:
I find it very interesting that Ink is starting to meet GTTC players now, though in the beginning it was just her. I wonder if she’s getting closer to the Gibbelin’s Tower, or possibly the mysterious castle we hear about every now and then.
Also, if Emily won a dragon, where was it? Was Ink the dragon? She certainly is mapping something in her search, although whether or not it’s emptiness remains to be seen.
GoldenH said:
isn’t it important to do good even when it’s the wrong thing to do?
Scott Lutz said:
Thank you, Emily.
It matters that you popped all those demons.
mackatlaw said:
I used to be able to follow what was going on in Hitherby, but somewhere along the way I shifted too much of the analytical somewhere else. My mythic identification powers and secret-decoding abilities have been largely shelved or on back-order. Too much “common sense” and “navigate in the daily world” stuff is trying to occupy that space on the processing unit.
But Ink is just cool! I always wanted to explore places. I remember making a map as a kid of my favorite secret place behind a neighbor’s shed and hiding it behind a painting in my house. I always wanted to go somewhere else, but my places didn’t have people in them, not usually. They would probably have been pretty lonely if I’d actually gotten there and stayed.
So I don’t know what Ink is all about on the deeper and secret levels, or the personal symbology and language, or whatnot, but I love what she’s doing. She Explores. She Quests. She’s seeking the bottom level of hell, though why, I don’t know or I can’t remember. I want to buy more Hitherby books so I can figure it out, probably having something to do with the nature of suffering. Or I could reread the back posts and brush up on it! Yes.
I nominate her as my favorite tour guide for Hitherby.
Mack
mackatlaw said:
> isn’t it important to do good even when it’s the wrong thing to do?
If something is the wrong thing to do, then it’s not good. A logical contradiction means that either your idea of good is clashing with someone else’s, or that you’re actually wrong.
I think.
Mack
GoldenH said:
good and bad might be opposites, but not all opposites negate each other. Can I blame this on the English language?
I think scott has the right of it :)
Graeme said:
“It’s short for Incompatible”
I’m wondering if Ink’s telling the truth. It seems better than “the color of her hair”, “because she loves her books”, “Incandescent Universal Love Catherly”, “Incorrigible” or “Incarnate Breath of the Void Catherly”.
Incompatibility might explain her drive to leave the world and find hell. Of course, we’re told in the first Ink legend that she has some misconceptions about her nature and destiny: this belief in her incompatibility might be one of them.
GoldenH said:
or maybe she just doesn’t know what she’s incompatible with.
philomory said:
>> isn’t it important to do good even when it’s the wrong thing to do?
> If something is the wrong thing to do, then it’s not good. A logical contradiction means that either your idea of good is clashing with someone else’s, or that you’re actually wrong.
> I think.
> Mack
‘Good’ is the opposite of ‘bad’, not of ‘wrong’.
The opposite of ‘wrong’ is ‘right’.
In general, if I remember my terminology correctly, situations/states of the universe can be good or bad, while actions are right or wrong. In which case, the question isn’t worded quite properly (if you want to be technical, which normally you don’t need to be). So which question are you asking?
1) Isn’t it important to do what has good consequences even when it’s the wrong thing to do?
or
2) Isn’t it important to do the right action even when it doesn’t have good consequences?
Even with those wordings, it seems a little strange, but in the end the two come down, more or less, to these:
1) Isn’t consequentialism, particularly utilitarianism, correct?
2) Isn’t some non-consequentialist system of ethics, such as perhaps one relying on agent-centered restrictions, correct?
Consequentialism is not very popular, but logically it’s actually very compelling, and consequentialist refutations of other systems are often quite strong.
I hope you’ll forgive the interruption. I recently received a philosophy degree and I just can’t seem to stop thinking this way.
GoldenH said:
when I said that, I was mostly pointing out the discontinuity between philosophical beliefs. In my world view, “good” and “bad” are with respect to moral beliefs, while “right” and “wrong” are questions of justice and law.
For an example of when something can be “good” and yet the “wrong” thing to do, take Gandhi. Say someone threatens Gandhi, and all he does is protest peacefully; then Gandhi’s entire village is slaughtered for daring to speak out of turn. Morally, Gandhi is good: he refused to do violence because of his upstanding moral beliefs. From a state of natural law, he did the wrong thing; by refusing to take action against a threat to his life, he allowed the violence to continue. This example might have problems, but we’ll assume for the sake of the argument that the entire village was conscientious objectors, who all agreed with Gandhi that peaceful protest was the only good action.
philomory said:
> when I said that, I was mostly pointing out the discontinuity between philosophical beliefs. In my world view, “good” and “bad” are with respect to moral beliefs, while “right” and “wrong” are questions of justice and law.
Well, you’re free to use that terminology, but be aware that it’s contrary to the standard terminology that’s been in use in philosophical discussion in the English language for… lord knows how long.
> For an example of when something can be “good” and yet the “wrong” thing to do, take Gandhi. Say someone threatens Gandhi, and all he does is protest peacefully; then Gandhi’s entire village is slaughtered for daring to speak out of turn. Morally, Gandhi is good: he refused to do violence because of his upstanding moral beliefs. From a state of natural law, he did the wrong thing; by refusing to take action against a threat to his life, he allowed the violence to continue.
The classic example demonstrating the difference between the Right and the Good is killing in order to prevent a greater number of other killings.
We’ll assume that it can generally be agreed upon that all other things being equal, a state in which person X is alive is preferable to a state in which person X has been killed.
Consider a situation where there are 3 innocents, 1 villain, and 1 hero.
The villain is going to kill 2 of the 3 innocents. But, he tells the hero, if he (the hero) kills the 3rd innocent, the remaining 2 can go free.
A final situation in which there is 1 dead innocent is more good than a final situation in which there are 2 dead innocents. From a Utilitarian viewpoint, then, the morally right action is to kill the 3rd innocent. The right action is always the one which leads to the most good state of affairs.*
Alternatively, you might believe in a system with agent-centered restrictions. One such restriction might be “Do not kill.” Under such a system, the morally right action for the hero is to not kill the 3rd innocent. Sadly, I don’t recall what such a system would say is ‘good’.
Curiously, none of these specify who is a good person; I suspect that in general they are those that take right actions, to the best of their abilities.
EDIT: Now that I think of it, confusion is avoided, I believe, by saying that people are ‘moral’ or ‘immoral’, not ‘good’ or ‘bad’. I could be misremembering, but it would avoid using the same adjective to describe qualities of people and qualities of states of affairs.
That scenario can be mapped onto the Gandhi scenario above, but I’m not going to do it right now; I have a job interview tomorrow, and need to go to sleep.
——
* It is a lovely fact that, in the standard parlance, the generic unit used when quantifying goodness is the ‘Utile’. If quantifying pleasure (as is done, for instance, when pleasure is considered to be the Good), the unit used is the ‘Hedon’. I love philosophy.
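(A toy sketch, not from the original discussion, of how the two evaluations of the hero/villain scenario come apart. The function names and death counts are my own illustration.)

```python
# Contrast a consequentialist evaluation with an agent-centered restriction
# on philomory's scenario: 3 innocents, 1 villain, 1 hero.

def utilitarian_choice(deaths_if_refuse, deaths_if_comply):
    """The right action is whichever leads to the most good state of affairs."""
    if deaths_if_comply < deaths_if_refuse:
        return "kill the 3rd innocent"
    return "refuse"

def restricted_choice(action_violates_restriction):
    """Under an agent-centered restriction like 'do not kill', the
    prohibited action is ruled out regardless of consequences."""
    return "refuse" if action_violates_restriction else "comply"

# 2 innocents die if the hero refuses; 1 dies if he complies.
print(utilitarian_choice(deaths_if_refuse=2, deaths_if_comply=1))  # kill the 3rd innocent
print(restricted_choice(action_violates_restriction=True))         # refuse
```

Same inputs, opposite verdicts: the disagreement lives entirely in which function you think is the correct one.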
GoldenH said:
I took a bit of philosophy but gave up on it because it was obvious that I was going to drive the teacher mad (I got an A, but still). As much as I like philosophy, I find many of its subjects a bit too “meta”, and I have what I feel are good reasons for my vocabulary… as weird as it may seem.
I don’t know; it seems like it’s mostly a matter of terminology to me. I call good people Good, you call good people Moral; which one you choose is probably pretty arbitrary outside the philosophy community, imho. But I do believe that there is a subtle disconnect between good/bad and right/wrong that allows an action to be both good and wrong, or bad and right.
jenna said:
I am hesitant about any formulation of moral theory that uses the physical state of affairs as the basis for “the good” or “the utile.”
The world is a series of layers. It begins with the physical, and all physical states are essentially equivalent. To distinguish them is the realm of first-order meanings and perceptions and mental entities; this creates a specific limited world to replace a seething sea of atoms and physical laws. The set of possible mental entities is also essentially equivalent, and requires a second-order set of ideas to render it into meaning; this proceeds, although in general people do not seem to often think about how they think about how they think about the world.
I assert that if there is a meaning that is not itself reducible to a meaningless entity, then it must exist outside these individual layers and operate as a general organizing principle.
Let me suggest that the reason rules-based utilitarianism exists is the lingering suspicion that utilitarianism is not utile. That when we shift from considering our actions to considering how we consider our actions, people get nervous that adopting utilitarianism might have a negative value. Similarly I suspect that when we consider how we consider how we consider our consideration, rules-based utilitarianism will prove inadequate; instead, we must default to “choosing the details of our moral philosophy according to such constraints that, if everyone chose the details of their moral philosophy in such a fashion, the greatest good would be served.”
This is an infinite process. It begins on the concrete level by imagining that one can stipulate the value of physical events—imposing a state such that, if everyone perceives the same physical world, the greatest good would be served. This proceeds upwards to stipulating the value of very subtle phenomena. But there is an invariant, which is that other people are capable of experiencing and participating in the good—the good events, the good actions, the good philosophy, the good metaphilosophy.
This suggests to me on some level, although I certainly would not consider it demonstrated here, that respecting others’ rights may be an intrinsic good—that it is possible that killing an innocent may be inherently unutile because it relies on the false notion that physical capacity engenders moral capacity, or, “anything you can do is potentially the right thing to do.” This has not been demonstrated; the villain’s actions may not open up killing as a valid moral action, any more than an equivalent “fly to the moon, or I kill this kitten” opens up the physical capacity of flight.
ADamiani said:
> I am hesitant about any formulation of moral theory that uses the physical state of affairs as the basis for “the good” or “the utile.”
> [snip]
> I assert that if there is a meaning that is not itself reducible to a meaningless entity then it must exist outside these individual layers and operate as a general organizing principle.
Theist. :)
Actually, I find myself deeply uncomfortable with this line of reasoning (if I have understood you correctly), because, in seeming to divorce ‘the good’ from any material state, it seems to become arbitrary: any material state can potentially be ‘good’, and thus the concept becomes meaningless. I’m not even clear that such a morality could be derived from within the system (though, of course, it could always be handed down from a nominal Outside, as higher law). If the consequences of our actions are irrelevant – if the villain’s actions do not impact upon our moral calculus – then we might as well go reductio ad absurdum and say that, presented with a ‘kill one or I kill six point five billion’ scenario, the moral course of action is to do nothing, leaving us self-satisfied and virtuous as we look out upon the smoldering ashes of civilization.
I suppose it has to do with a concept of ‘greater good’. Even though, in killing one, one becomes the vector of evil, the resultant worldstate is more desirable. I wonder if your Robinsonian ethic, noted elsewhere, would define this as a maximization of the global level of ‘joy’? Oddly, this begins to remind me of ‘life,’ defined as a localized abatement in the field of entropy enabled by an acceleration of the whole…
> instead, we must default to “choosing the details of our moral philosophy according to such constraints that, if everyone chose the details of their moral philosophy in such a fashion, the greatest good would be served.”
This logic is Kantian, but I’m not sure I buy it. It presupposes that the moral action is the one appropriate to an ideal situation (“if everyone chose the details of their moral philosophy in such a fashion”), with little apparent concession to the fact that we do not. It also presupposes that the same morality is appropriate for all persons, which actually looks like a fairly big assertive leap, albeit one my egalitarian instincts incline me to grant.
> But there is an invariant, which is that other people are capable of experiencing and participating in the good—the good events, the good actions, the good philosophy, the good metaphilosophy.
This intriguing statement I did not follow at all. Would you care to elucidate?
jenna said:
> I am hesitant about any formulation of moral theory that uses the physical state of affairs as the basis for “the good” or “the utile.”
> [snip]
> I assert that if there is a meaning that is not itself reducible to a meaningless entity then it must exist outside these individual layers and operate as a general organizing principle.
> Theist. :)
Arguably!
Like so many things in life, it’s hard to tell whether it’s theology or computer science.
> Actually, I find myself deeply uncomfortable with this line of reasoning (If I have understood you correctly), because, in seeming to divorce ‘the good’ from any material state, it seems to become arbitrary.
It’s not really a divorce. It’s more like marriage counseling to recognize existing issues in the relationship. ^_^
The simplest way to explain this is “one man’s heaven is another man’s hell.”
If your friend Suzabo was alive yesterday, then waking up tomorrow to find her body cold and dead—that’s bad!
But if she was already dead yesterday, it’s pretty normal.
And if she was dead yesterday, then having her show up alive today is pretty horrific.
Particularly if she’s all like “so yeah I was totally rotting and then like the worms were in my eyesocket and I said gross and gag me with an obol and this dead gig totally blows and I’m going to come back to life right now. Then I cleaned up the rot and here I am! … huh. It wasn’t very hard, you know. I bet the people who are staying dead just don’t like us living people any more.”
Whether a given physical state is desirable or not depends on how the people who exist in that state perceive it.
Put another way, if we’re all brains in jars experiencing a virtual world, then Utopia is still Utopia and horror is still horror—the only difference lies in the physical world.
It’s late and I’m having trouble pulling together the broader theme. Here is one last shot. ^_^
Suppose that we wrote down the state of the world as a set of numbers. It might have things like ‘Suzabo is alive – 0 / Suzabo is dead – 1’.
Do those numbers describe a good world or a bad world?
What if you forget whether Suzabo’s life was 0 or 1?
How is it different with atoms instead of numbers?
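(A minimal sketch, my framing rather than the thread’s, of the point that a bare list of state-numbers carries no valence until an interpretive layer is added. The variable names and the `valence` function are invented for illustration.)

```python
# The same raw world-state reads as bad, horrific, or normal depending on
# what the observers in it expected -- the data alone decides nothing.

world = {"suzabo_alive": 0}  # just numbers: is this a good world or a bad one?

def valence(state, expectations):
    """Interpret a world-state relative to what was expected of it."""
    expected = expectations["suzabo_alive"]
    actual = state["suzabo_alive"]
    if actual == expected:
        return "normal"        # the already-dead staying dead, say
    # A surprise death is bad; a surprise resurrection is horrific.
    return "bad" if actual == 0 else "horrific"

print(valence(world, {"suzabo_alive": 1}))  # bad: she was alive yesterday
print(valence(world, {"suzabo_alive": 0}))  # normal: she was already dead
```

Swap atoms in for the numbers and nothing changes: the valence still lives in the interpretation, not the state.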
> If the consequences of our actions are irrelevant – if the villain’s actions do not impact upon our moral calculus – then we might as well go reductio ad absurdum and say that, presented with a ‘kill one or I kill six point five billion’ scenario, the moral course of action is to do nothing, leaving us self-satisfied and virtuous as we look out upon the smoldering ashes of civilization.
Yes.
To be fair, we would probably feel horrified, sad, and irrationally guilty.
But the reductio ad absurdum cuts both ways. Suppose an unstoppable alien god comes to Earth and says, “Abandon your morality; seek only to hurt others in such cruelty and malice as you can manage; and if you are inadequate in this I will visit such hell on each of you as your kind could never manage.”
Does that make morality a matter of malice?
Is morality in general an issue of doing what the person with the biggest gun says?
I think that there is a case for killing the innocent to save five billion, but it is hardly inarguable. If might does not make right, then why can might change what’s right for others? Can one consistently declare that humans are wholly disposable and value five billion human lives, or does that decision—that you may do anything you like to a human if it serves the greater good—implicitly declare that humans are no more than stacks of meat and data, neither more nor less valuable than any other molecules or information?
> This intriguing statement I did not follow at all. Would you care to elucidate?
I will try to continue this this weekend or in the letters column. ^_^
Rebecca
GoldenH said:
> This suggests to me on some level, although I certainly would not consider it demonstrated here, that respecting others’ rights may be an intrinsic good—that it is possible that killing an innocent may be inherently unutile because it relies on the false notion that physical capacity engenders moral capacity, or, “anything you can do is potentially the right thing to do.” This has not been demonstrated; the villain’s actions may not open up killing as a valid moral action, any more than an equivalent “fly to the moon, or I kill this kitten” opens up the physical capacity of flight.
To me, it’s quite a moot point. Ultimately, since there is no demonstrable governing force, we must assume that each person is either good or bad. But since our judgement of whether they are good or bad is not absolute, we have no choice but to allow them to assign their own moral judgements to anything they choose to do. Thus inaction and action are both good and bad… but which it is depends on the one doing (or not doing) the event in question.
I think it’s fallacious to claim that, just because we cannot see something, it doesn’t exist. If something cannot be proved, it’s entirely valid to assume it doesn’t exist, for convenience. However, outside of convenience, one must accept that evidence can exist that allows one to judge another, regardless of whether that person’s POV can be understood rationally… though what action you take based on that judgement is up to the individual to decide.
rpuchalsky said:
“I will try to continue this this weekend or in the letters column.”
Uh oh. Can we prolong the thread until we at least get to Rawls, before it is written about? I think that something pretty good could be done with the veil of ignorance, since it merges so well with concepts about reincarnation. (Actually, I’m planning my next Audience story vaguely along those lines, with Maya in a role rather similar to Rebecca’s — I’ve always thought there was a similarity between the process of creating or getting a new character for a role-playing game and the process of reincarnation. (Obligatory Dunsany reference: the short story “Usury”.))
The problem with discussions of basic utilitarianism is that people always come up with the “Imagine that someone tells you to kill innocents or more will die” thing. That’s because it’s dramatic; if fictionalized, it leads to something like a Harlan Ellison short story. It also almost never actually happens. We need no excuse to kill innocents; we do it every day through neglect.
ADamiani said:
> The simplest way to explain this is “one man’s heaven is another man’s hell.”
> [snip]
> Put another way, if we’re all brains in jars experiencing a virtual world, then Utopia is still Utopia and horror is still horror—the only difference lies in the physical world.
> It’s late and I’m having trouble pulling together the broader theme. Here is one last shot. ^_^
> Suppose that we wrote down the state of the world as a set of numbers. It might have things like ‘Suzabo is alive – 0 / Suzabo is dead – 1’.
> Do those numbers describe a good world or a bad world?
> What if you forget whether Suzabo’s life was 0 or 1?
> How is it different with atoms instead of numbers?
Yes, it is my contention that those world-states can be, if not strictly ‘good’ or ‘bad’ in absolute terms, then at least ‘better’ or ‘worse’ in relative ones. Not the list per se, but the worldstate it describes. Hm… are we going in the opposite direction in terms of constructed layers of meaning?
If I forget, or am otherwise unaware of, Suzabo’s life status, it has little direct impact on me. This contributes to communitarian impulses and a species-wide sense of myopia – ‘who cares if all those strange people I’ve never met live or die?’ While it definitely matters to the total global value of ‘good’, it doesn’t appear to impact the local perception of it.
The brains-in-jars thing drags us toward Descartes and the Matrix, and a discussion of whether something being “real” or “true” in an absolute sense has significance. Eww. Skipping over that for now: in the case where experience is defined by a virtual world, then I suppose it is probably the state of the virtual world that is the relevant issue (to the extent that this is something extrinsic to the state of the lower-order physical universe that underpins it).
> But the reductio ad absurdum cuts both ways. If an unstoppable alien god comes to Earth and says, “Abandon your morality; seek only to hurt others in such cruelty and malice as you can manage; and if you are inadequate in this I will visit such hell on each of you as your kind could never manage.”
> does that make morality a matter of malice?
> Is morality in general an issue of doing what the person with the biggest gun says?
Does morality, like political power, flow from the barrel of a gun?
Generally not. But in both of our reductionist examples, the gun has some measure of say-so.
I presume that for example purposes, the alien god is essentially a black box? We must therefore infer our inability to successfully resist, its perfect honesty, and its capability to measure our cruelty and to inflict misery of such a scale and nature that it outweighs any potential “good” that might come from human solidarity and empathy. So faced with that situation, the moral course of action would be to attempt to ensure that the level of cruelty was as low as possible above the threshold needed not to trigger alien intervention – manifesting our kindness and morality through our acts of malice, leading us into ugly paradoxes. If we assume that this black box has no threshold but will, rather, inflict suffering in direct inverse proportion to our cruelty, then, OK, acts of cruelty might be morally justified – but only because we have gone to some significant lengths to create a situation where our actions have the opposite of their normal effect. Faced with this, a sort of Bizarro-world morality would be appropriate – where kindness is cruelty and morality, vice. Our actions aren’t moral due to being, by definition, members of the class of moral actions – they are moral because of the good that will (we believe) result from them. Helping little old ladies across the street is a good and moral thing to do because it intends to make the world a better place; if, instead, it resulted in little old ladies being flayed alive by our alien overlords, it would be somewhat less good.
> I think that there is a case for killing the innocent to save five billion, but it is hardly inarguable. If might does not make right, then why can might change what’s right for others? Can one consistently declare that humans are wholly disposable and value five billion human lives, or does that decision—that you may do anything you like to a human if it serves the greater good—implicitly declare that humans are no more than stacks of meat and data neither more or less valuable than any other molecules or information?
I think one needs a metric for desirable world-states, a means of defining what we mean when we say ‘good’. In actuality, I think this is probably an extremely complicated and nuanced concept, but since that rhetorical sleight of hand would nullify the entire use-value of this philosophy, I suggest a placeholder is probably in order. Further, we may be able to avoid the disturbing implication of making humans indistinguishable from simple sacks of meat by defining “good” (or our approximation thereof) in human terms — the maximization of ‘human happiness’, for example (you might, perhaps, prefer maximization of ‘joy’, but, to me, this connotes a more intense but ephemeral bliss).
So, Suzabo, as a living entity capable of human happiness (and presumably not infinitely unhappy), is a net positive to the worldstate. Suzabo’s sudden death deprives the world of that, and negatively impacts my happiness because I miss Suzabo. Some research on the psychology of game theory suggests that we react more intensely to the concept of “losing” something than we would on an objective benefit analysis… therefore, once I have accepted Suzabo’s loss, it has less of a depressing effect on my net happiness. Suzabo’s continued death is therefore still a bad thing, but less bad than the sudden shock of her death – it’s “kinda normal”, but a worse “kinda normal” than when she was alive.
The anomaly here is “Suzabo is suddenly alive again.” That’s not horrifying because “SuzaboDead==0;” it’s horrifying because someone just walked up to you and kneed you right in the ontology. It evokes horror because it undermines everything you know to be true and hints at unpleasant possible alternatives. Much negative happiness.
ADamiani said:
> “I will try to continue this this weekend or in the letters column.”
> Uh oh. Can we prolong the thread until we at least get to Rawls, before it is written about?
Yay! Rawls! But I’d always seen the Veil used in reference to higher-order things, like law and social organization. Can it be successfully adapted to issues of personal morality, where the ‘I’ in “How should I act?” is already determined?
Hm. I suppose, if you got really good at thinking about yourself in the third person, and absenting yourself from any sense of personal interest… interesting!
> I’ve always thought there was a similarity between the process of creating or getting a new character for a role-playing game and the process of reincarnation. (Obligatory Dunsany reference: the short story “Usury”.)
That’s beautiful! I hadn’t thought of that, but it does make quite a lot of sense. A pity that in this analogy, we would have to be random-rolled, because the metaphor of karma as character points rather tickles me.
Oh. Wait.
This is describing the old Marvel Supers game to a T, isn’t it?
*shudder*
> The problem with discussions of basic utilitarianism is that people always come up with the “Imagine that someone tells you to kill innocents or more will die” thing. That’s because it’s dramatic; if fictionalized, it leads to something like a Harlan Ellison short story. It also almost never actually happens. We need no excuse to kill innocents; we do it every day through neglect.
Yes, and, indeed, we act immorally in so doing – it’s a local perception/global good problem. You wouldn’t ignore someone about to fall off a train platform, or drowning in a river, but we ignore those more distant from us all the time, despite equally dire need and an equal ability to intervene – because their existences don’t affect our perception of the total “good” in anything but the most remote and abstract sense.
Also, it’s almost impossible to live by the exacting standards a utilitarian philosophy would seem to demand, and people, quite understandably, just want to live their lives instead of sacrificing them in pursuit of the collective wellbeing.
“Utilitarianism is hard, let’s go shopping!”
rpuchalsky said:
I think that one of the points of Rawls is that it is part of personal morality to support the formation of a social system that attempts to intervene in certain systematic ways, instead of leaving everything to individual perception.
The concept of obtaining points for prior play that you can apply to a new character is actually a fairly common one in role-playing game systems. However, unlike some systems of karma, the role-playing game systems generally let people choose how to use their points. Do people always choose wisely?
“There came also the soul of Odysseus having yet to make a choice, and his lot happened to be the last of them all. Now the recollection of former toils had disenchanted him of ambition, and he went about for a considerable time in search of the life of a private man who had no cares; he had some difficulty in finding this, which was lying about and had been neglected by everybody else; and when he saw it, he said that he would have done [this] had his lot been first instead of last, and that he was delighted to have it.” (Plato, The Republic, Book X, translated by Benjamin Jowett)
I could come up with similar quotes from the Tibetan Book of the Dead, but I think that you get the point. Sooner or later, a practical theologian is going to think about making a sociological survey of player choice in RPG character generation.
ADamiani said:
> The concept of obtaining points for prior play that you can apply to a new character is actually a fairly common one in role-playing game systems. However, unlike some systems of karma, the role-playing game systems generally let people choose how to use their points.
Ah. So you would have it as point-based, but employ some system of randomized allocation, because Rawls requires you to be ignorant, even of your CP total, when behind the veil. Is there any actual game precedent for that sort of thing? So often ‘random’ and ‘point-buy’ are presented as antithetical…
rpuchalsky said:
Not really, two different threads are getting mixed together. I’m just saying that Rawls’ veil of ignorance appears to be one form of a reincarnation myth, and so do many RPG character creation processes. And that it could be interesting for transfer to go in any direction among these three (philosophical, religious, ludic) types of ideas.
ADamiani said:
> ADamiani: This intriguing statement I did not follow at all. Would you care to elucidate?
> Rebecca Borgstrom: I will try to continue this this weekend or in the letters column. ^_^
Was this ever followed up on? I had been rather looking forward to it, and missed it if it was ever written. This discussion on deontology versus utilitarianism has been haunting me for some time.