Saturday, October 17, 2020

Why Ripto Rages: a Supercharged Argument. (OUTL!NE.)

Problems with Ripto’s Rage:

-    Introduction: Why I am not Biased by Nostalgia.

o  Why you might suspect that: Bias for the Past.

§  The first game for PS1 I owned.

§  Played on and off for twenty years.

o  Counterargument One: Games I have known longer than Spyro, of which I am more critical:

§  Sonic the Hedgehog 2.

§  Sonic C.D.

§  Wario Land.

§  King’s Quest VII.

o  Counterargument Two: Games newer than Spyro 2, for which my nostalgia is stronger and my admiration greater:

§  Jak and Daxter trilogy.

§  Ratchet and Clank trilogy.

§  Sly Cooper trilogy.

§  In all three of these instances:

·     I finished the trilogy, whereas I did not acquire Spyro: Year of the Dragon until I got Spyro: Reignited. My sister didn’t even know that Year existed.

·     The success of the first sequel was what prompted me to buy the third installment, and in two out of these three (Jak and Sly) I ended up preferring the second game to the first, both personally and critically, in spite of fond feelings for and intrigue with the first.

o  Who I was when I first played Spyro.

-    Fewer Worlds makes Backtracking Awkward.

o  The Ladder in Glimmer.

§  Moneybags is right there. Greed is no excuse for the inconvenience.

§  Not much to be found up there anyway.

§  The Balloonist Routine: How distant ledges used to work alongside quotas.

·     Why it works, especially in Magic Crafters.

·     The New Alternative:

o  The Ladder in Autumn Plains.

o  The Vortexes.

§  Why can’t I just fly? Compromising player agency when it’s not necessary to even implement that skill here.

-    Acquired Abilities.

o  Swimming underwater with no need for an air supply: no mean feat for a dragon who could not touch water in the first game.

o  Climbing, as discussed.

o  Head-smashing.

o  Abilities as Currency.

o  Perma-flame when you need it least.

-    Temporary abilities.

o  No faeries kissing you.

o  No arrows on the track.

o  Only one (obnoxious) combination, in Metropolis, when you’re already tempted to leave.

o  Flying is less rewarding than in the Flying Levels.

o  Point System is demoralizing: kill count in a kid’s game.

o  Idle springs.

-    Getting a Life:

o  Old Format:

§  Gnorcs are Gems in disguise; gems have been stolen, as were the eggs abducted by the thieves.

·     Sense in terms of lore.

·     Sense in terms of morality.

·     Sense in terms of design.

o  Kill Gnorcs, get gems, advance.

o  If a Gnorc has been killed already, it produces a Life Token.

§  Enough Life Tokens add up to a Life, like rings, coins, or wumpa fruit.

§  Life tokens indicate Gnorcs that have already been killed, marking territory that has been explored, like breadcrumbs or Ariadne’s thread. This relates to the presence of gems and crates as pathways to new territories.

§  Lives may also be found in cute chests with eyes. What’s inside? A Dragon-shaped Life. It all makes complete sense.

§  Health can be restored by devouring butterflies that are contained within the local fauna and fungi.

o  New Format:

§  Enemies may or may not be working for Ripto. In fact, several of them are simply combatants in a conflict wherein Spyro plays double agent, inflicting ample casualties on both sides in order to further his own objectives.

·     Breeze Harbour/Zephyr.

·     Metropolis/Robotica Farms.

§  When enemies are killed, they release white balls of light, presumably their Souls, which appear to be pure (unlike the evil spirit who haunts the statues in Colossus, who is distinctly dark and alters the colour of the statues it inhabits). No sooner do they give up the ghost than these light spirits are drawn into magic goalposts which harness the energy to fuel temporary power-ups for Spyro to exploit once he meets his kill quota.

§  Lives are only acquired by eating special butterflies, which most often appear in bottles. Whereas the original Spyro RESCUED beautiful, magical creatures, this one feeds them to his accomplice, much as Ripto does.

-    Puzzles become more obnoxiously childish and contrived, since they are no longer to be discovered as natural outgrowths of the environment.

o  Haunted Towers: Perfect for an Open World.

§  How I learned how to win.

§  Faeries, Supercharge, and the Knight’s Gauntlet. (Analogous to High Caves from earlier.)

o  Idol Springs:

§  Boxes, Tikis, and Shapes, oh my!!

§  Puzzles feel less relevant to the “Real World” because they do not require the player to be resourceful about the environment.

§  While some of these devices warrant explanation, most of the explanations are as simple as, in my own words: “this tiki lamp is a very picky pescatarian.”

-    Flying Levels stop when you find secret areas, even if you do not want them to stop.

-    Our “Villain” has a Reason to Rage.

o  All we know about Ripto is that he was accidentally transplanted from another world and that he decided to take refuge in Avalar, already home to diverse species that often butt heads with one another, both figuratively and literally.

§  While Ripto ostensibly expressed a desire for conquest (“Say hello to your new king.”), this is forgivable for two reasons:

·     A “King” is distinct from a “Tyrant”. The Dragons Themselves inhabited a Dragon KINGDOM, implying a certain regality whose splendor:

a.          Justifies the expulsion of Gnasty’s Minions in the first game, and

b.         Makes Avalar’s technocratic, multicultural mishmash of a battlefield pale in comparison: a World less worth fighting for, except as a means by which to return to Dragon Shores, where, inexplicably, the Gnorcs have either taken over or been turned into slaves. (Weren’t they gems just a minute ago?)

Ripto, apparently a displaced monarch, may be qualified to rule, but who stops him? Elora, because this threatens her power.

·     Elora’s version of the events is the only account we have, and she has shown signs of bossiness and bias already, especially towards Spyro, Hunter, and Moneybags.

o  When Elora demands that the Portal in Winter Tundra be deactivated, Ripto loses his path home.

§  What would Spyro have done? Just as Ripto does: plow ahead into the New World, lighting things on fire and occupying territory. Ripto’s conquest is nothing more than the same strategy that made the original game so engaging.

§  This also explains why the wealth of orbs that the Professor was using for his experiments has been displaced. It wasn’t the villain who stole it, unlike in the original game; it was Elora’s doing.

§  Considering that Ripto only has two henchmen at this time, one of whom is too slow-witted to understand the command “Go through the Portal”, there is no reason why the faeries couldn’t just store the orbs in one safe place, such as, say:

·     Glimmer, a mining colony, parts of which can only be reached by climbing or flight. (This makes me wonder how Gulp got INTO his Overlook to begin with.)

·     Colossus, a Buddhist Temple in the Mountains, far out of reach of giant dinosaur creatures.

·     Behind a forcefield in Hurricos, past a series of propellers that cannot possibly hold the weight of Ripto’s minions.

·     Aquaria Towers, an underwater city. (Can Ripto or his minions swim underwater? They don’t look it, though they’re not dragons.)

·     Fracture Hills, behind several feet of rock that can only be opened by Satyrs versed in the ancient art of bagpipe music, whose song can make even the Earthshapers dance on demand.

·     Zephyr, in a military barracks.

·     Breeze Harbour, on a flying ship, perhaps?

·     Scorch, where doors can only be opened by using a Superflame attack.

·     Shady Oasis, behind a grate that can only be opened by eating magic fruit and being a hippopotamus.

·     In a secret ice cave in Winter Tundra, or perhaps the Magma Cone.

·     Behind one of several cracked walls in Autumn Plains.

·     In literally any flying level.

·     Agent Zero’s Hideout. (Under supervision.)

·     Metropolis. Enough said.

o  Ripto only conquers Autumn Plains after Crush is killed in his own dungeon and Ripto and Gulp are forced to flee Summer Forest.

o  When Gulp, Ripto’s trusty steed and last surviving ally, is killed, Ripto avenges the death of his minions by bombing the Portal that brought him here in the first place, devoting his remaining efforts to the development of mechanical surrogates for his fallen allies.

o  Ripto is simply the scapegoat.

§  When Gulp is killed, Elora and the faeries reward Spyro by turning Gulp’s Overlook (Gulp’s home and the setting for his demise) into a suntan parlour, crediting Spyro with bringing peace to Avalar while blaming Ripto for creating trouble between rival sects.

§  Presumably, two such warring sects are the Breeze Builders and the Blobpeople in Zephyr, who MUST be manipulated to reach this stage in order to acquire their Talismans, though no further Talismans appear afterwards.

§  There is absolutely no evidence that Ripto inspired such bitter rivalries. Judging by how quickly the Professor tends to operate, how slowly Ripto moves in taking new territory, and how advanced the arms race is between the birds and the blobs, it would appear that they had been at war far longer.

o  Why does Ripto hate dragons? They do have a way of killing his friends and allying with any cute faun who works against him (and not just Elora, as we observe in Fracture Hills). This also explains why he tends to resent faeries and use them to feed Gulp.

[({Dm.R.G.)}]

Friday, October 16, 2020

Human Sacrifices and Stripteases: Conventional Evil in Media and Why "Human Rights" are No Excuse for Censorship.

Every modern art form has had a Golden Age, and the most brilliant content produced within the medium has been the most controversial and provocative. Film, literature, music, theater, and the pictorial arts have all “pushed the envelope” with regards to their subject matter and its presentation.

Video games have notoriously been no exception to this, and attempts to censor them have been as diverse and inane as any other. Yet for all the best efforts of the medium’s founding members, it seems that no one could have predicted where the most dangerous threat would emanate: from within the Audience.

Gamers and designers have, by and large, “sold out” to the normalizing tendencies of the millennial marketplace. We live in an era wherein creativity must bow before commerce, and the laws of commerce have been worked out with such daunting bureaucratic accuracy that they have come to act as surrogates for ethics. This puts us in a precarious position indeed, for while we might be tempted to pride ourselves on “how far we’ve come” from the “archaic” religious superstitions of previous generations, this departure has only made us susceptible to more absurd and arbitrary control schemes.

What are the effects? Most immediately: we see something which might be called a tendency towards making things “family friendly”. Whenever any art form becomes commercialized in the mainstream, it tends to become gentrified, sentimentalized, and made palatable and “wholesome”. There is, of course, nothing intrinsically wrong in such an aesthetic, and it certainly has its place, both therapeutically and socially… yet it has its limits, and to confine an entire art form to such limits is to put a tiger in a box.

Attempting to reach the “widest possible audience”, we initially target nuclear families. “Children’s films” become “family films”; cartoon animals are set against the backdrop of a narrative that not only inherits adult subject matter but that would be most recognizable only to the adults of the target generation of parents. Disney and Pixar’s Finding Nemo had no such nonsense; its genius was in its appeal to a primordial sense of wonder that is timeless and does not wither with age: the Oceanic State. DreamWorks’ response Shark Tale is another matter. Hardly as subtle or as profound as its predecessor Shrek, a truly inspired take on faerie tales, many of which were originally quite gruesome on purpose, Shark Tale was a shameless attempt to cash in on the undersea market in an iconically DreamWorks fashion, complete with the voices of Will Smith, Angelina Jolie, and, of course, Martin Scorsese. The famed director of gritty, brutal crime films lends his unmistakably high-pitched voice to a comical pufferfish who navigates an underwater crime world that, in large part, is modeled unsubtly but tactfully after Goodfellas and Casino, as well as old-school rival The Godfather. How is this a “family film”? Because to kids it appears to be a “children’s film”; to parents, it makes light of De Niro’s darker moments.

Already we can observe the dangers of the “family friendly”. Even in targeting the nuclear family, producers of popular media are all too often tempted to substitute adolescent defiance for childlike wonder, pretending the former to be more mature than the latter simply because it follows it in biological chronology, as adolescents themselves are often tempted to presume. The consequence is that the films we show our kids, often the films we watch alongside them, always carry, in the background, an element of that depravity which is typically sequestered to the mature rating.

The problem is aggravated when marketers observe the modern family. It’s become a matter of common horse sense that a large, probably growing faction of the market does not conform to the tradition of the nuclear family. The adult animated series BoJack Horseman uses this to both great commercial and artistic effect; Raphael Bob-Waksberg’s postmodern fable uses animals and people to illustrate the lifestyles of unwanted children, sexual deviants, chronic divorcees, and the infertile, often poking fun with that heavy hand (hoof, or paw) that only the sufferers themselves brandish. Yet what spurs BoJack Horseman’s genius is its brutally honest attempts to buck the jockeys of modern media: Political Correctness.

The “Politically Correct” is essentially the “family friendly” for grown-ups. Most significantly, its target audience is a class of “disenfranchised consumers” (a paradox peculiar to countries with prosperous economies) who seldom reach the conventional standard for psychological maturity, owing to various extenuating circumstances. Thus, the game becomes to pretend that such a standard is nonexistent or illusory, to coddle the outlier and to make the target audience feel “safe” and “included”, “respected” but not “understood”. Upset any one of these prerequisites, and the curtain falls, the illusion of inclusion shattered. So essential is this strategy to contemporary commerce that it has even become an expectation, so that any sort of majoritarian marketing is ostensibly tantamount to Fascism.

No generation has bought into this as completely as the youth of the present day, and the media of the new millennium has been so saturated with this tendency, one that has haunted the backgrounds even more menacingly than the mobster references in Shark Tale, that we have by and large come to regard it as though it were an ethic. There’s hardly a popular form of media alive today which does not bend to the demands of the “underrepresented”: the all-too-common outcast.

So what’s the problem? In order to summarize the dangers, we must return to my generation’s most treasured art form: the Video Game.

When I first played the original God of War on PlayStation 2, I was not entirely sure why. I hadn’t played any hardcore hack-and-slash combat platformers since EA Games’ acceptable Lord of the Rings games, which at least had the advantage of incorporating elements of Tolkien’s books that had not made it into Jackson and Walsh’s film franchise (upsetting the prejudice that Electronic Arts is just a bunch of Nazis with no sense of respect for tradition). Truth be told, Ubisoft’s Prince of Persia: the Sands of Time had its fair share of fighting, but it was just choppy enough to render me averse to the genre, and were it not for Prince’s time-based puzzle gaming, I would not have endured it. The main quality that drew me towards God of War was a nostalgia for ancient Greek culture, most significantly its timeless mythology and that style of drama we call “tragedy”, which everyone from the Florentine Camerata, who invented opera, to Friedrich Nietzsche, Stanley Kubrick, and Brad Pitt has tried to recreate.

To put it in Spartan terms: this God of War did well. The story of Kratos was gripping, captivating, devastating, exhausting, intriguing, grueling, and fundamentally cathartic. None of this could have been achieved were it not so brutal, though perhaps Roberta Williams of King’s Quest would give Sony’s Santa Monica Studio a run for its money. What I knew to expect going into it was what I’d grown to expect from some of the imagery in Prince of Persia and The Lord of the Rings: “puzzles” would be less akin to Tetris and Bejeweled and more akin to positioning a human sacrifice for execution.

What I had not gleaned from reading the reviews was that game reporters would often expose the most damning plot details without a spoiler warning. With regards to that “human sacrifice”, it ended up being as significant for character development as some of Walt’s most unforgivable moments in Breaking Bad. The human sacrifice was far more than a mere ragdoll on a string, his execution far less “family friendly” than falling from a cliff in Wile E. Coyote fashion or, as DreamWorks had managed, getting conked on the head by an anchor. (Not to be confused with Newswoman “Katie Current”.) Before my eyes I watched one of Kratos’ unsuccessful predecessors get pushed up a slope in an iron cage, begging, whimpering and screaming for mercy, with each step coming inches closer to a device designed to burn him alive… and I was doing all the pushing and the immolating!!

How did Sony Santa Monica get away with this? It was obvious: God of War was a game for adults who loved Greek mythology and war games. Kratos had to be the sort of guy who would light a “loser” on fire just to “honour the Gods” and to advance himself; he was a Spartan warrior with a vendetta against Ares Himself. We’d already seen the remains of Kratos’ other predecessors decorating both the entrance and the interior of the gauntlet. We knew that this would be our hero’s fate if he did not progress, surviving at all costs, including the lives of those he meets along the way, however harmless and friendly. The Gods demand it, and those Rationalists who are snide enough to challenge Divine Command Theory would do well to read a bit of Greek mythology before they spite a Greek Deity’s Will. On top of all of that: the deceptively convoluted series of death traps and challenges is quite linear. When the time comes for Kratos to light that guy on fire, he has no practical alternative; there are no side-roads, and this road goes only one way.

Thus we see the three core ingredients in edgy game design that pushes the envelope, heightens the Art, and makes our stomachs squirm: Cultural Context, Machiavellian Rationalization, and a Mature Rating that, like a good fence, a harness, or a dog leash, empowers to the same extent as it restricts. Already, too, we see the dangers inherent in this: boys who hear tell of these M-rated games get the distinct impression that this is “okay for adults” but “not for kids”. To the same extent as they wish to be “grown-ups”, they might play with matches and toy with the idea of arson, however flippantly. They recognize that “mature” material is “really bad”, but they also internalize the notion that it is considerably MORE bad than that which they see on Tom and Jerry. Lighting someone on fire is only “okay” in the context of Reservoir Dogs; dropping an iron or an anchor on someone’s head is relatively acceptable.

Thankfully, we need not busy ourselves with too much dispute about the effects of media upon the mind of the child, for most children operate under a significant amount of supervision. Yet what about those aforementioned “disenfranchised” adults who are stunted in their psychological development? When media becomes censored, a veil is cast over it. The stoppering of progress is akin to the shutting of a hatch, especially when a leak has broken out within its chamber. We were content to tread water as the water level rose, but now that the ceiling has been sealed we are to drown.

God of War, to my mind, was never intended to promote violence, nor even to glorify it, though Kratos’ fate does raise some questions on that point I shall not address herein. Like most games notorious for their “graphic” depictions of violence (“graphic” by the standards of the Sony PlayStation 2 console), God of War produced a sense of release, achieving that sort of catharsis that Greek tragedy was known for, meeting its artistic goals. By pushing the envelope, it shocked us into a state of vulnerability; reminded of the brutality of history, we were pacified and grateful for modernity.

Yet what happens when that envelope is stopped? All of a sudden, that depraved, Dionysian psyche which is seeking new forms of shock value must content itself with trails already blazed. Producers begin to mass-produce those forms of evil which are proven to sell, given a long and growing blacklist of those forms of evil which are too “triggering” even for millennials. The shock wears off, as does the talk, but the material remains, getting recycled, over and over and over again, like the “consumable T.V. violence” of David Lynch’s Twin Peaks. For want of new, ingenious forms of catharsis, developers repeat the same old tropes: immolation here, hanging there, et cetera. All of a sudden, the new generation comes to regard THESE things as being “normal”. The modern adult is in the position of the aforementioned boy: so long as one believes one’s self to be indulging in a “permissible” evil, one worthy of an audience, one not QUITE “as bad as” a less conventional evil, one can content one’s self in feeling “normal”.

Most people, clinically, base their decisions MORE upon projections of normalcy and abnormality than upon Good and Evil, especially when the latter distinction is so difficult for modern scientists to quantify, whereas the former is all too easy. It is for this reason that practices such as mob lynchings and dragging men behind pickup trucks were so prevalent for so long in the United States (though let us not forget that equally violent forms of anarchist protest against Police Brutality are no better, by the same token). Even in fighting evil, we commit conventional evil, and most often it is against the unconventional evils that we express our own evil impulses. Ergo, the rapist is everyone’s favourite scapegoat, the racist following closely.

That which is a marginal danger for the boy becomes an imminent problem for the adult. Because we KNOW that we are adults, we are less likely to suspect our own susceptibility to suggestion. If I produce games, those limits which I must observe in publishing are presumed absolute. A boy aspiring to play his older brother’s fighting games is not in the same position; he understands the relativity of restrictions based upon age. Yet such a sense of relativity disappears in a globalized marketing standard that might very well be enforced by the United Nations as a Human Right. If I cannot put rape into my game, though I can put immolation, the line I observe is presumed to be absolute; if NO one is allowed to see the former but the latter is acceptable, I internalize and outwardly affirm this ethic: rape is absolutely bad; arson is relative. What is to stop a vigilante, then, with my same, universalized conviction, from setting an alleged rapist on fire? Not much, and let us not pretend that human beings do not have a history of burning people alive under suspicion of evils that are not conventional by accepted moral orthodoxy.

It is this combination of forces – a stagnating form of generic tropes on one hand and a patronizing imposition upon the adult conscience on the other – that poses the greatest threat to the millennial Zeitgeist. The irony would not be lost on the great critics of modern media such as David Lynch and Raphael Bob-Waksberg. When Diane tells BoJack that television “normalizes” behaviour, this is only partially true. Media in itself does not normalize behaviour, until media itself is normalized. So long as media remains a forum for free expression, it keeps innovating ways to shock us into our catharsis. Yet the moment that it is met with a bureaucracy which treats its marketing interests as moral imperatives, having won a generation of loyal consumers who act as a mob ready to lynch anyone’s career (or worse), media collapses in on itself, and that which was previously “daring” and “provocative” becomes the new “normal”. This is aggravated by the peculiarly millennial tendency for forgetfulness.

Critics in defence of political correctness might cuttingly contest that normalcy is nothing new in itself, that even the classic Greek plays reused tropes, yet “society” continued to “progress” and “mentally healthy adults” were not afflicted. That would be well and good, but for several bugs: in the first place, those who were NOT regarded as “mentally healthy” all too often saw a side to their “healthy” social superiors that those same “adults” would have been loth to see within themselves, hence the scapegoat gives the “tragedy” its name. Furthermore, to speak of “society” as though it were one entity and “progress” as though it were culturally universal is a conceit only imaginable to modern Westerners. The integrity of Greek “normalcy” lay in its adherence to tradition, one peculiar to a Greek society. To be born Greek was to live Greek; Kratos makes his deal with a Greek God (one indigenous to Greece, apparently, for Kratos later meets the Norse Gods in Scandinavia) in the context of a war against the “barbarian” non-Greeks (some of whom were likely those same Scandinavians).

Yet this is not our current “society”. The globalist culture can no longer accommodate the nationalism or communalism of earlier societies established upon peculiar traditions, and all such traditions become secondary to “Universal Human Rights”. To be born American in the nineteen-nineties is not the same as to live American in the year 2020, and few of us will be able to repeat the sins of our fathers or to attain those goals we set out to attain as boys (though perhaps the girls will adopt new dreams to make up for the boys’ loss, since these dreams may be won at the latter’s expense).

Political correctness operates not like a temple but like a marketplace, for it was within the market itself that this disease was born. It is an ethic for consumers and those wishing to sell to them. In sales training, we learn that people like to buy but not to be sold to. The most effective sales agencies are those who manage to sell your desires TO you, and if they should disappoint you, it is because you are no longer the target demographic, and salespeople go where the money goes. Millennials who preach Human Rights and Political Correctness are simply the most pious consumers of conventional evil, and their enemies are simply those whose evil and whose virtue are too abnormal to be advertised. So long as traditions are the preferences of an older demographic and a regressive society unfit for the global market, we do not question the Present.

By and large, the universal rating used by Designers to regulate one ANOTHER is “N.S.F.W.”: “Not Safe for Work”, a fitting slogan for a generation of corporate guinea pigs turned into automatons on a production line devoid of voice, dignity, and imagination. The implication is that nothing is too “mature” which is permissible and “correct”, yet it may not be the sort of game that your boss would want to catch you playing. (Perhaps supervisors would rather see you spending company time on Tetris?)

One game Not Safe for Work (or Home, probably, though when has that stopped anyone?) is House Party, a raunchy dating game set among a group of young singles wherein the male protagonist (female coming soon!! No puns intended.) manipulates nonplayable characters, mainly women, into various sexual and suggestive acts. Several times, voluptuous girls presumably in their twenties are exposed to the elements by way of blackmail and deception.

As a rule, none of these achievements furthers the story. By contrast with God of War, the game is admirably nonlinear. No Divine Authority mandates a sacrifice of clothes or dignity; the protagonist rarely needs to compromise another character in order to “advance” towards a teleological destination. One might therefore suggest that the sexuality in House Party is “gratuitous and male-gazey”, as Gina Cazador says of her own role at the start of BoJack Horseman’s fifth season and the first season of the show-within-a-show Philbert. Yet it is precisely BECAUSE these indulgences accomplish nothing that they are so tempting, so reflective of the mind of the player, and so cathartic for players who would NOT do such things in Actuality but who might harbor a vendetta against those who do, a vendetta which is often irrational and subconscious, much like Diane Nguyen’s media crusade against Hank Hippopopalous.

Most of us would sooner have our flesh seen than seared; there ought to be no doubt that lighting someone on fire is worse than coercing that person to take off her pants. It is precisely BECAUSE God of War rationalizes acts of utter depravity that it seems relatively noble, and it is precisely BECAUSE a striptease is an indulgence which more civilians enjoy that we invent so many rules for its proper context and we harbor such antipathy to those who overstep those bounds; they mirror our own perversions, as the success of games like House Party evidences beautifully. Yet even if one evil were tantamount to the other (by extremely puritanical standards), what is most important is that BOTH be allowed to exist, to perform their respective dances, and then to waltz offstage in the manner of the BoJack Horseman penultimate episode. Each form of shock, however gruesome or uneasy, serves a vital human function, both psychologically and socially, and so long as it does something new for the medium it furthers the Art. Place even one expressive form in a cage, and the Art ceases to be an upward striving and degenerates into a brutal, fatalistic sacrifice.

[({Dm.R.G.)}]

Wednesday, October 14, 2020

Choice, Player, Designer: an Open Response.

This is an intriguing question you have raised, and the answers you find may be even more intriguing yet. In addressing it, I hope to resolve some of my own quandaries pertaining to the relationship between Choice and Game(s).

The simple question of whether or not to program a menu in a visual novel is, at heart, the question of whether or not to give the player a choice. Yet to give the player a choice or not to give the player a choice is in itself a choice on the part of the designer. Furthermore, to address one’s fellows for advice with regards to this choice implies another question: which choice OUGHT I to make? It is inescapably a moral question then, regardless of what moral scheme one subscribes to: how OUGHT visual novels to be designed?
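The point can be made concrete in a few lines of code. The sketch below is plain Python, not any real visual-novel engine, and every name in it (run_scene, give_choice, the sample lines of narration) is hypothetical; it only illustrates that whether a menu appears at all is a branch the designer writes, never one the player takes.

```python
def run_scene(give_choice: bool, pick: int = 0) -> str:
    """Play one scene; the designer decides whether the player decides."""
    # Linear narration, identical under both designs.
    for line in ["The rain has stopped.", "She is waiting by the gate."]:
        print(line)
    if give_choice:
        # A menu: in a real engine this would block on player input;
        # here the caller's `pick` stands in for that input.
        options = ["Approach her.", "Walk home alone."]
        return options[pick]
    # No menu: the author has already chosen on the player's behalf.
    return "Approach her."
```

Notice that the flag give_choice sits outside the scene’s fiction entirely: the player never sees it, yet it determines whether the player exists as a decision-maker at all.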

One might presume, given this summary, that the answer might only be found where the question originated: within the designer. Since the designer functions as the “dealer”, he* holds all the cards; with him, the game begins and ends. It would apparently follow, under such a paradigm, that designers must decide the matter for themselves, by arbitration, that the question can have no definite, “objective” answer, and that by asking it one only invites a makeshift anthology of case histories and partisan, anecdotal accounts.

Yet this is not necessarily so, since such a paradigm (beginning and ending with the designer and his “Vision”/will) in itself is tragically one-sided, and one mustn’t forget that, just as the designer starts the game and the player finishes it, the answer to this question of design might properly be sought in the Player. Yet since one designs games for multiple players, most of whom are unknown to one’s self, this “Player” must be regarded not as an individual, but rather as a concept out of which the EXPERIENCE of the individual “player” (and, as such, the individuality of the experiencer) emanates.

 

*The generic “he”, for brevity, meant to imply any number of identities, chosen or given. I will employ more traditional pronoun usage for this text in the spirit of maintaining clarity and concision, without any ideological meddling or personal involvement.

 

When one asks, “should the player be given a choice?” one really enters into a complex, often self-referential and circular, yet thoroughly symbiotic and beautiful series of inextricably contingent relationships, mostly predicated upon one trinity: the Player, the designer, and the Choice. The relationship between Player and designer, as well as how each relates to Choice, circumscribes the entire inquiry.

In order to determine what choices to give unto the Player, the designer might start with a moral prescription for the Player; in other words, the designer asks himself: “what sort(s) of choice(s) do I want the Player to MAKE?” An undertaking of this sort is by no means pretentious for a genre which owes so much to both storytelling and adventure game design; both media have rich traditions of “educating” the player, and while the adventure game educates the player with regards to the internal workings of the fictional World, the story serves to educate the listener/reader/observer (the “audience”) about the external, “nonfictional” World.

It is precisely the combination of these two Worldviews that puts decision-based narrative games at the cutting edge of game development as an Art Form, as well as placing this Art Form at the forefront (the “avant garde”) of modern storytelling. It is also this delicate balancing act between the development of a Fictional Universe and a Relevant Allegory that produces most of the exciting and frustrating challenges unique to the development of choice-driven narrative games. The term “ludonarrative dissonance” is perhaps one of our generation’s subtlest contributions to the academic lexicon, and its scope reaches far beyond the mere conflict between “reading”/“listening”/“attending” and “having fun”.

When the designer has chosen what sorts of choices he wishes for the Player to make, according to the story’s “moral”, (i.e. its ethical message) he has begun to decide for himself what sorts of choices to make in creating choices for the Player. Yet how does one even choose what sorts of choices one wishes for the Player to make? Before we address this problem, we must first illustrate the nature of moral prescription in a game and its effects upon gameplay and narrative.

When one says, “I wish for the Player to make choices of this kind”, one creates the necessity for the Player to develop “agency”. At this point the menu becomes necessary, for without it the player controls only such incidental factors as the pace at which he or she reads the text or the volume at which he or she listens to the soundtrack, neither of which is normally of any consequence to the narrative, but only to the Player’s subjective experience of the narrative.

A menu represents a branching in the narrative: on the most fundamental, brute level, it allows for the story to go “one way or another”. Some menus may create the “illusion of choice” by allowing the Player to exhaust secondary options within a closed loop until the primary option is selected, advancing the story only once the “right” choice is made. Yet such a structure does not in itself present the Player with a moral choice; the Player must simply find that answer which fits into an existing, deterministic framework. Such choices may ultimately be of no consequentialist value; they may have “consequence” for the player’s “experience”, (both in the sense of accumulated knowledge and immediate encounter with the media) yet in and of themselves they do not impact the development of the story and its characters. However, such a structure, like many other menu structures, may INCORPORATE an element of consequentialism by introducing hidden mechanics, such as a point system, a counter, or various flagging variables, which serve to ASSESS the Player’s choices as they are being made, usually without the Player’s conscious knowledge. For example, if the moral of the story is, “always tell the Truth”, the designer might confront the Player with a series of dialogue options, only one of which expresses the Truth of the Situation, and to such an extent as the Player AVOIDS telling the Truth by selecting other options from the menu, to that same extent a secret counter might assess the Player’s “Honesty”, quantified as a variable, functioning as a hidden “character stat” that might very well trigger a later event and thereby alter the course of the game, with or without the Player’s conscious knowledge, either “at the time” or “ultimately”.
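
The hidden-counter structure described above can be given a concrete, minimal sketch. All of the names below are hypothetical illustrations, not any actual engine’s API; a real visual novel (in Ren’Py, for instance) would supply its own menu and variable facilities.

```python
# Minimal sketch of a dialogue menu backed by a hidden "Honesty" counter.
# All names are hypothetical; this is not any engine's actual API.

def dialogue_scene(choose):
    """Run one menu; `choose` maps a list of option texts to a picked index."""
    honesty = 0  # hidden stat: assessed without the Player's conscious knowledge
    options = [
        ("I broke the vase.", True),       # the one option expressing the Truth
        ("The cat knocked it over.", False),
        ("What vase?", False),
    ]
    picked = choose([text for text, _ in options])
    _, truthful = options[picked]
    if truthful:
        honesty += 1
    # A later event may branch on the accumulated stat, altering the course
    # of the game "at the time" or "ultimately".
    later_event = "confession_scene" if honesty > 0 else "coverup_scene"
    return honesty, later_event
```

Passing a stand-in chooser, `dialogue_scene(lambda opts: 0)` reaches the truthful branch, while any other index feeds the cover-up branch; in a full game the counter would of course persist across many such menus rather than living inside one scene.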

In this manner we have established that the proper function of a menu is to create choices for the Player by providing the Player with options, many of which are alternatives to the “right” choice(s) which the designer desires for the Player to make. Yet of what relevance is this to the choices which the Designer makes, and how do we, as designers, choose them? Rather than imposing any binding moral law upon this discussion, at least so early on in its development, I should rather direct your attention from the relationship between Player and Choice to the relationship between the Designer and Choice as well as its inextricable corollary: the relationship between the Designer and the Player. At this point, you will notice that “Designer” is capitalized in the manner that “Player” came to be capitalized, since I am now referring not to a hypothetical individual but to an entire Category of hypothetical individuals, specifically with respect to the Category of Player and the phenomenon of Choice which manifests in various “choices”.

In deciding what sorts of choices one wishes for the Player to make, the Designer seeks to determine what choices the Player ought to make. Yet in order to make such a sweeping generalization as would warrant such a prescription the Designer must exist within a set of his own moral dictates. These are not necessarily “his own” in the sense of them being “of his own choosing”; in fact, any designer who publishes must note the limits of his own arbitration with regards to the content which he produces. Yet it nonetheless represents an intimate and often personal sphere within which the Designer navigates: a sort of middle ground between Vision and Social Necessity. The Designer himself relates to the Choice insofar as he is himself a moral agent and a “player” within the social game of Game Development.

Usually, the foremost secular appeal with regards to the Designer’s moral obligations concerns the relationship between the Designer and the Player. Egalitarian thought dictates that the Designer, as the Author of the Work, ought NOT to prescribe any ethic for the Player to follow which the Designer does not treat as binding also upon himself as Designer. Yet in this instance egalitarianism amounts to a stultifying dogma. Most notably, the relationship between the Designer and the Player is NOT one of identical reciprocity within the external World. The Designer often produces the work at the expense of his or her own resources, and the work is typically made available for a price that is arbitrated by the Designer and/or his conspiring distributors; the Player, on the other hand, expends no labour towards the development of the work, (usually) but must part with some sum of money, most often acquired by other means of the Player’s own choosing, in order to purchase the work. In this process, both Designer and Player are certain to expend only one resource: Time, and even that varies between Designer and Player, as well as from player to player and from designer to designer, depending upon innumerable and often incalculable and inestimable factors.

It follows, therefore, that in the external World outside of the microcosm of the fictional story and its virtual Reality, there is no a priori necessity for an egalitarian relationship between Designer and Player. Be that as it may, however, there remains the danger of the Designer becoming guilty of Hypocrisy. While neither the external World nor the virtual World** finds the Designer and the Player on equal footing, (**for the Designer does not have to “learn” the game, typically) in the relationship BETWEEN the two Worlds a moral obligation persists. This is because, as mentioned previously, the moral of a story must serve an allegorical function which converts the content of the microcosm into a prescription for the macrocosm.

If, for instance, the Designer develops a game that can only be won by gathering certain privileged information, and if such privileged information can only be acquired by illicit means, and should such a crime stand to benefit the Designer at the expense of others, then the Designer has made an immoral choice by multiple if not all moral rubrics. Yet even if such a crime is not INTRINSIC to the design of the game, games may nonetheless present the Player with moral prescriptions which he or she ought only to follow if they are so binding that the Designer ought to follow them as well, especially within the development of those games. To revisit an earlier example: if the moral of the story is to “always tell the Truth”, and if it is only by adherence to this principle that the game may be won, and if it is only by winning that the game may be enjoyed or “fully experienced”, (presumably: the Player’s intent for spending time and often money on the game) and if the game itself is programmed in such a manner, as illustrated previously, as to implement this system, then it would be poor form indeed were the Designer to lie to the Player either within the game or outside of it, especially in selling it.

Yet here, too, egalitarian ethics present a dogmatic imposition upon the Art, for there exists an entire pantheon of reasons to put the Player in a situation which one does not inhabit. For instance, if one wishes to illustrate a World wherein stubborn adherence to rules ultimately leads to despair and absurdity, a Designer might justifiably offer the Player superficial rewards for obedience with very long but binding strings attached. The challenge here is for the Player to look past what appears to be the prescription and to fathom the UNDERLYING message. Games like Portal and Braid, while they are narratively deterministic and therefore not “choice-driven”, per se, are excellent examples of this sort of subversion of indoctrinated common sense, and since such common sense remains within the external World, their designers arguably do a service by deliberately manipulating it.

We might tentatively conclude, therefore, that while at first it would APPEAR that the Designer’s relationship to Choice is contingent upon the Designer’s relationship to the Player, in fact the opposite seems to be the case. To some considerable extent, the Designer chooses the sort of relationship which he wishes to have with and to the Player. This falls under the purview of the Designer’s Vision. Whether the Designer is to function as a satirist, a master, a comrade, an investigator, or any number of available social roles, it is up to the Designer to CHOOSE which role to assume as both storyteller and Worldbuilder.

This brings us back to the final side of the triangle: the relationship between Designer and Choice. While it was posited that the answer to the question of the use of the menu might be found in the Player, and while this position remains valid and essential, it would appear that we conclude our preparation for this quest on that side of the triangle which excludes the Player: the Designer’s relationship to Choice.

It has been established that it is the Designer who chooses to determine the nature of the desired relationship to the Player. It follows logically from this conclusion that the nature of those sorts of choices which the Designer prescribes for the Player to make is contingent upon this arbitration; the Designer chooses how to “approach” the Player, and by that same token, often as a consequence, it is the Designer who chooses how the Player “ought to” play the game. This goes beyond merely an idealized projection of what an “infallible” Player would do, for it accounts also for every element of the Player’s experience and condition in making various “errors”; as Will Wright puts it, most of the time the player is failing, and the “job” of game designer is to make failure a rewarding experience (though “rewards” might include “punishments”, by extension).

Presented with so many choices, the Designer has no choice but to confront Choice Itself: the phenomenon of Choosing, specifically the phenomenon of Choosing Ethically.

We began our discussion with the question of when we ought to “give” the Player a Choice. Yet this word “give” may be misleading, and we must choose how we are to define it. When one “gives” someone else five dollars, for instance, one surrenders it TO that other person. On the other hand, when one “gives” another person an orgasm, this is not mutually exclusive with attaining an orgasm of one’s own. Ergo, we have not yet extricated ourselves totally from the intimate relationship (vulgar puns intended) between the Designer and the Player.

In the former category of giving – that is, wherein possession for one party is inversely proportional to possession by the other, as “agency” is when giving in this sense pertains to Choice – the danger morally is that of creating a parasitic/“codependent” relationship wherein either the Designer or the Player is limited absolutely by the other’s freedom. Yet not all relationships of this category are inevitably parasitic. If the Designer surrenders agency to the Player but manages thereby to acquire scientific knowledge from the Player’s choices, the relationship is mutualistic; if the Player loses agency but consequently enjoys the story during a linear, deterministic passage, the relationship is mutualistic. If the Designer gains nothing by removing a pointless fork in the road, yet its removal costs him no time that he is unwilling to spend, then the relationship is commensalistic, insofar as the Player benefits from the removal of the inconvenience and the Designer neither benefits nor suffers in removing it. At every stage of Development, we see the relationship between Designer and Player already developing alongside the game, so that the external World always represents some aspect of the virtual World, and the relationship between Designer and Player in both Worlds, as well as the relationship BETWEEN the two Worlds, may be assessed according to fairly standard biological and ethical sciences and various other lenses.

Let us turn, therefore, to the latter category of giving: the relationship wherein both Designer and Player acquire agency. Suppose that the Designer wishes to tell a story which follows the outline of classical Greek tragedy. While some writers might insist that Aristotle was not a game designer and that his formalism does not translate neatly into gaming, defenders of the ancient Greeks might contend that the structure is as valid as the spirit in which we harness it. As pertains to choice, the story may be structured as a standard three-act or five-act play, wherein each act is concluded with the Player-Protagonist making one crucial choice which impacts later events, consequentialistically. Up until that point, the Player is presented with minor choices of no apparently substantial consequence, yet these serve to establish player agency within a fatalistic framework, so that not only will the player be prepared to make such choices at the end of the act, but also the pivotal decision may be obscured by the proliferation of inconsequential choices. Disguising enterprises of great pith and moment within an interactive World is undoubtedly advantageous to the “realism” of the work, though it does leave a lot to arbitration. Yet in this both the Designer AND the Player are arbitrary; the Designer chooses arbitrarily how to design the options from which the Player just as arbitrarily chooses. Suppose, now, that the Designer should recontextualize these arbitrary options within a scoring system, as previously suggested. The Designer might thereby determine this scoring system to be the TRUE arbiter in the development of the Story. While its consequences might not be presented immediately or even shortly, ultimately the Player’s performance is assessed and the Player’s experience defined by this rubric. The Designer now gains the freedom to make SEEMINGLY pivotal decisions secondary to the story without the Player’s conscious knowledge.
Yet such a choice does not rob the Player of agency, and the Player ends up learning MORE over more stages of revelation as a result. In such a situation, both Player and Designer learn about a nuanced World which develops alongside their relationship, and the bulk of this development may be orchestrated within the confines of the Designer’s private thoughts and plans.
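
The arrangement just described, wherein a hidden score accumulated over minor choices, rather than the apparently pivotal menu, arbitrates the branch, might be sketched as follows. This is a toy illustration under assumed names and weights, not any particular engine’s method.

```python
# Toy sketch: minor choices feed a hidden score; that score, not the
# "pivotal" act-ending menu, is the TRUE arbiter of the branch taken.
# All names and weights are hypothetical.

def run_act(minor_choices, pivotal_choice, hidden_weights):
    """minor_choices: option indices picked at inconsequential-seeming menus.
    hidden_weights: the Designer's secret per-option scoring rubric."""
    score = sum(hidden_weights.get(c, 0) for c in minor_choices)
    # The pivotal choice is recorded for flavour, but the accumulated
    # score decides the act's outcome without the Player's knowledge.
    branch = "tragic" if score < 0 else "redemptive"
    return {"pivotal": pivotal_choice, "score": score, "branch": branch}
```

Here a Player who has consistently picked negatively weighted options reaches the tragic branch regardless of the “pivotal” selection, which is exactly the secondariness of seemingly pivotal decisions described above.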

We conclude therefore that the question of when to create choices for the Player is neither one for which there are no right answers nor is it one for which there appears to be any one set of right answers. To decide how best to approach this choice, one has to evaluate one’s own relationship TO Choice: one’s Ethical Convictions, from whence emanates the desire to form a specific kind of relationship with the Player as a Category and a specific kind of relationship between the Player and Choice, as manifested both within the virtual World of the game and its relevance to the external World as an allegorical story. Yet in the absence of definite guidelines a lot falls under the purview of Vision and Social Obligation: a relationship between the Designer’s intent as an Artist and the Designer’s role as a producer operating within a specific marketplace. This relationship, as well, ought to be subject to ethical guidelines not unlike those which govern all other relationships we have so far discussed.

Thankfully, we have also established that the most subtle and rewarding games for all parties involved tend to be those which, while they are largely developed internally, within the Designer’s “own mind”, represent a relationship to the Player which is evolving and growing. Since the designer typically only approaches the task of Design as the abstract Category of Designer, and since he approaches players as the abstract Category of Player, such limitations are hardly pretentious or damning.

[({Dm.R.G.)}]

Don't Listen to "Them": a Reaffirmation of Binary Formalism.

We have grown accustomed to using “they” not only to signify a group of people or objects but also as a gender-neutral, singular pronoun. This formal error in grammar has been popularly, if illicitly, employed in instances wherein the speaker knows not the formal gender of the person in question, so that using “he” or “she” would be presumptuous and using “he or she” would be cumbersome. Yet it was only very recently that the grammatical error became formalized as a social convention, in order to accommodate the ambiguity created by the excluded middle: those of “nonbinary identity”. In this instance, “presumptuousness” was the vice we wished to avoid, and acknowledging the excluded middle as a formal category created the felt necessity for a pronoun that would include this previously excluded minority. Yet was this truly a necessity?

Technically, “gender” refers to a “classification”, as do its cognates “genus” and “gene”. In all instances, “gen-” derives from the same root word as “genesis”, meaning “beginning”. Ergo, a “gender” is that which is so from the beginning, the “outset”. Since one “sets out” into the World without the consciousness of one’s gender, one’s gender is said to be “assigned” (given by signification) according to one’s “sex”. “Sex”, as a description of an individual’s properties, refers to that sect to which one belongs intrinsically, biologically. Ergo, one’s “gender” is literally a description of that sex to which one belongs at the outset of one’s lifetime. It becomes absurd to suggest that “sex is that with which one is born, but gender is that which is initially assigned but which may be changed at any time by the Individual’s own volition.”

Out of this stringent but faithful accounting of “sex” and “gender” there emerges an appreciation for the extreme formality and formalism of the classical distinction between “the two sexes”, known also, synonymously, as “the two genders”. To “have a gender” is to “belong to a sex”, and this can only be understood within the context of a formal dichotomy. This does not, of course, preclude the appearance of a phenomenon which is difficult to classify from the outset, nor does this mean to suggest that such a human being lacks “personhood” and ought to be referred to by the impersonal pronoun “it”. Yet it does imply that such a classification would be informal, as would all consequent arbitrations of “gender” for such a human being. Similarly, those who are born into clearly delineated sexes, which remains the overwhelming majority of cases, might decide to “self-identify with” the person of indeterminate sex by claiming to be “nonbinary”, and this choice would in turn produce an “experience”. Yet no such arbitration or experience would necessarily legitimize the excluded middle to such an extent that to continue to exclude it from FORMAL grammar would amount to an elitist majoritarian attitude.

Yet already contentions arise from within academia herself. Wittgenstein famously wrote that “the limits of my language mean the limits of my world.” From this seemingly inconvenient fact of nature would theoretically arise the ethical obligation to adapt language TO those Realities which human beings ordinarily do not visit and remain thereby ignorant of: those of, for instance, “nonbinary identity” and its “experience”.

Yet from whence do these Realities emanate? We have certainly VISITED such realities when we erroneously employed “they” as a gender-neutral pronoun, and we can conceive of such a Reality when we use “usted” in Spanish as though it referred to a third person, as its conjugation implies. Yet if we stumble upon a territory by formal error, is that territory “legitimate”? When Christopher Columbus mistook the Americas for India, they came to be known as “the West Indies”. Yet is this consistent with the views of those who use “they” as a singular pronoun, or are we referring to a demographic that predominantly resists colonialism and “progress”?

The fact remains that “they” has no primacy in the philosophy of liberal individualism. Perhaps some professor of Heideggerian phenomenology might have contended that Dasein is “they” prior to its possessing “properties” and “attributes” such as individuality, personhood, or gender. Yet while a leftist movement has pushed for “they” to become the normative non-binary pronoun, (that which “nonbinary people” ought to be described as, as well as that which ought to describe people of indeterminate gender, though they may not prefer to be referred to as nonbinary) and while this same movement has presumed upon “gender” as secondary to that “they” which “we all are”, it has nonetheless ALSO pushed for personhood and individuality to stand at the forefront of its cause, rejecting the “objectification” of “people” by means of “sexism” (though we have seen how sex and gender in fact PERSONIFY, especially within the realm of language) as well as the “depersonalizing conformism” of any “heteronormative tradition” which threatens “individual autonomy” (though to refer to people by default as “they” does far more to insult their actual, formal “individuality”). Ergo, the critique “bites itself”. Either we are all “them”, meaning that our personal feelings of deviant identity are inconsequential, or none of us are “them”, meaning that our language must continue to operate within certain limits to accommodate our individuality.

Finally: if we are not “predetermined”, bound inextricably to an assigned identity, but rather “autonomous”, then how can I be INHERENTLY PREDISPOSED towards a deviant identity? The limits of my language have thus become the limits of my moral universe as well, not merely the products of a “stupid” Natural Order. I can claim no entitlement for deviating from the “norm” of a formal dichotomy, for I could only do so BY MY OWN ARBITRATION. I am not a “victim” because I am “nonbinary”, since I have chosen this lifestyle for myself and must therefore accept all of the consequences, whereas none of those would befall me were I to “conform” to that identity which was “assigned” unto me. If I am not “them”, then I am always “me”.

[({Dm.R.G.)}]

Monday, October 12, 2020

After After Virtue: an Unflinching Critique of MacIntyre.

(Written upon Completing my Second Reading of After Virtue, though not the Nineteenth Chapter, for Obvious Reasons.)

MacIntyre’s crystallized perfection cracks under the hammers and sickles of those Marxist apologists and Nietzschean dreamers whom MacIntyre so translucently despises, but the bulk of the crumbling deconstruction of After Virtue is performed by MacIntyre himself, once his devotion to his dreams of public accountability and his eager resistance to any sort of Nietzschean or Sartrean detachment from Public Life compel him to “defend” his work with a far duller instrument than he employed for its construction: a useful tool repurposed into a weak weapon.

Scarcely three years following the publication of the First Edition in the United States, MacIntyre caved in and published an ominous Nineteenth Chapter wherein, for the first time over the course of his comedic history of moral thought, the playwright himself appears onstage as a caricature, breaking the fourth wall of historicity in order to address the hecklers in his audience. He hardly breaches the top of the third page of this addendum before reducing himself to a stock character at the turn of the leaf: “Morality which is no particular society’s morality is to be found nowhere,” quoth the Learned Scot of Notre Dame, continuing to cite some of his most iconic examples as one-liners: “There was the-morality-of-fourth-century-Athens, there were the-moralities-of-thirteenth-century-Western-Europe, there are numerous such moralities, but where ever was or is morality as such?”

Yet even in writing that previous sentence I am forced to confront the fact that MacIntyre is no longer a Scot, nor has he been one for about half a century, and while the shelter of American Academia might allow this “something of an intellectual nomad” (quite probably his own words) to escape the evils of ethnic profiling, his own ethical aims do not, though they DO allow him the privilege of “doing as the Romans do” within the safe confines of an Institution that is every bit as much a product of liberal Individualism as is Nietzsche, if not far more so.

MacIntyre’s central sin is a contradiction which Aristotle would most probably have laughed at. On the one hand, MacIntyre claims that morality is entirely topical, particular to groups and times, devoid of that universality which thinkers as diverse as Kant and Kierkegaard equated it with, and to be recognized within the context of a plurality. On the other hand, (perhaps not the shaking hand) MacIntyre rejects pluralism, especially with regards to Individuals as Rational Beings.

Yet how does a “nomad” manage to avoid becoming an Ubermensch? Clearly, the chief advantage to leading a nomadic lifestyle within the “fallen” modern world is in that one needs NOT simply to adapt to whatever culture one finds one’s self within, thereby sacrificing that romanticized “continuity of narrative” which comprises After Virtue’s most beautifully crafted chapter, but rather one can avail one’s self of a variety of perspectives and use Reason to decide among them which path to take. Constancy is not lost but gained in such a postmodern wilderness, though the temptation towards inconstancy remains, more tempting than ever before, though more threatening and fearful to the wary and experienced. In offering us the long-lost boon of moral objectivity, how does MacIntyre justify his own subjective biases, biases which by his own definitions MUST be intrinsic to his lifestyle as a transatlantic immigrant? (Perhaps I should note that, in this respect, I feel for him, but only as a character foil.) How does a seasoned nomad “settle down” into the mandala-shaped enclosure of the modern Academic city-state?

Apparently, it is by seeking to subject the cornucopia of admittedly irreconcilable cultures (“admittedly” by the author’s own admission) and subcultures of the Present Age to the authoritarian rule of one Greek alpha male, severing all ties with competing intellectual traditions, (Camus is mentioned only once in After Virtue, as an unflattering example, and he does NOT appear within the Index of the Third Edition, twenty-six years later; Deleuze, Derrida, and Foucault do not appear at all.) and casually commenting, with boyish naiveté, that it is not virtue but bureaucracy which inspires despotism.

Yet who is better suited to contend with bureaucracy: Nietzsche or MacIntyre? One defines himself entirely by his role as does a chivalrous knight; the other uses medieval writs for wastepaper. Camus achieves more by “charm”, Jung more by “intuition”, and Kierkegaard more by sweeping generalizations than does the bookish, cozy Alasdair MacIntyre, at least with respect to not only appealing to the moral conscience but empowering it in persons. Deleuze, even in rejecting personhood as more than a product of multiplicities, is far more thorough in his investigation of cultures, mythology, and the effects of both upon the internal psyche (thanks in large part, we might presume, to his co-author Felix Guattari) than the Scottish-American historian who idealizes most the Greek and Scandinavian myths, (pity that he and Nietzsche never met to drink to that; intellectual history might have turned out differently) so sequestering himself to those cultures which were directly touched by Aristotelian meddling (at times: an Alexandrian Conquest tantamount at least to Manifest Destiny, though MacIntyre seems to prefer the former to the latter, having tried the fruits of both) that his prologue still lists “Chinese and Japanese” as though it were “also” an alien category to be distinguished from “English, Danish, Polish, Spanish, Portuguese, French, German, Italian, and Turkish”.

Is such myopia not OBVIOUSLY the consequence of a pluralism which rejects all universal claims? Is the infantile idolatry of Aristotle via Jesus not TRANSPARENTLY a reaction to one’s own temperamental insecurities? When the natural conformist, sworn enemy of the Nietzschean Superman, can no longer feel safe leading a nomadic lifestyle, for his home is devoid of patriotism and continuity, what better course of action for him remains but to take up refuge in an academic convent, ally himself with Catholicism out of utility, (especially to appear “consistent” to his students, thereby preserving status and security,) to preach Aristotle to a world tired of it, awaiting the return of the Saviour whom academia will recognize? If Nietzsche cannot be credited for anything else, is it not for arming Jung with the language with which to expose such neuroses?

Who among us has not sinned in this way? Men look to powerful figures like Aristotle or privileged princes like Christ to save them when they find themselves in new and hostile territory. Failing to adapt, we seek to adapt our environment to ourselves; failing to adapt our environment, we seek someone else to adapt it FOR us. One needn’t even consider Nietzsche’s primary project: the rejection of Socrates, without whom Aristotle would have amounted to little. The last line of After Virtue even seeks to place the author in the position of a cleric working towards the recognition of this Saviour; tired of waiting for Godot, the embodiment of hope whose “absence says more than his presence” (as does the wandering Taoist sage whom Watts loved and who surely inspired Nietzsche’s Ubermensch), MacIntyre resolves himself to waiting for St. Benedict. Yet apparently St. Benedict is already among us, and it is MacIntyre himself!! Still, this is just humble enough to stop short of Nietzschean narcissism, for MacIntyre is not herein analogous to Christ; that is Aristotle.

[({Dm.R.G.)}]

Sunday, October 11, 2020

My Economic Ethic: Outline upon Conclusion of the Premises.

Implementing Marxism via Aristotle, Huxley, and Chomsky: how Shannon’s character arc on Lost enabled me to complete my economic ethic.

-    Why Andy does not owe Daryll anything.

-    Why Chuck does not owe Jimmy anything.

-    Why the Parks do not owe the Kims anything.

-    Why Sabrina DOES owe Shannon something.

-    How these four are all possible at once, according to a system combining virtue ethics and classical teleology with Marxist economics (true conservatism and true liberalism, respectively), especially according to the following appeals:

o  Common Goods versus Individual Status.

o  Achievement as Social Duty versus Personal Entitlement.

o  Social Duty as Collective Burden rather than egocentric conceit.

o  Turning capital’s intrinsically individualizing nature to a force of Common Good via the Virtue of Generosity.

o  The elimination of inherited guilt as not conducive to the Common Good.

o  The reaffirmation of inherited wealth as teleologically purposive, as legacy, loyalty, love, and license towards the actualization of positive potentialities, i.e. virtues and achievements.

[({Dm.R.G.)}]

Wednesday, October 7, 2020

"What It's Like" and Content of Character: a Treatise on Judgement.

One phrase that has come to bother yet amuse me is: “you do not know what it’s like.” The most obvious response is: “well, what IS it ‘like’? To what exactly do you LIKEN it?” After all: if the speaker might determine “it” to be “like” something OUTSIDE of itself, specifically wherein “it” refers to that same speaker’s phenomenal Experience, then by what authority can the same speaker presume that whatever external point of reference he has compared his experience to is NOT akin to MY experience as well?

Of course, the ironic turn of phrase is clearly intended to raise another question entirely: “what is the Ontological Nature OF the experience?” Yet in addressing such a question one must remember the parable of Chuang-Tzu and the fish:

Chuang-Tzu and his student pass a fish while on a stroll. (The Master and his Student are the ones who are on the stroll; the fish is presumably taking a swim.) Chuang-Tzu contemplates aloud how nice it must be to be a fish; his student replies, with famous naiveté: “but you are not a fish, so how can you know?” Even more famously, the Master replies: “You are not me, so how can you know that I do not know?”

The parable summarizes a problem that contemporary preachers are content to overlook in its entirety: how can one judge the internal experience to be any different, based only upon differences in the external, homely details? Salinger writes: “In making sure of the essential, he forgets the homely details; intent on the inward qualities, he loses sight of the external. He sees what he wants to see, and not what he does not want to see. He looks at the things he ought to look at, and neglects those that need not be looked at.” Of course, Salinger’s obsession with Vedanta and Mahayana Buddhism is charming, especially as a testament to the universal appeal of traditionally Indian disciplines, an appeal which those same disciplines attest to. Yet secular critics might always contend that the Hindu cosmology, one professing an underlying identity beneath the veil of illusory external distinctions, is nothing more than an archaic leap of faith, that even in defiance of the Indian caste system the Buddhists failed to address social problems as more than mere illusions of the mind, that the appropriation of these customs by Westerners was misguided, and that metaphysical claims to our “underlying, Spiritual” unity amount to nothing more but failures and excuses not to create an OVERLYING, PHYSICAL unity.

That may all be very well and good for the Western rationalist. Yet the Dharma is no more a “leap of faith” than this neoliberal attitude and platitude. Rather than undertaking a thorough archaeological excavation into the history of neoliberalism, however, I have only to appeal to a common sense that, cursory though it may be, reveals the intrinsic contradictions in the doctrine of postmodern “sensitivity”. Namely: while I might judge your conditions externally to be distinct from those of others, and while by that same token I may determine your external conditions to be more akin – more “like” – to those of one of your neighbours than to those of another, the fact remains that, insofar as I do NOT know what the Nature of your EXPERIENCE is, I can judge it to be neither better nor worse than that of EITHER of the latter parties, nor my own, so it becomes not ONLY impossible for me to pass JUDGEMENT upon your internal conditions, but ALSO to INSTRUCT OTHERS IN HOW TO TREAT YOU. All Ethics must remain External.

When the Reverend King dreams of the day that the children of the world are judged not by the classification of their melanin but rather by the “content of their character”, he is not making a metaphysical claim to their Souls; he is referring, rather, to something so self-evident that he ought not to NEED to make mention of it: external behaviours whose intent can be inferred by cultural context. “What it’s like” to have Good Character has nothing to do with the determination of Character to be Good, though it HAS been observed, especially in modern dramas like Breaking Bad and BoJack Horseman, that “what it’s like” to have BAD Character CAN be used to UNDERSTAND WHY a Character is Bad, though only AFTER the Character has been deemed to BE BAD by a purely EXTERNAL RUBRIC, untouched by psychology. Walter White breaks bad because of any number of factors, including greed, desperation, shock, pride, a sense of familial obligation, condescension, spite, envy, jealousy, financial anxiety, delusions of grandeur, and the rational detachment of a natural scientist. (Ironically, he makes Anakin Skywalker look far more straightforward than critics of the Star Wars prequel trilogy make him out to be.) Yet none of these internal factors matter ethically until the death toll gets high enough, and it’s anyone’s guess when Walter crosses the final “line” beyond which he is damned. That this line is vague does not change the fact, however, that his fellow CHARACTERS know where to draw it. It follows logically that, in our own Life Dramas, we judge bad behaviour not by “what it’s like” to do it but rather by its consequences for us, and while this does render us biased in evaluations of intent, it establishes a precedent for cultivating more righteous modes of being. 

“Content of character” is not a vague appeal to what lies beyond the veil of objectivity; it is objectivity incarnate, and King’s appeal is to the Intellectual Common Sense of both his fellow clergymen and his congregation: how long must we tolerate utter stupidity?

As it turns out, stupidity comes in many forms and varieties, some Red and some Blue, some in the Aisle on the Right, others on the Left. When rapper Everlast sings about “What It’s Like” to be homeless, to abort a baby, or to lose a drug war, he does not claim to KNOW what it is like to lead these lifestyles, but he nonetheless condescends upon those who do not know it EITHER (which is not unlike Chuang-Tzu condescending upon his student, who does not know what it’s like to be Chuang-Tzu, but it is far MORE akin to the student PRESUMING that the Master misunderstands a fish). Yet knowing “what it’s like to have to choose” is NOT what determines a choice to be Good or Bad; it simply determines the choice to be difficult or easy, and all responsible adults DO confront that fact in making choices.

Consider, therefore, “what it’s like” to be diagnosed with a mental illness. Clinically, from the outside looking in, my brain is deemed to be deficient in a certain quality we might refer to colloquially as “Presence of Mind” (if you will pardon the Spiritualizing metaphor). This is only PARTIALLY determined by my behaviour, which MAY OR MAY NOT be unethical, but which is not always “correct”, either politically or interpersonally, and often the two are one. By and large, for me to be deemed criminally “insane”, rather than just criminal, I must represent the World through a lens which is deemed to somehow be critically skewed. Perhaps, popular science suggests, were the organ of mental function to be altered chemically, the subjective factor would follow, and out of that a healthy, happy, and holy citizen shall emerge!! That is a tempting offer indeed, and it was just romantic and far-fetched enough for my late teenage self to give it a try, yet unfortunately it is just as Absurd and Self-contradictory as the woman who broke my heart to begin with and left me ranting and raving*.

The simple fact of the matter is this: that there is no reason to presume that someone will be healthy, happy, AND holy all at once, except by the Grace of God. Some people are none of the above; others: one of the above, or two. All told, there are as many as eight different combinations of personality created just from any three psychometrics. People can be happy and holy, but not healthy; they may be healthy and happy, but unholy. Et cetera. Even if one succeeds in changing my internal experience and hence my outlook and mood, there is no guarantee that I will behave in the manner that THAT PERSON (or my legal guardian) may wish, and insofar as the authorities judge me NOT to be an autonomous agent but a manipulable object, then they are NOT justified in passing MORAL judgement upon me, and “holy” becomes distinct from “obedient”, “predictable”, “compliant”, and “likeable”.

From a traditional, ethical standpoint, this much is a given, yet it is NOT a given if the distinction between my INTERNAL EXPERIENCE and my EXTERNAL BEHAVIOUR is not Respected. I would be more than happy to accept the consequences of my deviant actions, but only if I am tried as an autonomous agent who is in control of his own choices. Yet this right is denied me if, according to the peculiar manner in which I describe the World, you judge me to be INCAPABLE of responsible action. While the psychologist might not admit to “knowing what it’s like” to be schizophrenic, she will nonetheless behave as though she “knows better”, simply because those who tend to inhabit this ambiguous mental territory tend to behave in unpredictable ways which make sense only to them. Yet if one cannot JUDGE the experience which one does not have then one cannot judge the other for HAVING such an experience, and if one CAN judge that experience which one does not have, then it is only because that experience is by NO MEANS INFERIOR to one’s own. In EITHER case, one is left with the impossibility of using the other’s “madness” as an excuse for oppression, and of any bad behaviour the perpetrator can be judged only by the CONTENT OF CHARACTER: an EXTERNAL factor.

It is for these reasons that it does not matter whether I know “what it’s like” to have an abortion or not; in judging the woman for her actions, I am less misogynistic than those who otherize her experience. In judging the black convict, I am less prejudiced than those who speak of the “African American Experience” (not to be confused with the Jimi Hendrix Experience, which is available to anyone of any class, creed, or colour). In judging the villainous psychopaths in the film Parasite, I do not mean to say, “they are evil because they are poor” or even “because they are driven mad with feelings of inferiority”; I simply judge them to be evil because of how they devastate the few people who treat them trustingly. The External remains the arbiter, and it is to this – how OTHERS are affected by my actions, not how *I* feel – that I pledge myself. Yet this requires me to bring INTO Public life that which was previously presumed to be private: Logic and Intuition. When I express my opinions, I am not simply venting my own angst at the expense of how others are affected by my words; I am engaging in a collective discussion towards common moral ends. Without such a discussion, there can be no true “Society”.

 

*Clearly, I am being flippant here, but there is some unresolved baggage, and I WAS afflicted by her mistreatment.

 

As an addendum, I should refer back to a fleeting example: BoJack Horseman. Raphael Bob-Waksberg’s postmodern fable illustrates a man-horse who is neither healthy, happy, NOR holy, and his internal struggle is running a race with his external conflicts, and the smart money’s on the latter steed. Yet is BoJack a nightmare? No. While he is thoroughly an aesthetic character, he is also an Absurd Hero whose foremost nags are every bit as arbitrary, biased, and self-indulgent as he is. “What it’s like” to be him does not excuse him, but the Life of a Fish does not incriminate conclusively.

[({Dm.R.G.)}]