
Tuesday, 27 August 2024

A Terribly Serious Adventure: Philosophy at Oxford 1900-1960

A Terribly Serious Adventure: Philosophy at Oxford 1900-1960 by Nikhil Krishnan is a history of a way of doing philosophy that was admired by some, reviled by others, and unnoticed by 99.9% of the population.(1) If you know who J L Austin, Gilbert Ryle and Philippa Foot are, you will probably find this a page-turner. If you don’t, and you embark on it, you may wonder who are these people, and how did they get there?

The academic world, especially at Oxford, was very different in the first half of the twentieth century. An undergraduate who passed the legendary Greats examination in Greek and Roman history, philosophy and literature with first-class honours, who also made a good impression on their tutors, and was prepared to live the life academic, could more or less walk into a job at a university somewhere, the very best getting a Fellowship in one of the Oxford colleges. Tenure at twenty-three, no PhDs, no post-doc hell, no publication record, no pressure to publish anything, and no enormous student debt. Different days.

The people in this story were very clever boys and girls who could (usually) learn foreign languages quickly, construct arguments in the approved style, and were good “exam-passers”. They had no maths and less science, but many were more familiar with exotic European philosophers than their writing would suggest. Many of the earlier generation served in one back-room way or another in various parts of the Armed Forces during WW2 - J L Austin was in charge of intelligence about the German Army for D-Day - but afterwards made very little of it, partly because they were sworn to secrecy. (I have long suspected that some philosophers got their jobs as a reward for their war work - especially those at Bletchley.)

“Oxford Philosophy” was not influential because it was important philosophy, it was influential because it was philosophy from Oxford. It is almost impossible to appreciate how insular mainstream culture was in the UK in much of the twentieth century. It was dominated by a handful of institutions: Oxbridge and their university presses, the BBC, a handful of publishing houses, The Times, The Times Literary Supplement, the Royal Ballet and Opera, the RSC, and a few more. What mattered was whether one held a position within these institutions, or was backed by those who did. Back then, the appointments were made on the basis of how a bunch of chaps felt about the candidates. The outside world would see that Oxford had appointed the man, and assume he was something special, because they believed Oxford was something special. Though he might have little originality or ability, on appointment to some venerable Chair, he would find his reputation back-filled and bolstered to match the status of the position.

The “Oxford Philosophers” were not mainstream cultural figures - except A J Ayer whose Language, Truth and Logic is still in print and selling, and Iris Murdoch, who became a well-known novelist. Otherwise a well-educated electrical engineer would have had no idea who Gilbert Ryle, J L Austin, Peter Strawson, Philippa Foot, Elizabeth Anscombe and others were, let alone have read the few things they published in their lifetimes.

So what was the fuss about? Ostensibly, it was the proper techniques for “doing philosophy”. The Oxford Philosophers claimed that many of the questions and confusions in philosophy could be “cleared up” by paying careful attention to the concepts involved. The same view was pushed by Wittgenstein at Cambridge, and by the Vienna Circle and its associates. It wasn’t the thesis that was objectionable: it was the method. This made two claims. The first was that careful attention to our everyday language would show that the philosophical problem was really a kind of confusion or misunderstanding.

On this, nobody will deny it is always useful to start any discussion of a concept by checking the dictionary and Wikipedia, to make sure that other people will be thinking about the same thing we are when they hear the words we use. From there, it helps to have a few paradigmatic examples of the concept at work, with some compare-and-contrast to locate it in the conceptual landscape. If there is a group of people making money in some way from the concept, we need to understand who they are and what they are being paid to provide. If there is legislation that uses the concept, we need a glance at that. If it has a history, it might help to read that. If the concept has a technical use in a science, we need to decide how to treat that. If it is being wielded by activists, we need to be aware of that, if only to avoid being distracted by their controversies. Is the concept unique to our native language, or does it have equivalents in other languages, and if it doesn’t, how do they manage without it?

But this is not what the Oxford philosophers meant. Their second claim was that ordinary language, as spoken by the kind of people who get a First in Greats, contains all the concepts and distinctions we need to clear up the confusion. There was no need for appeal to scientific theories, because there were no empirical claims involved in defining the distinctions and concepts. And there was no need for philosophical theories, because, well, all we need are ordinary-language ideas, which are held to be a-theoretical. (Nobody said that last bit out loud, even though the idea that language provides and constrains our conceptual resources, and hence, amongst other things, our ability to make distinctions and analyses, had been around for at least eighty years.)

This, combined with what many felt was a smug and parochial tone, was the reason many other philosophers felt Oxford Philosophy trivialised philosophy. It wasn’t that the Oxford people were wrong, it was that they were shallow. Iris Murdoch said in a review of Ryle’s The Concept of Mind:
[It] evokes a picture of a world in which people play cricket, cook cakes, make simple decisions, remember their childhood and go to the circus; not the world in which they commit sins, fall in love, say prayers or join the Communist Party.
Murdoch’s point is that the force of Ryle’s arguments depends in part on the blandness of the examples. We might be willing to make a distinction between this and that when applied to frying an egg, but not when applied to spray-painting shop windows in protest. Nuances that can be heard when the moral volume is low are lost when the moral volume is turned up, and other sounds become audible. We can accept Ryle’s arguments, but feel that, somehow, the conclusions have a very limited application in our coarser, more chaotic, lives. (Krishnan makes the point that an adequate philosophy must apply to lives in which we both play cricket and commit sins, cook cakes and fall in love, remember our childhoods and say prayers, and go to the circus while members of the Communist Party.)

It was not and still is not my preferred way of doing philosophy. Detailed, tortuous arguments picking apart (as it might be) Jones’ view of Smith’s account of excuses and reasons, leave me feeling uninformed and slightly dizzy. I don’t care that Jones’ views are full of holes, and Smith’s aren’t much better. I want an account of excuses and reasons that learns from the mistakes of those who came earlier, and doesn’t show off its erudition by burdening me with a list of those mistakes and why each is wrong. Unfortunately, it was the preferred way of doing philosophy at the university where I was an undergraduate: I barely survived long enough to make it to the LSE.

The results of an ordinary-language analysis, of some concept being grossly abused by a (as it might be) psychologist in their pop-science book, can be useful for dispelling the confusion and wrong conclusions created by the abuse. But the results must be used in passing, as simple facts, and their source never mentioned, for fear of boring or puzzling the reader. Ordinary language analysis is one of many items in the philosopher’s tool-belt, to be used when appropriate. One should never display one’s philosophical tricks and techniques. Nobody is interested. They want to know about the subject.

In the end, that might have been the reason Oxford Philosophy attracted so much hostility. It was too much about itself, as the work of very clever people can be, its topics chosen not because they were interesting to us, but because they showed off the method and the cleverness.



(1) The population of the UK was about 55m at the time. 0.1% of that would be 55,000 people, and that’s an overestimate. Don’t forget there were only around 30 universities in the UK back then. At 40 people per university per year doing philosophy (about the size of my undergrad class) over 30 years, that’s 36,000 students, plus (say) 500 lecturers and professors.
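For anyone who wants to check the footnote's back-of-envelope arithmetic, here it is in a few lines of Python (the figures are the post's own rough assumptions, not census data):

```python
# 0.1% of the population = one person in a thousand.
uk_population = 55_000_000
upper_bound = uk_population // 1000          # 55,000 people
print(upper_bound)                           # 55000

# 40 students/year/university x 30 universities x 30 years, plus ~500 academics.
students = 40 * 30 * 30
lecturers = 500
print(students + lecturers)                  # 36500
```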

Tuesday, 20 February 2024

The Lockdown Policy Test

I propose the Lockdown Policy Test. A policy supported or promoted by anyone who also supported lockdowns, masks, social distancing, the Rule of Six, or other Covid measures, is most likely to be as economically damaging and socially disastrous as any of the Covid measures. After all, if they were dumb enough, or weak-minded enough, to fall for the obvious stupidity of Covid policies, they will probably fall for other dumb policies as well.

Since the House of Commons, the Civil Service and Local Government are still almost entirely populated with the people who voted for and imposed the Coronavirus Act, and the media is still populated by journalists who went along to get along, and the Universities are still full of academics who stayed silent rather than risk losing their grants…

…we can dismiss just about any policy or issue that any of them are pushing, from the so-called “climate emergency” to sending illegal immigrants to Rwanda, and from Diversity and Inclusion to Low Traffic Neighbourhoods, electric cars, zero-carbon, and yadda yadda yadda.

Judge the quality of a policy by the quality of the people, regimes, and societies that adopt it.

Because now and for the next ten years, we will have a test to judge the quality of the people: did they go along with the Lockdown measures?

Friday, 13 October 2023

How Good Times Make Weak Leaders

Remember that saying: Good times elect weak leaders; weak leaders make bad times; bad times elect strong leaders; strong leaders make good times? Let's start by discussing good and bad times.

These apply to the personal and professional lives of the upper managers, administrators and policy-makers (to include the elected legislators) of the major social, media, cultural, State, political and business institutions. Ordinary people can be suffering financial crises, unemployment, dramatic changes in the labour market, and all sorts of other stuff, or of course none of that, and it doesn't count. As long as the upper-middle class (roughly) is having a cushy time, those are "good times". In the UK, that was from the passing of the Maastricht Treaty to the end of 2015: The Second Belle Epoque. Their professional lives were easy, their dominant assumptions about society, culture and economics were unchallenged. China and Russia were behaving themselves, and the EU made travel easy, and legislation even easier - all one did was tweak whatever Brussels threw out.

Your kids can't afford a place of their own, that's just the economy. A journalist's kids can't afford a place of their own, that's a serious flaw in the housing market.

If life gets too hard for the Rest of Us, we will start to object, misbehave, go on strike, and make the lives of the UMC (upper managerial class) difficult. That gives them an incentive to make sure that life isn't too hard for the common people.

We can complain about the economy all we like, but one thing we must not do is question the UMC's assumptions about the society, political institutions, and culture. That is perceived not as a threat to their survival - that would be mere economics - but as a threat to their vision of themselves as Good People who deserve their privilege as a reward for their Goodness. The form that Goodness takes can vary from decade to decade, but since about 1990 it has been about having Broadly-Left social views and ideals. Before that, it was about having Broadly-Right ideals. Challenge whatever is their claim to moral superiority and you threaten them with the disintegration of their identities. In Good Times, the UMC is complaisant and herd-like, and jolly comfortable it is too.

Let's turn to what leaders are. A 'leader' in this discussion is someone who gets to set policy in a particular institution, so that following that policy protects us from sanctions imposed by that institution. A strong leader can bring people along with them, and isn't scared of imposing sanctions: a weak leader is unconvincing, and won't impose sanctions. (Yes, this applies to street gangs as well as Governments.) 'Leadership' is contextual: someone can lead in one institution, and follow in another.

Leaders depend on holding an institutional position, and one gets to be a leader by occupying one of those positions. Having got there, it's up to the incumbent to do something, or collapse exhausted by the climb up the greasy pole.

Most of the rest of the people in the institution will follow a strong leader - though some will resist - or they will goof off if they spot a weak role occupant - though some will throw themselves behind policies they see advantage in.

Where do the strong leaders come from in the bad times? They were there all the time, but they weren't attracted by the jobs in politics, the upper reaches of public administration, and other high-profile institutional roles. In the good times there is too much go-along-to-get-along. Too many third-class people. Too much consensus. So the strong people go to where their qualities of character can be useful, or they find a lucrative niche somewhere and enjoy the decline.

Where do the weak leaders come from in the good times? They were there all the time as well. They didn't want the jobs when times were tough, and they wouldn't have been chosen anyway. But when times are good, suddenly good chaps who go along with other good chaps are exactly what seem to be needed. Strong-minded people are all very useful, but they can be a nuisance. In good times, we need co-operation, not conflict. Weak people love co-operating. There's nothing wrong with co-operating, as long as it's with people who share your goals. 'Co-operating' with people whose goals conflict with yours is called 'giving in'.

It's possible for one institution to have strong leaders, while another has weak ones, at the same time. Think of Sweden in 2020: a weak Government of consensus-driven politicians who fortunately were not in charge of public health policy. Anders Tegnell was, and he turned out to be nobody's go-along guy. Sweden was the only country that did not succumb to the hysteria.

One way weak leaders damage their institutions is by failing to fight back against strongly-led activist groups advancing avant-garde goals that threaten the current aims and values of the institution.

Weak leaders can be distracted by internal disputes and high-profile non-issues. This is what happened to the British Parliament between 2016 and 2021 (Brexit) and the US Government between 2016 and 2020 (the wonderfully named 'Trump Derangement Syndrome'). It's no coincidence that various avant-garde activist groups made so much progress with their causes during that time, or that the UK and USA Blobs started taking on lives of their own.

How do the required strong leaders get back into the institutions when they are needed? In the UK, it's not by coup or vigorous campaigning: it's by a slower process in which the people who select and elect the candidates for key positions decide that the current lot are a bit wet, and some drier people are needed. A major donor to an activist organisation decides it no longer advances their goals (it may have become a liability to their social standing or business interests, for instance) and withdraws their money. A Board of Governors decides the last CEO got on perhaps too well with everyone, and now they need someone who can focus on the business needs. These decisions will be made against the backdrop of what the various people sense to be a prevailing sentiment amongst the public - whatever that 'public' might be.

That mechanism relies on the general population containing a range of views on almost everything: this is why enforced consensus is a liability. A variety of views is needed, so that when the time demands this or that view, there will be people ready to explain, publicise and propose ways of implementing it. If everyone thinks, or makes a show of thinking, the same, when circumstances demand a response outside the permitted range, that society will fall victim to those circumstances. This is all basic On Liberty.

The idea that society consists of homogeneous 'Generations' is an artefact of the media and academic obsession with certain institutions that are able to impose the appearance of a high level of conformity on the behaviour and opinions of their staff. As soon as the institutional control slips, so does the conformity.

Friday, 29 September 2023

What is Jazz (Again): Laufey, Adam Neely, Andy Edwards

What is jazz, and why does it matter? Can we define jazz in such a way that it does matter if something is or is not jazz?

That's effectively what the National Endowment for the Arts did back in the 1970s when it decided that jazz was America's Classical Music, and started handing out grants and awards. Stanley Crouch and Wynton Marsalis locked the NEA into a definition of jazz as a) swing, b) blues, c) improvisation, d) in a pre-1965 style. Here's the list of NEA Jazz Master Fellows since the programme started: https://en.wikipedia.org/wiki/NEA_Jazz_Masters. All great players, all started before 1965, a list that includes Ornette Coleman, Sun Ra and Cecil Taylor, who are avant-garde. Nope, there aren't many white names on the list, but then that's probably statistically representative of jazz musicians.

So maybe jazz is whatever the NEA says it is, and they have the money and publicity to prove it. In the same way the teachers at Berklee, Juilliard and all the other jazz schools get to say what jazz is, because they set the syllabus and award the credentials for a "degree in jazz". Both institutions adopt the Crouch-Marsalis definition.

Never argue with institutional doctrine: nobody is going to give up their income and status over a point of logic or a matter of fact. Change the subject: hit 'em where they ain't.

Let's do that. Because the heck with institutions. 

For Adam Neely, well-trained graduate of Berklee, jazz is a well-defined cultural practice, gate-kept by academics, the NEA, and some music industry figures. For Andy Edwards, West Midlands drumming legend and epic YouTube ranter, jazz is about creativity and technical accomplishment in the service of freedom and experiment. Which is why he fights for the word.

Sir Karl Popper told us not to fight over words. Fight for your right to party, but not over whether to call it a party.

The party is individual improvisation while playing as a member of a band, within self-imposed limits that might be about chord progressions, modal changes, tunes, or the style of a genre. That genre might be the Blues, Hard Bop, Be Bop, Cool, Modal, Time No Changes, Flamenco, or whatever else (even ghastly chord-scale).

It's about developing your own voice, and being able to find others whose voices fit with yours; it's about producing music that (some) people appreciate and want to hear, without turning into a hack. The material doesn't need to be original, but the expression needs to be sincere: a tribute band can do this, if they love the music they are playing.

Between (about) 1930 and (about) 1966, nobody partied as hard as a handful of men who gave us some of the most sublime, hip, and swing-ing-est music ever played. From Louis Armstrong through Lester Young and Charlie Parker to Miles Davis, Gerry Mulligan, John Coltrane and Charles Mingus, to name a few. It was the chosen music of the misfit, the hip, and people who wanted to stay up late drinking. It was a fabulous moment, but it passed, as all fabulous moments must do. And we have it on record.

Does it matter what "jazz" is? If you're after that sweet NEA moolah, or the recognition of a bunch of old guys and academics, or playing at venues or for records labels which are snobby about these things, then yes. Otherwise NO, it does not. If you're a professional musician, what matters is making money and enjoying what you're being paid to do. If you're an amateur, what matters is that you can have a good time playing with some people who aren't totally weird. And if you're a, uh, home musician, what matters is that you get out of playing whatever it is you want to get out of it.

Tuesday, 19 September 2023

Data is Expensive, Conclusions Are Cheap: How To Fix Research Fraud

It's probably just my echo chamber, but I've seen a number of YouTube videos on scientific fraud recently. This does not shake my faith in Quantum Mechanics, because this isn't happening in real science. It's happening in psychology (evolutionary or otherwise), behavioural economics and other such pseudo-subjects with lousy replicability, and a tendency to pass off small samples of undergraduates as sufficient data. I've read my share of pop-science from these people, and while I've been amused and intrigued, I've never been convinced. The samples are too small. The conclusions are too darn cute, and fit way too well into the current academic Goodthink. Also a lot of it is just plain wrong.

What does one do about all this nonsense research?

Realise that statistical analyses, summaries, graphics, and conclusions are cheap.

It's the data that matters.

Any research project funded by the taxpayer must make its raw data publicly available, along with a detailed description of how the data was obtained.

With no controls over access. In CSV format so we don't have to write complicated scripts to read it.

And at no charge. We already paid with our taxes.

Give us the data, and we will draw our own conclusions, thank you. Research will become valuable because it produces data that people use.

Not because some publicity-savvy academic produces an eye-catching claim.

The infamous thirty-undergraduate sample will simply vanish.

Researchers who provide lots of dimensions of analysis that can be correlated with ONS data will get readers; those who use a few that maybe can't be matched against anything else will be passed by.

It works like this.

Hypothesis: children from single-parent families do better at school than children from two-parent families. 'Do better' means more and better grades at GCSE. So get a sample of single-parent households with kids who just did their GCSEs and another of dual-parent households with kids who just did their GCSEs. Same size, as there are plenty of both.

Recognise that the initial question is attractive but silly. It's the kind of question a single-purpose charity might ask, and if it liked the answer, would use in their next fund-raising round.

"Single-parent homes" are not all the same. Neither are "dual-parent homes". Families are all different. And they are an effect, not a cause. Parental behaviour, sibling examples, household economics, the location, the religion and the culture are causes.

Here's your chance to get some data-kudos.

Get a decent sample size. 10,000 or so of each.

Get the results for the kids. Grade by subject. With the exam board. No summarising or grouping. I've got a computer to do that if I want it.

And get the number of GCSEs the kids were entered for, because Head Teachers game the stats like crazy. While you're doing that, find out how else the Heads game the stats.

Get the details about those households. Age, religion, nationality, gender, political allegiance if any, car owner, rent / mortgage, highest level of education reached by parent(s), subject of degree, employed / self-employed / unemployed / retired / not able to work, occupation if working, postcode (all of it), place of work, large or small employer, private or public sector. Income and sources, expenses and spending patterns. Savings. Help from relatives. Drug use. Exercise regimes.

How long had the parents been divorced before the GCSE exams? How long had they been co-habiting or married? What are the childcare arrangements? What are the visitation rights? How often are these denied? Has the divorced partner lost touch with their children? Are the divorced parents still co-operating with each other over raising the children? Has the custodial parent moved home? How far away are the parents from each other? Was a family member in jail when the kids were taking the exams? Is the father in the dual-household away a lot? Do any of the parents work unsociable hours? Do they use daycare?

See how that data could be interesting to certain groups? Even if they weren't interested in GCSE results?

Did the parents arrange private tutoring? Help their children with their homework? Do the children have long-term health problems? Did they have health problems at the time of the exams? Were they able to revise? What is the school's record in the league tables?

You get the idea. Ask a wide range of detailed questions to cover the vast complexity of human life. Notice when a colleague demurs at something that allows the data to show the influence of (enter taboo subject here). Find somewhere else they can be useful and send them there. Do the same to yourself. The question you resist the most is the one everyone wants answered.

Test the questions. Test the interview process and the online questionnaire (if you must). Do A/B layout and question-order tests. Learn and make adjustments.

Now go out and ask the questions. Tabulate the answers. No leaving anyone out because they missed a bunch of answers. I can deal with that in my analysis. No corrections for this or that. No leaving out the answers to some questions because of "sensitivity" or "mis-interpretation".

That's where you put in the effort. If too many people give incomplete answers, go recruit some more people. Compare those who gave complete(r) answers to those who didn't, to see if there's a pattern.

Put the raw results up on Github or wherever. Along with the questionnaire, the times and dates of each interview, and a video of the whole thing if possible. I want to see their body language to judge which questions are likely to have, uh, aspirational answers. (Okay, that's asking a lot.)

I'll do my own analyses.
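To give a flavour of what "doing my own analyses" looks like once the raw CSV is public, here is a minimal sketch using only the Python standard library. The column names (gcse_passes, private_tutoring) and the inline data are hypothetical, purely for illustration - a real run would read the researchers' published file:

```python
import csv
import io
from statistics import mean

# Hypothetical stand-in for the researchers' published raw CSV.
raw = io.StringIO(
    "household_id,gcse_passes,private_tutoring\n"
    "1,8,yes\n"
    "2,5,no\n"
    "3,,no\n"        # incomplete answer: kept in the data, not silently dropped
    "4,9,yes\n"
)
rows = list(csv.DictReader(raw))

def passes(row):
    # Treat a blank answer as missing rather than discarding the whole row.
    return int(row["gcse_passes"]) if row["gcse_passes"] else None

tutored = [passes(r) for r in rows if r["private_tutoring"] == "yes"]
untutored = [passes(r) for r in rows if r["private_tutoring"] == "no"]

print(mean(g for g in tutored if g is not None))      # 8.5
print(mean(g for g in untutored if g is not None))    # 5
```

The point of the sketch is the shape of the workflow, not the toy numbers: every row survives into the analysis, missing answers are visible, and anyone can swap in their own grouping.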

The researchers can publish a summary and conclusion if they want. With a keep-it-simple press release for the science journalists. The rest of us will dig into the data and draw our own conclusions.

The people who don't do data analysis can get some popcorn and follow the disputes.

Data financed by private money? Make it public or we get to treat it as self-serving.

Faced with some conclusion about medicines or human behaviour, ask if the raw data and research protocols are publicly available. If the answer is NO, or "you have to pay", dismiss the conclusion, because there is no evidence that you can judge for yourself. Without the data, we have to take their word for it or not, which means we need to judge their competence, honesty and career pressures. That makes it about the researchers, and it isn't. They may be insightful and honest, or they may be academic hacks. You can't judge that either. What you can judge is that they are hiding their data. If they are, it fails the smell test.

Tuesday, 15 August 2023

Wendy Wood's Good Habits, Bad Habits

This book is about using the power of habit to change your lousy diet, infrequent exercise, your weight, and many other such things. The secret is said to be habit-formation: turning whatever it is you need to do into something that you do almost without thinking. By the psychologist's own definition, the essence of habit is thoughtlessness.(1)

It's Behavioural Science. Replication of experimental results is poor, and the experiments, often involving small groups of American college students, are, how does one say this? Lacking in gravitas?

A Behavioural Science experimental result typically sounds something like this: 60% of the people who tried The Hack did 10% less / more of Whatever, while the other 40% did the same amount. The conclusion of the writer of the best-selling book is that you should do The Hack, because it works.

Well, yes it does and no it doesn't.

It works for organisations with large customer / user bases (say, a hospital, or a retailer). Running a campaign featuring The Hack results in the equivalent of a 6% increase in customers when the campaign is running. If the campaign cost is low enough, that may well be worthwhile. There are plenty of parliamentary constituencies with a margin of less than 6%. The Hack could sway the result of the next election - if it's that kind of hack.
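The aggregate arithmetic behind that 6% figure is a one-liner, using the stylised numbers from the example above:

```python
# 60% of people do 10% more of Whatever; the other 40% do the same amount.
# Population-level effect of running the campaign:
effect = 0.6 * 1.10 + 0.4 * 1.00
print(round((effect - 1) * 100, 1))   # 6.0 - "the equivalent of a 6% increase"
```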

It is no use to you or me as individuals. We don't want something that works for just over half of us, and then only by 10%. We need something that works for us, 5 days a week, 48 weeks a year (nothing's perfect). Alarm clocks, rather than putting the biscuits out of sight and leaving the fruit on view(2).

The assumption that what works on one scale (organisation dealing with a large population) will work on another scale (individuals) should have a name, maybe the Individuals-Are-Crowds Fallacy.

Hacks (bite-size bits of thought-lite behaviour suitable for habit-formation) can make it easier to achieve a goal: you can put your gym gear in the bag the previous day so you don't forget it in the morning rush. It's not going to lift the weights for us, though. We have to do that, and there are no hacks for making it easier. It's supposed to be difficult, or it isn't doing us any good.

Setting an alarm clock is a hack, and so is a To-Do List. You still need to follow-through: you could go back to sleep, and you could ignore the list.

How important are habits-and-hacks? The alternative is said to be willpower, which Behavioural Scientists say is a muscle that gets tired easily and recovers slowly. Except there never was any such psychological muscle.

One's will was an expression of a desired outcome (which is why legal Wills are called that) and by extension, one's will-power was one's constancy and determination to cause or achieve that outcome. Go too far with "willpower" and you wind up with "obsession", "stubbornness", and other Bad Things. Don't go far enough, and you're a quitter.

The important part is this: one is only expected to demonstrate it for something that matters, such as studying for and passing exams, defeating the Carthaginians, or losing enough weight so the insurance companies stop calling one "obese". No-one is expected to resist marshmallows, or keep their hand over a flame, except as a party trick.

Parents will go through years of sleepless night hell, sullen teenage hell, tired crying on the way home, hearing some Disney movie for the third time that week, and all sorts of other trials and tribulations, to raise their children. Because that matters. Excuse them if they put on a few pounds in the process.

People who are content with their lives and their physical, cultural, emotional and intellectual condition do not do things to change themselves(3). They do things they enjoy doing to enjoy being them.

People who do things to change their lives and themselves are in some degree ambitious or malcontent.(4) Maybe they noticed that all the senior female executives were blondes and decided to adopt the plumage. Maybe they looked at their gut in the mirror and thought "this can't go on". Maybe they just want to run the 10K a minute faster to be in the next class up. Maybe they saw their Saturday night drinking buddies from the outside that fateful evening, and realised what a bunch of losers they were. Maybe they thought that, at thirty, it was time to learn to drive. Or to stop with the late-night takeaways. Or whatever.

The Dirty Secret of making significant changes to ourselves and/or our lives is that it takes sustained effort, a sharp pair of social pruning shears, and motives that would scare the bejesus out of a therapist.

Somewhere in the margins of that is a place for hacky little habits: I like To-Do lists, but I don't get obsessive about completing them.

By all means flick through Behavioural Science best-sellers or even the academic research if you want to find suggestions for hacky little habits.

And if a Behavioural Scientist offers you a marshmallow now, or two in fifteen minutes' time... you're a busy grown-up: take the marshmallow now and keep the fifteen minutes for yourself.



(1) That's why "bad habit" is almost a tautology, and "good habit" is almost an oxymoron. 
(2) I've done that for years. Hasn't worked so far. Alarm clock works every time. 
(3) That doesn't mean they never move home, redecorate, or go to a different country on holiday each year. No matter where they are, they always take the weather with them. 
(4) There are also malcontents who don't do anything, also called whingers.

Tuesday, 30 May 2023

How To Translate Faraday's Law of Induction into Math

I know what you're thinking. What does he get up to that stops him posting promptly and prolifically? I wish it had something to do with Instagram models and / or staying up late making music via Garageband, but it is much more mundane than that. Here's a short passage about the translation of Faraday's Law of Induction into mathematical notation that I've been working on for far longer than you might think. If I've done my job well, it should seem obvious. (Some of the original \LaTeX has been butchered to accommodate Blogger.)

(starts)

Faraday's Law, more or less as stated by Faraday, is: the electromotive force around a closed path is equal to the negative of the time rate of change of the magnetic flux enclosed by the path. How does this get translated into mathematical notation? We need to know that the `electromotive force' is, in the case of magnetic induction, the work done on an elementary electric charge (such as an electron) travelling once around the loop. Work done moving along a path is always a line integral of the product of a force and a displacement (since `work = force times distance').

As a first step, we re-name those things as variables or constants:

let $\mathcal{E}$ be the electromotive force

let $B$ be the magnetic field (its flux through a surface is the integral of $B$ over that surface)

let $\partial A$ be the path, enclosing a surface $A$

let $ds$ be a small displacement along $\partial A$

let $E$ be the electric field

We can write down the equations quite easily if we are familiar with the vector calculus. Work done is given by the mantra `work = force times distance'. For a small displacement $ds = (dx, dy, dz)$ and a force $E = (E_x, E_y, E_z)$ the product is $E_x dx + E_y dy + E_z dz$ which is $E \cdot ds$ in vector notation. The work done along a line is the sum of such displacements along it, which is conventionally shown by the integral $\oint_{\partial A} E \cdot ds$, giving us $\mathcal{E} = \oint_{\partial A} E \cdot ds$.
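The `sum of $E \cdot ds$ over small displacements' picture can be sanity-checked numerically. This is a minimal sketch (in Python, my choice, not the original's): the field $E = (-y, x)$ and the target value $2\pi$ are illustrative assumptions, not part of Faraday's Law; the point is just that summing $E \cdot ds$ over small straight segments of the path converges on the line integral.

```python
import math

# Illustrative, made-up field E = (-y, x): its line integral around the
# unit circle in the xy-plane is exactly 2*pi.  We approximate it by
# summing E . ds over many small straight segments of the path.
def E(x, y):
    return (-y, x)

def line_integral(n=10_000):
    total = 0.0
    for k in range(n):
        t0 = 2 * math.pi * k / n
        t1 = 2 * math.pi * (k + 1) / n
        x, y = math.cos(t0), math.sin(t0)            # point on the path
        dx, dy = math.cos(t1) - x, math.sin(t1) - y  # small displacement ds
        ex, ey = E(x, y)
        total += ex * dx + ey * dy                   # E . ds for this segment
    return total

print(line_integral())  # very close to 2*pi = 6.28318...
```

Make the segments smaller and the sum gets closer to the exact value, which is all the $\oint$ sign is claiming.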

Now for the other side of Faraday's Law. Faraday thought of electromagnetic fields as `lines of force' - the more lines, the more force - and the flux of a field through an area was the number of lines of force through it. This was Faraday's way of thinking about line and surface integrals without having to actually use either.

The number of lines of force within a path is the integral of the (strength of the) vector field over any smooth surface enclosed by that path. (The `any' has to be proved, but it becomes intuitively obvious after visualising a few examples.) So if we take a surface $A$, divide it into non-overlapping patches $dA(n)$, calculate $\frac{\partial B}{\partial t}(n)$ for the centre of the $n$-th patch, and add the total, we get an estimate of the rate of change of the flux. Make the patches smaller, and we get a better estimate, which in the limit is the integral

$\mathcal{E} = -\iint_{A} \frac{\partial B}{\partial t} \cdot dA$

That can also be turned into a conventional double integral by substituting coordinates. Hence Faraday's Law of Induction is translated into mathematical notation as

$ \oint_{\partial A} E \cdot ds = -\iint_{A} \frac{\partial B}{\partial t} \cdot dA$

The left-hand side is the work done, and the right-hand side is the negative of the time rate of change of the magnetic flux enclosed by the path. This completes the translation of Faraday's Law into mathematical notation.

This is no more conceptually complicated than if we had translated, say, a passage of Freud from German to English. There is no word-for-word mapping between the two languages, and there are many concepts for which there is a German word, but not an English one, and one must attempt to explain the German concept in English. Using an integral to denote the result of a limit of finite sums is no more exceptional than using a derivative to denote the result of taking rates of change over ever smaller intervals.

We can use some maths to go further. By Stokes' theorem (which reduces to Green's theorem in the plane), assuming the fields are sufficiently smooth, we have

$\oint_{\partial A} E \cdot ds = \iint_A \nabla \times E \cdot dA$

So we can put

$\iint_A \nabla \times E \cdot dA = -\iint_A \frac{\partial B}{\partial t} \cdot dA$

which gives us immediately one of Maxwell's equations

$\nabla \times E = -\frac{\partial B}{\partial t}$

We can prove that, given the rest of Maxwell's equations, this is another statement of Faraday's Law of Induction.
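The circulation-equals-flux-of-curl identity used in the derivation (Green's theorem in the plane, Stokes' theorem in general) can also be checked numerically with a toy planar field - again an illustrative assumption of mine, not anything in the original. For $E = (-y, x)$ the curl has constant $z$-component $2$, so the circulation around the unit circle and the patch-by-patch sum of $(\nabla \times E)_z \, dA$ over the unit disc should both come out near $2\pi$:

```python
import math

# Toy field E = (-y, x), with (curl E)_z = 2 everywhere.
def E(x, y):
    return (-y, x)

def circulation(n=20_000):
    # Sum E . ds over small segments of the unit circle.
    total = 0.0
    for k in range(n):
        t0, t1 = 2 * math.pi * k / n, 2 * math.pi * (k + 1) / n
        x, y = math.cos(t0), math.sin(t0)
        dx, dy = math.cos(t1) - x, math.sin(t1) - y
        ex, ey = E(x, y)
        total += ex * dx + ey * dy
    return total

def flux_of_curl(n=400):
    # Divide the bounding square [-1,1]^2 into small patches dA and sum
    # (curl E)_z * dA over the patches whose centres lie inside the disc.
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = -1 + (i + 0.5) * h
            y = -1 + (j + 0.5) * h
            if x * x + y * y <= 1:
                total += 2 * h * h   # (curl E)_z = 2 for this field
    return total

print(circulation(), flux_of_curl())  # both close to 2*pi
```

This is the `divide the surface into patches $dA(n)$ and add the total' procedure from earlier, done by brute force; the two sums agree ever more closely as the segments and patches shrink.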

This is no more conceptually complicated than if, having translated the passage of Freud, we then drew a conclusion from the translation and some background knowledge that was not in the original, but helps us understand what Freud was saying. It just looks impressive / mysterious / difficult because it uses undergraduate maths.

(ends)

My thesis is that translating from a natural language into math notation is the same as translating from one natural language to another. It's just that maths is the language in which it is easier to see the patterns and make the deductions.

Tuesday, 11 April 2023

7 Philosophy Books For Beginners (4)

In the previous post, I suggested that Western Philosophy is an attitude. It does not accept authority, and reserves the right to examine anything at any time for any reason. It also commends that attitude to all of us.

How realistic is this, and how does it differ from scepticism and outright cynicism?

The law says that at eighteen we become adults, and are deemed to be competent moral decision-makers, except in certain cases of reduced capacity. An allowance is made for the ignorance and recklessness of youth, but only for minor offences. Most children know when they are doing something their parents might not approve of, which is why they are very quiet when doing so. People know what is right and wrong for most of the eventualities of ordinary life. It's at the edges that the judgements can become ambiguous.

Making moral decisions is something human beings (mostly) seem to be wired for. Making judgements about matters of non-everyday facts, or about the plausibility and verisimilitude of theories, seems to require technical knowledge and skill that only a few people might have. At some point, don't we ordinary people need to defer to the "experts"?

How does someone who left school at eighteen judge if String Theory or Quantum Gravity are plausible theories? Surely this is something only suitably-informed physicists can do? Not at all. Anyone who understands that the test of a scientific theory is that it makes new predictions that are confirmed, can ask one question to determine the value of String Theory. What has it predicted that has been confirmed? When? Where? What was the experiment? What was the prediction and what was the result? If an ordinary person is faced with evasions and odd-sounding claims that physical theories should be judged by different criteria, they will and should conclude that someone, somewhere, is hiding something.

One tactic is to reduce what looks like a highly technical issue to something within one's understanding. Some lawyers are very good at doing this, as they know they will need to explain the core issues to a jury. In the case, perhaps, of pollution by a chemical company, nobody needs a detailed understanding of organic chemistry. They need to know that a) many people suffered symptoms A, B and C; b) those symptoms are consequences of poisoning by substance X; c) substance X was found leaking into the groundwater from the abandoned drums which had the defendant's logo on them, and which the records in Exhibit A show were dumped by the company’s drivers. Nobody needs to know how substance X causes those symptoms, only that it does, and reliably and frequently so. Experts and specialists are not allowed to hide behind gobbledy-gook, and indeed, sustained use of gobbledy-gook and protests that, for instance, the law of financial fraud is too complicated for ordinary folk, are usually and mostly rightly taken as a sign that something is being hidden.

Another tactic is to examine the credentials of the "experts". In some cases, such as ballistics, these can be demonstrable and convincing. In others, such as virus-based pandemics on a supposedly "novel" virus, by definition there can be no experts, since it is "novel" and experience from previous viruses cannot be transferred. In these kinds of cases, expect "expert" status to be justified via the Fallacy of Misleading Credentials: a recital of impressive-sounding official positions, academic awards, research papers and previous appearances as an "expert", which on closer examination have nothing to do with whatever is happening now.

This sort of thing requires an understanding of how the world works. Philosophers in earlier centuries had plenty of this, as they were often advisors and private secretaries to members of the ruling class, and sometimes appointed to public office in their own right.

Nobody can question everything all the time. I can't, and neither could Descartes and Hume. Both recognised that ordinary life has to be supported by a web of beliefs held without question for the time being. However, one should always be prepared to question any of those beliefs if a cause arises.

One does not need to be sceptical or cynical to embrace the spirit of Western Philosophy, but one does need a healthy caution towards the claims of the established, the powerful, the dogmatic, the over-confident, those who claim to have Just and Right Causes, and anyone trying to sell anything; above all, one should never have any dealings with anyone or any institution which makes money as long as they don't solve the problem. Anyone who brands an argument or idea with a word ending in '-ism' is not arguing but throwing mud. Mud may be dirty, but it is not an argument. One should always remember that propaganda is what they want you to believe, and news is what they don't want you to know.

Tuesday, 4 April 2023

7 Philosophy Books For Beginners (3)

My 7 philosophy books for beginners, along with the back-up reading, is pretty hardcore. It's also definitely Dead White European Male, and none of it is post-1960s except the books on logic and argument.

Why?

The central tenet of Western Philosophy is that human beings have free will, agency, and rationality, and hence that we are responsible for our actions and decisions, and in particular for our decisions about the plausibility and verisimilitude of a theory or the practicality and desirability of a social, political or economic policy.

We cannot lay off those responsibilities to any temporal, spiritual, legal or transcendental authority. Such an authority can impose a decision by legal, physical, social, or economic force, but while that is an excuse for our compliance, it is not a reason. And we may have to behave in accordance with the authority, but whether we choose to accept their propaganda is our decision. Neither does “expert opinion” remove the responsibility: we have to use our experience to decide for ourselves whether the “experts” are credible.

Western Philosophy goes against the natural human tendency to want to form and join in-groups, to work within a cosy consensus, and to lay off as much responsibility as possible on (possibly self-appointed) "authorities". The majority of people prefer to live in that way, and that includes the majority of people working in the philosophy departments of universities. (Academics did not cover themselves with glorious dissent in 2020-2022.) This shows in the way much modern philosophy is written. In Anglo-Saxon (UK, US, Australia and New Zealand) academic philosophy, one does not discuss a problem directly, but indirectly through a rehearsal and criticism of previous philosophers' views. The modest philosopher typically presents their views as a modification or updating of the views of one of a handful of Big Names, or better still, someone quite obscure. It's all a bit... cloistered.

The foundational works, by contrast, were written by men of the world who often had some expertise in the science and mathematics of the time, as well as sometimes occupying positions of political influence. I have said that "mathematics was created by clever people busy doing something else", and the same was true of philosophy. So I wanted to suggest books of that calibre, not tidy textbooks with a bunch of cute arguments about the existence of God, Free Will, Right and Wrong, the existence and nature of the soul and / or mind, and whether Damien Hirst is really an artist. Philosophers have discussed those questions, and still do. (The only thing more embarrassing than philosophers discussing those questions, is non-philosophers discussing those questions.)

Books with dogmatic intent, that push a single line and vilify all who dare disagree, were never going to get a look in. Thomas Kuhn's The Structure of Scientific Revolutions is an argument for consensus and groupthink - even though Kuhn says he never meant it to be - so it would never be on the list. Neither were books full of clever arguments from dubious principles to even more dubious conclusions (Peter Singer, Practical Ethics), since that sort of sophistry gives philosophy a bad name.

Friday, 31 March 2023

7 Philosophy Books For Beginners (2)

Western Philosophy is a body of thinkers, problems and attitudes, and it divides into three main periods: the pre-Christian, the Christian, and the post-Christian. There are other traditions, with extensive literatures generated by the Indian, Muslim, Chinese, and Japanese cultures. We're not talking about those book lists.

With that in mind, here's my suggestion.

John Locke's An Essay Concerning Human Understanding. In the same way that modern science starts with Galileo and Newton, modern philosophy starts with Locke and Descartes. The French start with Descartes, the British with Locke.

K R Popper's Conjectures and Refutations. Irascible, insightful, full of himself and full of ideas and learning, Popper was (allegedly) a tyrant in the lecture theatre and a champion of dissent and criticism in his books. This volume covers a wide range of subjects and points to even more.

Aristotle's Nicomachean Ethics. The first encyclopaedic and systematic philosopher, and the inventor of formal logic, Aristotle used to be called The Philosopher by the medieval theologians. His thoughts on personal conduct and the organisation of the State remain relevant. He wrote for aristocrats, but they seemed to need the same lessons the rest of us do. In a modern translation, it is highly readable.

Adam Smith, The Wealth of Nations. Adam Smith was a philosopher who thought about economics. As a result, a lot of what he has to say is still insightful now. You will learn a fair amount about the economic conditions of the time as well, which is no bad thing.

Machiavelli, The Prince. Often thought of as the ultimate Bad Boy of Philosophy, Machiavelli has long since been out-Badded by Saul Alinsky, Rules For Radicals. But reading that made me feel ill.

Montesquieu, The Spirit of the Laws. Influenced by Locke's Two Treatises of Government, modern European political constitutions descend from Montesquieu.

Ludwig Wittgenstein, Philosophical Investigations. Utterly different from anything that came before or since, this is a record of a philosopher working through his thoughts on language, meaning and many other things. I can't think of another book that shows the messy process of almost arriving at conclusions so well.

As accompaniments, add...

...a history of philosophy. The classic is Frederick Copleston's eleven volume(!) set. A more recent one is Anthony Kenny's four-volume A New History of Western Philosophy. I'd suggest ordering one volume of each through your local library and deciding which style you prefer.

...a textbook on Logic. Try Siu-Fan Lee's Logic: A Complete Introduction

...a book or so on the arts of argument and detection of fallacies. Try The Art of Always Being Right (also published as The Art of Controversy) by Arthur Schopenhauer, and How to Win Every Argument: The Use and Abuse of Logic by Madsen Pirie

...a book about the use and abuse of statistics.

Some random remarks:

Plato. Yes he was the first to go into print. Yes a lot of his arguments are set-ups. Try it, and if you like it, by all means read more.

The Stoics. Seneca was the Roman equivalent of Jeff Bezos. You're going to take life advice from Jeff Bezos?

Kant. More people read about Kant's ideas, than read Kant's ideas. He's a tough read. One for later.

Hegel and the German Idealists. These guys could not write clearly, and that's being polite. After you have dealt with the idiosyncratic vocabulary, you have to deal with the idiosyncratic ideas. Ones for later.

Heidegger, Merleau-Ponty, Sartre, Jaspers, and the other phenomenologists. Read these guys after you have read the empiricists. Then you will understand the problems they are trying to solve.

Zizek and the cultural theory guys. This isn't strictly philosophy, but if you're in the mood, it can be fun.

Any pop-culture book. No. Just no. These are the equivalent of McDonalds or Mars Bars. Quick hit, no lasting effect. Your brain cells will rot.

Books in series from Routledge (publishers) and others. These can be useful introductions, but tend to present the subject as a neatly-wrapped package of ideas and arguments. What we don't get is the sense of someone thinking about the underlying concepts and problems at first-hand, and that's what we are after.

Tuesday, 28 March 2023

7 Philosophy Books For Beginners (1)

Okay. The title is silly. But I took it from a YT video. So there's that. It's by an American PhD who has since left the academic world. He regards philosophy as a body of arguments, ideas and texts with which a student must become familiar so that they can join the Philosophers Union Local 305 and take their place as a socialised member of the profession. That's a fairly recent conception of philosophy, which fits in with the bureaucratisation of the academic world.

By contrast, the Big Names thought of themselves as trying to answer a bunch of questions, both constructively by creating new theories, and critically by examining previous theories. Those questions are (roughly):

What Can We Know? (Epistemology) 
How Should We Live? (Moral Philosophy and Wisdom Thinkers) 
How Should the State be Governed? (Political Philosophy, Legal Philosophy) 
What is Beauty and Art? (Aesthetics / Philosophy of Art) 
What is the World Made Of? (Metaphysics) 
How do we argue correctly (Logic) and how do we spot bad and deceptive arguments (Rhetoric) 
Free Will 
The Existence of God(s) 
Minds and Bodies 
Freedom, Rights and Obligations

All of those are still open questions. There may never be "final answers". The point is the development and criticism of (preferably ever-improving) theories about those things. Physicists resort to epistemology and metaphysics when the going gets tough. Lawyers debate the justification for laws, and what kind of things can or should be subject to law. Standards of beauty have changed throughout history, and today are politicised, or perhaps, marketing-ised.

In addition, there are "philosophies of": attempts to describe and understand the assumptions, practices, knowledge-claims, and justifications of a number of subjects: for instance, Art, Science, Mathematics, Law, Politics, and Language.

If you don't see why these are problems, or if these don't sound interesting, then feel free to leave philosophy alone. I'm not interested in chemical reactions, so I didn't do a Chemistry degree.

There are four types of answers to these questions:

Classical Greek and Roman: Aristotle, Plato, St Augustine... 
Theological / Medieval Philosophy: St Anselm, St Thomas Aquinas, Abelard... 
Worldly: Descartes, Locke, Hume, Adam Smith, Karl Popper, Wittgenstein, Carnap, Bachelard, Heidegger, Merleau-Ponty... 
Political: Foucault, Derrida, Judith Butler, Karl Marx, Lenin, Avital Ronell, Slavoj Zizek...

None of these are definitive. All contain assumptions we can produce reasons for disagreeing with, or arguments that don't quite compel the conclusion. Examining the assumptions and arguments, and developing one's own answers, is what creative philosophy is about.

The philosopher’s tools are propositional and predicate logic; statistical inference; rhetoric; and the myriad frauds, deceptions and fallacies used to befuddle and confuse us.

A philosopher can never have too much knowledge of the societies and economies of the world and their history. St Thomas Aquinas' thesis of the just war needs to be read in its historical context: there were no atomic bombs, drones, or sniper rifles that could kill at two miles then. But war was still bloody, and killed at about the same daily rate as a modern conventional war. Philosophers who don't brief themselves on the historical circumstances of a writer are doomed to make some silly comments.

Tuesday, 4 October 2022

Where Is Everyone? The Empty Universe Problem

Here's a nice video I stumbled across, about the perennial question of why ET hasn't visited us yet.



Here's another kind of answer: look at our own planet. There are / have been a number of major cultures / civilisations. At the start of the 19th century there were the Japanese, Chinese, Muslim / Arabic, Hindu and other Indian, African, South American, Native American, Aboriginal, plus smaller civilisations on ocean islands.

I may have missed one, but I'm sure we would still have had steam engines, dynamos, AC current, penicillin, powered flight... what's that?

Some of those cultures (Japanese, Chinese, Arabic, Indian) at one stage or another deliberately decided to stop developing? And the rest simply didn't have the resources to develop? It was only the unwashed, disease-ridden, war-inclined, Europeans (counting the Western Russians as European) who developed advanced science and technology? And not all of them at the start.

That's the other answer.

The Universe is full of other civilisations. Most of whom are still struggling to survive on planets with even more marginal environments than ours (and most of our own is only good for the fishes, and a lot of the rest is sand, rock and ice), while the others at some stage decided to stop with all this intellectual development lark. It's a very popular political policy for the ruling class: wait until the circumstances are nicely beneficial for the rulers, and set everything in aspic forever. As long as everyone on the planet does that, it's going to work. Feudal bucolic bliss forever.

The question isn't where is everybody, it's what makes rulers tolerate revolutionaries and even take up the new ideas?

Never mind being alone in the Universe. Imagine if we weren't, and then found out that everyone else was pleasant but didn't have one idea to rub between them?

What do you mean: you don't need to imagine that?

Tuesday, 28 December 2021

The Anthropic Principle (Again)

Apparently Ed Witten has abandoned all rational thought about the fundamentals of the Universe and embraced a version of (gasp!) the Anthropic Principle. At least that's how Peter Woit sees it.

The Anthropic Principle is an answer to the question: why are the fundamental laws of physics, and the values of electron mass, charge and the other fundamental constants, so nicely tuned to make it possible for human life to appear?

The Anthropic Principle says, very crudely, that if they weren't, we wouldn't be here. To stop that being a tautology, it is taken to mean that the values of the physical constants are not compulsory. There are many values the fundamental constants could take, and most of them lead to a Universe that would be hostile to human life. We might be able to show more, which is that a Universe that started off with one or more fundamental constants that were very different would somehow never really get started: it might never cool down enough to become transparent, or it might fly apart because the force of gravity was too weak... there are all sorts of reasons. This would show that if the Universe was stable at all, it would have to be life-friendly.

The Non-Anthropists want the Laws of Physics to be such that only Universes fit for human life can and must form, and only those Universes.

There are seventeen or so fundamental parameters in the Standard Model, and none can be derived from any of the others. The Non-Anthropists are claiming there is a set of as yet unknown Laws of Nature / Fields / Particles, without any arbitrary numerical parameters, that in turn determine the fundamental parameters of the Standard Model. After decades of work by some of the smartest people ever to walk the planet, we are nowhere near such a theory.

Suppose we did find such a set of fundamental-constant-determining laws. Would this answer the Non-Anthropists' question?

It might. But some ten-year-old would perk up and ask: why those laws? Why not others?

The infinite regress of ten-year-olds' questions.

So there has to be a point at which we say "ENOUGH" about explanation, even in physics. I can safely say that any phenomenon that requires 10,000 engineers, a 13 TeV, 27-km accelerator, plus hundreds of hours of statistical analysis to find, will not be used by any medical equipment manufacturer. Or anyone else. For all practical purposes, the Dirac equation and its associated particles are "ENOUGH".

This is really the Non-Anthropists' problem. They want mo' research: to abandon smashing ever-higher energy beams of hadrons and finding no "new physics" year after year would be some kind of abandonment of the Human Project. Like not subsidising contemporary composers whose music is read more than it is performed. (Apparently actually performing one's work is passé. The Kool Kids pass around their latest compositions as MIDI files by e-mail.)

Hope springs eternal in the Non-Anthropists' breast. Next year someone may discover the Missing Laws / Fields / Particles.

I'm not saying they aren't there to be found. I don't know.

I am saying that, if we did find them, it would not help us reduce our carbon emissions, or whatever Liberal causes Non-Anthropists espouse. It would not cure cancer, or create a universal vaccine.

I guess I'm saying we know ENOUGH fundamental physics to work on all the other problems we need to solve.

Friday, 26 November 2021

Philosophy of Mathematics - Number Theory

Off in another part of my thoughts, which have been on hold for a while, I have been trying to work out some ideas on the philosophy of mathematics.

I have two theses. One is about the relationship of abstract mathematical ideas to various types of measurement or geometric properties. If you want to know how the various derivatives on curved spaces arise from the simple issues of co-ordinate changes, it's all there. The other is a methodological thesis, that the purpose of mathematics is to provide tools and techniques to solve problems that arise from modelling physical and other processes, and to understand the scope and limits of those techniques. Creating and solving the equations of the mathematical models is what's usually called "applied mathematics", while understanding the scope and limits of the techniques is a lot of what's called "pure mathematics".

And then there's Number Theory. Which is about numbers. Not mathematical models.

You know that Langlands thing that all the Kool Kids are working on?

Yep. Number theory. Finite field number theory at that. Geometric Langlands is even more abstruse.

It takes genius-level insight and technique to understand the more recent developments in Langlands. That's the point: if the specialists can barely follow it, how is it going to be any use to some poor post-grad working on differential geometry at the University of Ennui-sur-Blase?

The social purpose of mathematicians is to teach other people - physicists, statisticians, epidemiologists, computer scientists and programmers for example - how to use the problem-solving techniques mathematics offers. What mathematicians do in their spare time is their business: they need a decent laptop, a whiteboard and some paper and pens: math is cheap compared to fundamental physics.

The Langlands guys can do what they want in their spare time. But it's a rabbit-hole. Maybe it's a big, well-lit rabbit-hole with all the health and safety gear and plenty of mechanical digging tools, but it's still a rabbit-hole. Unlike some of the rabbit-holes mathematicians have buried themselves into (functional analysis, for instance), Langlands is not going to produce anything useful to regular working stiffs (for instance, functional analysis produced the theory of weak solutions to differential equations, which is very useful). I feel confident saying that because Langlands is about structures the rest of mathematics just doesn't use.

(Rabbit-holes are as opposed to specialisms, which are very specific subjects that have useful applications in the real world or other parts of maths with real world applications. Like research in PDEs.)

Maybe "rabbit-hole" should be a term of art in methodology. It's a line of research that has no obvious application to any existing problems or in other branches of maths. The scientific version would be a research programme that was making theoretical progress but no empirical progress (was not making new predictions). A rabbit-hole may branch up to the surface every now and then, as applications to problems in other branches of maths are found, but generally once dug, the researchers dig away happily underground.

In this case I would be saying that Number Theory was a mathematician's pastime, and that other very abstruse, or very off-beat, programmes are, for all their sophistication, esoterica for the aficionados. Which doesn't sound too dramatic.

Thursday, 7 October 2021

Situationism: Why?

I made the mistake of re-reading a book about the Situationists recently (The Beach Beneath the Street: The Everyday Life and Glorious Times of the Situationist International by McKenzie Wark). I'm going to explain why I did this so you don't have to.

Guy Debord, The Society of the Spectacle, psycho-geography, dérives, détournements and potlatches. A lot of their best jokes wound up as graffiti on Parisian walls in 1968.

I still didn't get it. What were they complaining about, exactly? What we used to call consumer society back in the 1960's? The Invisible Committee complain as much, forty years later, about self-improvement and (what amounts to) the ubiquity of the media conglomerates. What is it with French intellectuals and pop culture?

Something about pop-culture in the 1960's made Guy Debord think something new was happening? Organisations were starting to understand how to manipulate the news media. There was more advertising and it was more eye-catching. Even though the Beatles reminded us that money can't buy me love, the Sunday supplements were telling us that some nice new furniture would sure make life more comfortable and stylish. Pop-culture might have been trivial or merely amusing in the past, but now, Debord seemed to be suggesting, it was being used to alienate ordinary people from each other and from a sense of community and commonality. For the nefarious purpose of making Capitalists richer.

Seems to call for a revolution of some sort. For French intellectuals at the time, that could only mean a political revolution. Wait. Didn't the Russians try that? And it didn't really work out too well. The Chinese weren't doing so well either, for all the hero-worship of Mao Tse-Tung. Political revolution without an accompanying social revolution had proved to be a case of 'meet the new boss, same as the old boss'. Political revolution was no longer possible, but without it, all other forms of 'revolution' are mere changes of fashion. Quite the corner to paint oneself into.

Nevertheless they felt that one has a duty to do something to protest, undermine, and generally not be so damn gung-ho about Capitalism and all its works. Hence the celebration by some French intellectuals of la perruque (otherwise known as 'skiving' in English), of minor acts of sabotage, of not going along with the system, petty thefts of time (visiting the dentist in work hours without 'making up the time') and other resources (searching for personal purchases on the company internet). The Invisible Committee, descendants of the Situationists, suggest communes that survive on a mixture of Welfare fraud, self-sufficiency, and part-time jobs. Even they admit that won't last long, but they don't suggest the next step. And it amounts to saying "find like-minded people", which is the last resort of the desperate.

These are petty acts, literally petite: 'small, insignificant'. The difference won't show up in the third decimal place of the annual accounts of Groupe Casino (owners of Monoprix and others) or Amazon. That pettiness is the reason I just don't get the Situationists and their descendants. Haven't people been doing this since the first Egyptian hid round the back of a pile of pyramid bricks?

Situationism and its descendants, Invisible or not, seem to have been taken up by people who don't find their current life entirely satisfying, but don't find it dissatisfying enough to do anything about it. They do not want to engage in, say, Trade Union activism to improve their working conditions. Many of them have jobs that pay reasonably well but are mere bureaucratic roles (university lecturer, for instance), and they want to believe they are not just drones. They engage in la perruque, pay tradesmen in cash instead of by card, and insult everyone else's job (by calling it BS). This proves to them that they are resisting. For what that's worth.

Probably not the supporters Debord and the others were looking for, but in the end, a theory is judged by how it really-exists, by the company it keeps.

Thursday, 12 November 2020

John Rawls and Really Existing Distributive Justice

Recently, someone called Zeke Emanuel, who is a "Coronavirus Advisor" to the man who might be President of the USA, said that the Pfizer vaccine should be handed out to poor countries first. It is a problem of distributive justice, he said.

If you ever thought that philosophers were all harmless scribblers, then think again. One of them turned out not to be, and it wasn't Nietzsche. It was a boring political philosopher at Harvard called John Rawls.

Ever wondered where all those Social Justice Warriors and their ideology came from? The money may come from all sorts of sources that scuttle away at the approach of investigative sunlight, but the idea comes from John Rawls.

In 1971 he published A Theory of Justice. I was a philosophy student at the time, and I bought a copy. I started to read it, and soon ran out of energy wading upstream against the awful syntax and the endless digressions and discussions of counter-arguments I wasn't even interested in. Even without getting too far in, I had the feeling that Rawls was pulling a fast one. In fact I was sure of it.

Justice is the application and enforcement of the laws. It can be done well or badly. Amongst the ways it can be done well is that it is 'blind': it treats everyone the same.

That has now become controversial: mere 'blindness' to the individual is not enough. Now we have to take into account their exact degree of victim status. Race blindness is racism. Gender blindness is sexism. Anything that does not allow the victims compensatory privilege is oppression.

For all that, you can thank John Rawls.

In his 1971 book, Rawls was pushing a particular conception of justice - he called it 'justice as fairness'. Rawls' idea of fairness was that a society is fair if it is arranged in such a way that the least-advantaged are better off than they would be under any other arrangement. Which is not what you and I mean at all. Justice for Rawls is not something procedural about the law, but about the distribution of the resources of an economy and society.

Rawls claimed that this was the conclusion we would reach if we were making the rules of justice from scratch, but without knowing what position we held in society, whether we were rich or poor, or even whether we had marketable skills. If we treat this as a test - would you approve of that law if you were poor? - it has a use, but as the moral equivalent of Cartesian doubt, it just won't work. And he never explained why rules made by a bunch of people with serious psychoses (they do not even know if they are able-bodied, intelligent, have social skills, friends, children, jobs; they know how society and the economy work but not how they got that knowledge; the list of impossibilities goes on a while) should be superior to those made by people who know who they are, and also know they are lucky to be so fortunate.

The idea of distributive justice (aka 'from each according to their abilities, to each according to their needs') sounds attractive. But the flaw is built right into the idea. For people to be 'disadvantaged', there has to be a norm, which is also the norm for being 'advantaged'. If the reason for the disadvantage cannot be overcome with hard work, social skills, education or a trade, if it is held to be structural or innate, then it is insurmountable, and that justifies a massive State bureaucracy dispensing welfare and administering hiring quotas, positive discrimination, and unrestricted immigration (because distributive justice knows no national boundaries).

If that sounds like America today, that's because America for the last forty years has been the world's experiment in really existing distributive justice. Just as Russia was for really existing socialism. An idea that can be hijacked so easily by apparatchiks and political grifters is a bad idea.

And what does one say to someone who takes the taxpayers' money for his salary, and then tells those taxpayers they have to wait in line for a vaccine so they can pay for the rest of the world to get it first?

Voilà, monsieur: Madame la Guillotine, perhaps?

Thursday, 7 May 2020

The Raven Paradox

I cannot believe that anyone is still discussing this, but Sabine Hossenfelder did recently, as did UpAndAtom in mid-2019. Both present it as a serious issue for the idea of evidence and hence the scientific method.

The paradox is due to Carl Hempel, one of the many philosophers who circled round Rudolf Carnap and the Vienna Circle. They loved them some logic, and this puzzle really is pure logic.

Consider the hypothesis "All ravens are black". Evidence for this would be a black raven. A counter-example would be a white raven. So far so obvious. But "All ravens are black" is logically equivalent to "All non-black things are non-ravens". Evidence for that would be a white tennis shoe or a red tomato. So on the principle that two logically equivalent statements should have the same evidence base, white tennis shoes are evidence for "All ravens are black". Which of course they aren't.
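The equivalence itself is easy to check mechanically. A toy sketch, with a made-up "world" of things and colours, purely for illustration:

```python
# A made-up "world" of (thing, colour) pairs, purely for illustration.
world = [("raven", "black"), ("raven", "black"),
         ("shoe", "white"), ("tomato", "red")]

def all_ravens_black(world):
    # "All ravens are black"
    return all(colour == "black" for thing, colour in world if thing == "raven")

def all_nonblack_are_nonravens(world):
    # "All non-black things are non-ravens"
    return all(thing != "raven" for thing, colour in world if colour != "black")

# The two statements agree on this world, and on any world you care to build.
print(all_ravens_black(world), all_nonblack_are_nonravens(world))
```

Add a white raven and both come out false at once; add any number of white shoes and neither budges. The equivalence is watertight - the question is whether "evidence" must respect it.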

Which is supposed to be a paradox.

Which it is only if we stop to admire it for too long.

It isn't a paradox. It's a sign that our idea of what counts as evidence is nuanced enough to distinguish between statements that are equivalent in the predicate calculus. Nothing says that logical equivalence trumps all other forms of equivalence or lays waste to all other distinctions. Unless you're the kind of person who hung out with the guys at Carnap's Bar and Grill.

The Raven Paradox is a useful edge case: a theory of evidence should not fall foul of it.

Notice that to a falsificationist, there is no problem here. Confirmations don't count, only falsifications. White shoes do not refute the raven hypothesis, and falsificationists do not count refutations the way inductivists count confirmations. One refutation is too many, and a hundred confirmations are too few. (Ahem.) Notice also that the only things that falsify "All non-black things are non-ravens" are ravens of some non-black colour - exactly the things that falsify the original. So the original and its contrapositive have the same counter-examples. Just another logical superiority of falsificationism. But I digress.

Another approach is to notice that white shoes also confirm the claim that "All ravens are green", or indeed any other colour. We might say that if a piece of evidence confirms an hypothesis H(black) and also H(green), H(purple), H(puce) and so on, it is in some sense trivial with respect to that set of hypotheses. It's not what we are really looking for, which is that every time we see a raven, it is reassuringly black. This is an attempt to capture the necessary quality of relevance that evidence must have. It is not perfect, but it's a start. I'll leave the lads at Carnap's Bar and Grill to debate the details.

Instead of trying to resolve the paradox, we should ask: how did we get here? What are we assuming that creates the paradox? Is it true? What are the other assumptions we might have in their place? Who says that "equivalent with respect to the predicate calculus" is the relevant equivalence? Why not "equivalent with respect to the legal concept of material relevance"?

Which would send the ravens flying.

Monday, 6 April 2020

The Surprise Hanging Paradox


I read a version of this paradox many years ago, thought it was nonsense, but couldn’t work out why. Recently I read a different version and understood why it was a silly paradox. Here’s the usual formulation:
A judge tells a condemned prisoner that he will be hanged at noon on one weekday in the following week but that the execution will be a surprise to the prisoner. He will not know the day of the hanging until the executioner knocks on his cell door at noon that day.

Having reflected on his sentence, the prisoner draws the conclusion that he will escape from the hanging. His reasoning is in several parts. He begins by concluding that the "surprise hanging" can't be on Friday, as if he hasn't been hanged by Thursday, there is only one day left - and so it won't be a surprise if he's hanged on Friday. Since the judge's sentence stipulated that the hanging would be a surprise to him, he concludes it cannot occur on Friday.

He then reasons that the surprise hanging cannot be on Thursday either, because Friday has already been eliminated and if he hasn't been hanged by Wednesday noon, the hanging must occur on Thursday, making a Thursday hanging not a surprise either. By similar reasoning, he concludes that the hanging can also not occur on Wednesday, Tuesday or Monday. Joyfully he retires to his cell confident that the hanging will not occur at all. The next week, the executioner knocks on the prisoner's door at noon on Wednesday — which, despite all the above, was an utter surprise to him. Everything the judge said came true.

The mistake is to pay any attention to all that pseudo-logic. You’ve been told that one day next week, you’re going to be hanged. And that it will be a surprise.

No it won’t.

You know it has to be one of the five weekdays, and at the moment you hear the sentence, each day has a 20% chance of being the day. As each day passes without a hanging, the chance for each remaining day increases. That’s no basis for surprise. You can only be surprised if you think there is a 0% chance of it being the day.
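
The arithmetic can be sketched in a few lines, assuming five equally likely weekdays:

```python
from fractions import Fraction

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]

# Chance that today is the day, given no hanging so far this week:
# one chance among the days that remain.
chances = {day: Fraction(1, len(days) - i) for i, day in enumerate(days)}

for day, p in chances.items():
    print(day, p)  # rises from 1/5 on Monday to 1/1 on Friday
```

The probability never hits zero on any remaining day, so "surprise", in the judge's sense, never gets off the ground.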

On the day itself, your proper, downright cool reaction should be Wednesday, huh? Well, it had to be some day.

But what about all that nonsense-logic? The judge’s ruling is contradictory. Your hanging can’t be a surprise if you know it’s what awaits you. Real logic tells us that you can prove whatever you like from a contradictory statement. No wonder you can twist a bunch of noodle-logic into proving that you’re not going to be hanged. The reason you can’t find anything wrong with the argument is that there isn’t anything wrong with the argument. The flaw is in the premises, and the argument distracts you from that.

Or you could say, it’s what happens when you treat a probabilistic concept like surprise as if it is a two-valued one. You can be a little surprised.

Monday, 9 December 2019

On Death

Some philosophers are obsessed by death, seeing it as some kind of defining event in the human condition, but more than that, as a kind of swindle. Death steals life from us. Just when we got it all figured out and are no longer driven by tyrannical hormonal urges (either ours or the children’s) - bosh! The Grim Reaper comes along and spoils our fun.

Or something like that.

The death of healthy young people is theft, a moral flaw in the Universe. They really have had their lives stolen from them. Old gits like me, not so much. I’ve had my life, made what little of it I could, and my time has passed.

Suffering is another thing. I regard death, mostly, as a release from suffering, and especially the suffering of injury, disease and old age. A young person who lives in pain and has to spend an hour a day on some machine is being released by death, not cheated.

Death was a release for my friend Terence last year. My friend Chris died in his early sixties from the after-effects of prostate cancer; the first operation gave him almost ten years of a second chance, and a happy family life in those years. Another man I knew, Richard, fell over in the bath after a seizure. He was in his mid-forties. Outwardly his life looked just fine, but his emotional life was something out of a 1950’s black and white English movie, the ones with the domineering mother. Richard’s death was unfair: he still had time to change. My father died peacefully in his sleep after a post-operative blood clot hit his heart.

It’s not death that’s scary. Either nothing happens, or you go to heaven, or you come back as a donkey, depending on your religious belief. Our death, as Wittgenstein remarked, is not an event in our lives. It’s an event in other people’s lives. In our lives we are immortal: we are only mortal in the lives of others.

It’s dying that’s scary. The pain from the fatal injury or the terminal disease. The fast fading of our health and powers. The sense that we are becoming irrelevant, and maybe even a burden, we who only a few years ago carried the burdens of others. I’m sure there are pathological states (see those 1950’s English movies) best left unexamined.

Death is, ultimately, a release from dying. Our dying does happen in our lives, we do experience it, and we’d rather not.

Monday, 24 June 2019

Probability, Events, and People

I ran across this remark in a dark corner of the Internet to which I will not leave a link:
What (my critics) do is attempt to apply GENERAL statistics derived from a population of millions to their own individual situation despite the fact that such statistics are totally meaningless when applied to ONE SPECIFIC individual.
So close, and yet so far.

Probabilities apply to events, and it is individual events to which we cannot apply probabilities. A horse-race is a one-time event in terms of course, condition of the ground, horses and jockeys. It won’t be repeated. Probability needs repetition. The odds a bookie gives at a racetrack are not probabilities, since the bookie needs to make a Dutch book in his favour in order to make a living. But that’s an aside.

An individual person is not an individual event. A person is the site of many thousands of different events, from heartbeats and muscle twitches to things like crossing roads, dealing with customers, and eating food.

The statistics on food poisoning apply to a person because they eat many meals, each one of which is an event that might involve eating something that disagrees with them.
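
That is why a tiny per-event risk still matters over a year of events. A minimal sketch - the per-meal figure is invented for illustration:

```python
# Invented per-meal risk, purely illustrative.
p_per_meal = 1e-4            # chance that any one meal disagrees with you
meals_per_year = 3 * 365     # three meals a day

# Chance of at least one bad meal in a year:
# the complement of "every single meal was fine".
p_year = 1 - (1 - p_per_meal) ** meals_per_year
print(f"{p_year:.1%}")       # roughly a 10% chance over the year
```

A one-in-ten-thousand event per meal becomes a roughly one-in-ten event per person per year, because the person is the site of over a thousand such events.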

The statistics on divorce apply to an individual man because he has many opportunities to displease or disappoint his wife.

The twist is that most people will look at socio-economic variables to judge their probability of divorce, missing the point that nobody gets divorced because they are a prospering accountant, or a struggling session musician. They get divorced because they do something that upsets their partner, and nobody ever records or measures those things. I suspect the correlation between social class, occupation and other such macro-variables, and actual divorce-causing behaviours, is actually fairly weak.

Statistics is difficult. The mathematics is surely horrible, but the conceptual difficulty is right at the start, in understanding how to model something, and how to apply the ideas.