Everything arises bottom-up

It occurs to me that, as we’ve come to understand things better, often a “top down” conception of how something arises has been replaced by a “bottom up” account.

An obvious example is political authority.  The Medieval concept of a God-appointed ruler issuing commands by divine right has been replaced by agreement that legitimate political authority arises bottom-upwards, from the consent of “we the people”.

Similarly, human rights are sometimes supposed to be absolute principles with which people are “endowed by their Creator”.  But in reality they are collective agreements, deriving from human advocacy about how we want people to be treated, and thus resting only on their widespread acceptance.  Does that make them more insecure, more alienable?  Maybe (and perhaps that’s why some attempt to treat them as absolute and objective), but that’s all there is to it.

It’s the same with the wider concept of morality. Many have sought to anchor morality in the solid foundation of either a divinity or objective reason.  But neither works: morality derives from human nature and human values. It bubbles up from each of us, leading to wider societal norms and expectations, rather than being imposed on us from outside. Some see that as producing only a second-rate morality, but wanting there to be an objective morality to which a supra-human authority will hold us doesn’t make such a scheme tenable. 

Likewise, principles of fairness and justice can only be rooted in human evaluations of what is fair or just.  There isn’t anything else, no objective scale against which we can read off a quantification of “justness” or “fairness”, any more than there is for moral “oughtness”. What we call “natural justice” is justice rooted in our human feelings of what is fair.  Beyond human society, nature is literally incapable of knowing or caring about concepts of “fairness”, “justice” or “morality”. These are human concepts arising from ourselves. 

And then there are concepts of meaning and purpose. Some argue that, without a God, there can be no meaning or purpose to life.  They tell us that, unless there is an afterlife, our lives are ultimately pointless. But the only forms of meaning and purpose that exist are the purposes that we create for ourselves and the meanings that we find in our lives. As thinking, feeling, sentient creatures, we create purposes and we find things meaningful.  That they are local and time-limited doesn’t make them less real.

But then sentience and consciousness also bubble up from below, forming out of patterns of non-sentient matter. These local and temporary patterns of material stuff arise as a product of evolution, which creates such patterns (“brains”) to do the job of facilitating survival and reproduction.

It’s the same with intelligence. The top-down conception that the universe starts with intelligence, which dribbles down from there, is wrong.  Rather, intelligence bubbles up from non-intelligent precursors. Over evolutionary time, successive generations of animals developed greater capabilities to sense their environment, to process the information, and then compute a response.

Of course life itself is the same, arising out of non-life.  We’ve long ditched the dualistic notion of an élan vital giving spark to inanimate matter.  Simple molecules can replicate because atoms of the same type act like each other, and so, in simple circumstances, simple collections of matter behave similarly. And it complicates from there, as simple structures aggregate into complex ones. And when replicators get sufficiently complicated we call them “life”.

The above traces a path from the social sciences down into biology, and from there into biochemistry and simple chemistry.  But maybe the same bottom-up approach also applies to physics.

Richard Feynman starts his Lectures by saying:

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis, that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another.

And everything builds from there. 

But even atoms are built of particles, and as for what “particles” are, well, we are still pretty unclear on what the ultimate ontology is.

How about causation? It’s a fundamental concept on the macroscopic scale that things happen at time t+1 because of how things were at time t. But even that may be an emergent property, since causation gets less clear at the microscopic scale. Quantum indeterminacy holds that things occur for no discernible reason. A virtual particle pair can just arise, with no proximate cause.

Maybe the concept of time is similar. Special Relativity has long destroyed the idea that there is a time that is absolute and the same for everyone.  Maybe time bubbles up and emerges so that we can only talk sensibly about “time” at a macroscopic level.  Such speculations are beyond established physics, but are being advocated by Carlo Rovelli and others.

And lastly there is space.  Again, the conception of space as an inert, static backdrop in which everything else plays out has long been overturned. Relativity tells us that space is distorted and warped by matter, such that it can no longer be thought of as separate from the matter it interacts with.  Speculative theories suggest that space itself may be created at the local, particle level from the quantum entanglement of adjacent particles.

All of which leaves me wondering whether there is anything left for which a top-down conception is still tenable.  And, further, does the bottom-up nature of physics at the particle level necessitate that all higher-level properties are emergent bottom-up creations? 

Edinburgh University should value academic enquiry above ideology

Jonathan Haidt declared that universities could either be about seeking truth or about seeking social justice, but not both. Nowadays, with academics in the humanities and social sciences skewing heavily left, many have adopted Marx’s dictum: “Philosophers have hitherto only interpreted the world in various ways; the point is to change it”.

And so it is that universities are increasingly declaring contentious and ideological notions as orthodoxy, and then demanding assent. This will only get worse unless people speak up against it, so, as a university academic, here goes:

Like many universities, the University of Edinburgh now has a unit dedicated to promoting “Equality, Diversity and Inclusion”. And who could be against any of those? Note, though, that while declaring that it “promotes a culture of inclusivity”, Edinburgh’s pages make no mention of diversity of ideas or about including those who challenge orthodoxy. And yet surely both of those are necessary in a truth-seeking university?

Edinburgh then seeks to instruct us in a set of catechisms under the heading “Educate yourself”. And note that these are not just the webpages of an advocacy group composed of students at the university; these are official Edinburgh University webpages carrying its imprimatur.

In bold type we are told that: “We need to understand that transphobia is every bit as unacceptable as racism and homophobia”. [Update: since this post was written the page has been taken down.]

So what is “transphobia”? Well: “Transphobia is the hatred, fear, disbelief, or mistrust of trans and gender non-conforming people.” Hatred or fear of trans people? Yes, that is transphobia, and yes that should be deplored. Everyone, on all sides of such debates, agrees that trans people should be treated with respect and enabled to live their lives in dignity and safety.

But “disbelief”? That suggests that you’re not allowed to disagree with claims made by trans people. So, if a trans activist asserts that “trans women are women”, and that they are “just as much women as any other woman”, then you are not allowed to reply: “Well actually, I consider trans women to be biological males who (in line with their genuine, innate nature) wish to live a female gender role”.

Asserting such things could well get you banned from Twitter, but surely they should be allowed in a truth-seeking university? After all, biological sex is real and important. When they transition, a trans person does not change sex, they change gender roles. And the fact of their biological sex can still matter in some areas of life (such as sport, where men are generally better than women so that the performance of elite women can roughly equate to that of the best 14- or 15-yr-old boys). And it really would be bad if a university had so lost sight of its truth-seeking role that it did not allow its members to say such things.

A university, as an employer, can reasonably request that — in work situations — members refer to each other using preferred pronouns. But it is quite wrong to demand adherence to anti-scientific ideology.

The orthodoxy-prescribing web pages continue:

In recent years, however, there has been a resurgence of transphobia in the mainstream and social media, which has fuelled increased transphobic hate incidents in society.

Notice how they give no citations or links to support this claim, as might be expected of university-imprimatur pages collected under the rubric “educate yourself”.

This has largely been linked to proposals to reform the 2004 Gender Recognition Act …

Again, this is assertion backed by no evidence. Have “transphobic hate incidents” really been linked to suggested changes to that Act?

Many people refer to these changes as giving trans people the right to ‘self-ID’, but it is, more correctly, a legally-binding ‘self-declaration’.

OK, though the distinction between “self-identification” and “self-declaration” is not clarified, and note that under self-ID the declaration is only “legally binding” until the next self-declaration.

This increased transphobia has been particularly severe for trans women, …

Again, no statistics, no evidence. All of this would be fine as the advocacy pages of a pressure group, but is such ideological contention really appropriate as the official voice of Edinburgh University?

… who have been the target of high-profile, celebrity campaigns that deny the trans experience …

The mention of “celebrity” is presumably a reference to J. K. Rowling, who Tweeted in support of Maya Forstater, who was sacked for maintaining that biological sex is real. (And by the way, have Twitter really capped the number of “likes” on that Tweet at about 220,000? It seems so, and many Twitter users have reported that their “likes” mysteriously undo themselves.)

And no, no-one “denies the trans experience” in the sense of denying that some people do, as a very real part of their nature, identify with the gender role opposite to their biological sex, and may even feel themselves to be of the opposite sex. All that is being “denied” is that that actually makes them the other sex, and that a trans woman actually is “just as much a woman as any other woman”.

… and deliberately suggest trans women pose a threat to cis women by distorting statistics of male violence to imply it is a characteristic of trans women.

Note the accusation of “deliberate distortion” of statistics, an accusation not in any way substantiated. This webpage is a propaganda piece, not something that a university should put its name to.

And given the mention of statistics one might expect some citations and links to what the statistics actually are. After all, some self-ID’d trans women do pose a threat of sexual assault to women. And as far as I can tell (though I am open to correction on this) the statistics do seem to suggest that the propensity to sexual assault among trans women (that is, biological males) is more in line with that of men generally than that of women. And the rates of sexual assault by men are, of course, twenty times higher than those by women, so that matters.

For clarity, that is not suggesting that trans women are more likely to commit assault than men, but that their propensity is roughly in line with biological males generally, and thus much higher than that of women.

(And again, if Edinburgh have good-quality evidence that this is not the case then it would be helpful if their “educate yourself” pages linked to it. Because, you see, “educating” oneself is about familiarising oneself with actual evidence, not about uncritically imbibing ideology.)

A particular strand of feminism (gender-critical) voices concerns that recognising the rights of trans women will negatively impact the ‘sex-based’ rights of cis women …

It’s rather notable that they put “sex-based” in quotes. This is in line with radical “queer theory” that says that biological sex is not actually real, and that what is real is one’s inner “gender identity”. That is a highly contentious claim that flies in the face of science. Note also the loaded phrasing “recognising the rights of”, as though the rights being claimed were already established and agreed.

And it would seem that those gender-critical feminists have a fair point, don’t they? A doctrine that all that matters is a self-report of one’s “gender identity” would indeed “impact the sex-based rights of women”, in those situations (such as women’s sports and women’s prisons) where women have a legitimate and reasonable expectation of sex-based segregation.

… and that predatory men will exploit the proposed right to self-declaration to access women-only spaces or to gain advantage in sports and the workplace. This effectively makes trans women the focus of blame for the actions of predatory men.

Well no, it makes the rules, the self-ID doctrine (not “trans women”) the focus of blame for the actions of predatory men. And again, it’s a fair point, isn’t it? Under self-ID, what is there to stop a predatory man adopting a trans persona in order to obtain ready access to women’s spaces? For that matter, what is there to stop a narcissistic man of mediocre sporting ability adopting the persona of a trans woman in order to play against rather easier competition, and so indulge his self-image as a winner?

Gender-critical feminists have also criticised trans women for perpetuating stereotypes of femininity, another example of harmful gate-keeping of another person’s gender presentation. […] none of us should be commenting on other people’s dress choices and external features or assuming gender identity on that basis …

But, once again, the gender-critical feminists have raised a fair point. If we are to deplore feminine stereotypes, so that we don’t associate “female gender” with styles of dress or mannerisms or appearance, and certainly not with activities or job roles, then what is “gender” actually about?

The whole point of trans ideology, of course, is that one’s gender is not associated with biological sex or reproductive anatomy — and nor is it associated with any feminine stereotypes (perish the thought!) — so what is left? The trans activists can only answer that it is associated with an inner “gender identity” that we just know or “experience”.

But this makes no sense: if one does not anchor the terms “man” and “woman” in objectively real biological sex, then how can one even define the terms “man” and “woman”? The trans activists try the feeble: “a woman is anyone who experiences themselves as a woman”, but that just gives an endless recursion and answers nothing.

Again, this whole web page would be fair enough if it were the advocacy claims of a pressure group, but the role of a university and its academics would then be to carefully and dispassionately scrutinise each claim for truth and consistency — and yet here the claims are being presented as orthodoxy to which university members are expected to assent.

While concerns for women’s safety are valid, there is no evidence that trans women pose any more danger than other women.

Well, some studies claim such evidence — though I recognise that this link is to an advocacy group, but if Edinburgh have better evidence then perhaps they could present it? Isn’t that how proper discussion proceeds?

This type of ‘reasonable concern’ is used frequently by trans-hostile groups, such as ultra-right wing campaigners and certain feminists.

One notes the quotation marks around “reasonable concern”, implying that this is just a cover story. One notes the smear tactic of grouping feminists with the “ultra-right wing”. And yet, I’m willing to bet that the majority of moderate, centrist-minded people would accept such concerns as reasonable.

Another ‘reasonable concern’ is alarm at the increase in gender identity services for children, despite evidence that early support for individuals reduces psychological problems and suicide in later life.

And again, there are no citations or links to support that claim. And the evidence dragged reluctantly out of the Tavistock clinic owing to the Keira Bell case is that puberty blockers make no overall difference to psychological well-being. This is not settled science, with the only studies having small numbers of patients and lacking controls, so it should not be presented as though assent is mandatory.

There is considerable misinformation about what happens in gender identity clinics, deliberately circulated to create fear and moral panic.

Come on, there is no way that a university should have this sort of stuff on its official webpages. The UK high court has recently ruled that concerns about gender-identity clinics are well-justified and has halted puberty-blocking drug treatment in under-16s, calling it “experimental”.

Having said that, it is reasonable, in the face of so much misinformation and hostility, for people to have concerns and to seek information and reassurance. This is different to the use of ‘reasonable concerns’ by transphobic campaigners where accurate information is rejected or distorted in a similar way to the strategy of Islamophobes and anti-Semites.

Oh come on! This reads like an undergraduate activist having a tantrum. Sure, let the undergrads spout activist speak, but the official University of Edinburgh pages should be written by grown-ups, especially if they are supposed to be statements of university policy.

Does quantum indeterminism defeat reductionism?

After writing a piece on the role of metaphysics in science, which was a reply to neuroscientist Kevin Mitchell, he pointed me to several of his articles including one on reductionism and determinism. I found this interesting since I hadn’t really thought about the interplay of the two concepts. Mitchell argues that if the world is intrinsically indeterministic (which I think it is), then that defeats reductionism. We likely agree on much of the science, and how the world is, but nevertheless I largely disagree with his article.

Let’s start by clarifying the concepts. Reductionism asserts that, if we knew everything about the low-level status of a system (that is, everything about the component atoms and molecules and their locations), then we would have enough information to — in principle — completely reproduce the system, such that a reproduction would exhibit the same high-level behaviour as the original system. Thus, suppose we had a Star-Trek-style transporter device that knew only about (but everything about) low-level atoms and molecules and their positions. We could use it to duplicate a leopard, and the duplicated leopard would manifest the same high-level behaviour (“stalking an antelope”) as the original, even though the transporter device knows nothing about high-level concepts such as “stalking” or “antelope”.

As an aside, philosophers might label the concept I’ve just defined as “supervenience”, and might regard “reductionism” as a stronger thesis about translations between the high-level concepts such as “stalking” and the language of physics at the atomic level. But that type of reductionism generally doesn’t work, whereas reductionism as I’ve just defined it does seem to be how the world is, and much of science proceeds by assuming that it holds. While this version of reductionism does not imply that explanations at different levels can be translated into each other, it does imply that explanations at different levels need to be mutually consistent, and ensuring that is one of the most powerful tools of science.

Our second concept, determinism, then asserts that if we knew the entire and exact low-level description of a system at time t  then we could — in principle — compute the exact state of the system at time t + 1. I don’t think the world is fully deterministic. I think that quantum mechanics tells us that there is indeterminism at the microscopic level. Thus, while we can compute, from the prior state, the probability of an atom decaying in a given time interval, we cannot (even in principle) compute the actual time of the decay. Some leading physicists disagree, and advocate for interpretations in which quantum mechanics is deterministic, so the issue is still an open question, but I suggest that indeterminism is the current majority opinion among physicists and I’ll assume it here.
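That distinction — being able to compute the probability of a decay but not the actual decay time — can be put in a toy sketch (the half-life value and function names here are purely illustrative inventions of mine, not real physics code):

```python
import math
import random

HALF_LIFE = 10.0                   # arbitrary units, illustrative only
LAM = math.log(2) / HALF_LIFE      # the corresponding decay constant

def p_decay_within(dt):
    # Computable, in principle, from the prior state: the probability
    # that the atom decays within a time interval dt.
    return 1 - math.exp(-LAM * dt)

def sample_decay_time(rng):
    # Not computable from the prior state: an actual decay time can
    # only be drawn at random from the exponential distribution.
    return rng.expovariate(LAM)

rng = random.Random(1)
print(p_decay_within(HALF_LIFE))   # 0.5 — that is what "half-life" means
print(sample_decay_time(rng))      # one possible outcome among many
```

The first function is what an indeterministic physics still lets us do; the second is a reminder that the specific outcome is supplied by a dice throw, not a calculation.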

This raises the question of whether indeterminism at the microscopic level propagates to indeterminism at the macroscopic level of the behaviour of leopards. The answer is, likely, yes, to some extent. A thought experiment of coupling a microscopic trigger to a macroscopic device (such as the decay of an atom triggering a device that kills Schrodinger’s cat) shows that this is in-principle possible. On the other hand, using thermodynamics to compute the behaviour of steam engines (and totally ignoring quantum indeterminism) works just fine, because in such scenarios one is averaging over an Avogadro’s number of particles and, given that Avogadro’s number is very large, that averages over all the quantum indeterminacy.
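The way that averaging suppresses the indeterminacy can be shown with a minimal simulation (purely illustrative; the particle counts are nothing like Avogadro’s number, but the trend is the point — relative fluctuations shrink roughly as one over the square root of the number of particles):

```python
import random

def decayed_fraction(n_atoms, p=0.5, seed=0):
    # Each "atom" independently decays with probability p;
    # return the fraction of the sample that decayed.
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n_atoms)) / n_atoms

for n in (100, 10_000, 1_000_000):
    dev = abs(decayed_fraction(n) - 0.5)
    print(f"{n:>9} atoms: deviation from the expected fraction = {dev:.4f}")
```

Each individual decay is random, but the fraction decayed becomes ever more predictable as the sample grows — which is why thermodynamics can ignore the underlying dice-throwing.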

What about leopards? The leopard’s behaviour is of course the product of the state of its brain, acting on sensory information. Likely, quantum indeterminism is playing little or no role in the minute-by-minute responses of the leopard. That’s because, in order for the leopard to have evolved, its behaviour, its “leopardness”, must have been sufficiently under the control of genes, and genes influence brain structures on the developmental timescale of years. On the other hand, leopards are all individuals. While variation in leopard brains derives partially from differences in that individual’s genes, Kevin Mitchell tells us in his book Innate that development is a process involving much chance variation. Thus quantum indeterminacy at a biochemical level might be propagating into differences in how a mammal brain develops, and thence into the behaviour of individual leopards.

That’s all by way of introduction. So far I’ve just defined and expounded on the concepts “reductionism” and “determinism” (but it’s well worth doing that, since discussion of these topics is bedevilled by people interpreting words differently). So let’s proceed to why I disagree with Mitchell’s account.

He writes:

For the reductionist, reality is flat. It may seem to comprise things in some kind of hierarchy of levels – atoms, molecules, cells, organs, organisms, populations, societies, economies, nations, worlds – but actually everything that happens at all those levels really derives from the interactions at the bottom. If you could calculate the outcome of all the low-level interactions in any system, you could predict its behaviour perfectly and there would be nothing left to explain.

There is never only one explanation of anything. We can always give multiple different explanations of a phenomenon — certainly for anything at the macroscopic level — and lots of different explanations can be true at the same time, so long as they are all mutually consistent. Thus one explanation of a leopard’s stalking behaviour will be in terms of the firings of neurons and electrical signals sent to muscles. An equally true explanation would be that the leopard is hungry.

Reductionism does indeed say that you could (in principle) reproduce the behaviour from a molecular-level calculation, and that would be one explanation. But there would also be other equally true explanations. Nothing in reductionism says that the other explanations don’t exist or are invalid or unimportant. We look for explanations because they are useful in that they enable us to understand a system, and as a practical matter the explanation that the leopard is hungry could well be the most useful. The molecular-level explanation of “stalking” is actually pretty useless, first because it can’t be done in practice, and second because it would be so voluminous and unwieldy that no-one could assimilate or understand it.

As a comparison, chess-playing AI bots are now vastly better than the best humans and can make moves that grandmasters struggle to understand. But no amount of listing of low-level computer code would “explain” why sacrificing a rook for a pawn was strategically sound — even given that, you’d still have all the explanation and understanding left to achieve.

So reductionism does not do away with high-level analysis. But — crucially — it does insist that high-level explanations need to be consistent with and compatible with explanations at one level lower, and that is why the concept is central to science.

Mitchell continues:

In a deterministic system, whatever its current organisation (or “initial conditions” at time t) you solve Newton’s equations or the Schrodinger equation or compute the wave function or whatever physicists do (which is in fact what the system is doing) and that gives the next state of the system. There’s no why involved. It doesn’t matter what any of the states mean or why they are that way – in fact, there can never be a why because the functionality of the system’s behaviour can never have any influence on anything.

I don’t see why that follows. Again, understanding, explanations and “why?” questions can apply just as much to a fully reductionist and deterministic system. Let’s suppose that our chess-playing AI bot is fully reductionist and deterministic. Indeed they generally are, since we build computers and other devices sufficiently macroscopically that they average over quantum indeterminacy. That’s because determinism serves the purpose: we want the machine to make moves based on an evaluation of the position and the rules of chess, not to make random moves based on quantum dice-throwing.

But, in reply to “why did the (deterministic) machine sacrifice a rook for a pawn” we can still answer “in order to clear space to enable the queen to invade”. Yes, you can also give other explanations, in terms of low-level machine code and a long string of 011001100 computer bits, if you really want to, but nothing has invalidated the high-level answer. The high-level analysis, the why? question, and the explanation in terms of clearing space for the queen, all still make entire sense.

I would go even further and say you can never get a system that does things under strict determinism. (Things would happen in it or to it or near it, but you wouldn’t identify the system itself as the cause of any of those things).

Mitchell’s thesis is that you only have “causes” or an entity “doing” something if there is indeterminism involved. I don’t see why that makes any difference. Suppose we built our chess-playing machine to be sensitive to quantum indeterminacy, so that there was added randomness in its moves. The answer to “why did it sacrifice a rook for a pawn?” could then be “because of a chance quantum fluctuation”. Which would be a good answer, but Mitchell is suggesting that only un-caused causes actually qualify as “causes”. I don’t see why this is so. The deterministic AI bot is still the “cause” of the move it computes, even if it itself is entirely the product of prior causation, and back along a deterministic chain. As with explanations, there is generally more than one “cause”.

Nothing about either determinism or reductionism has invalidated the statements that the chess-playing device “chose” (computed) a move, causing that move to be played, and that the reason for sacrificing the rook was to create space for the queen. All of this holds in a deterministic world.

Mitchell pushes further the argument that indeterminism negates reductionism:

For that averaging out to happen [so that indeterminism is averaged over] it means that the low-level details of every particle in a system are not all-important – what is important is the average of all their states. That describes an inherently statistical mechanism. It is, of course, the basis of the laws of thermodynamics and explains the statistical basis of macroscopic properties, like temperature. But its use here implies something deeper. It’s not just a convenient mechanism that we can use – it implies that that’s what the system is doing, from one level to the next. Once you admit that, you’ve left Flatland. You’re allowing, first, that levels of reality exist.

I agree entirely, though I don’t see that as a refutation of reductionism. At least, it doesn’t refute forms of reductionism that anyone holds or defends. Reductionism is a thesis about how levels of reality mesh together, not an assertion that all science, all explanations, should be about the lowest levels of description, and only about the lowest levels.

Indeterminism does mean that we could not fully compute the exact future high-level state of a system from the prior, low-level state. But then, under indeterminism, we also could not always predict the exact future high-level state from the prior high-level state. So, “reductionism” would not be breaking down: it would still be the case that a low-level explanation has to mesh fully and consistently with a high-level explanation. If indeterminacy were causing the high-level behaviour to diverge, it would have to feature in both the low-level and high-level explanations.

Mitchell then makes a stronger claim:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate. And a different set of microstates that corresponds to a different macrostate. If the evolution of the system depends on those coarse-grained macrostates (rather than on the precise details at the lower level), then this raises something truly interesting – the idea that information can have causal power in a hierarchical system …

But there cannot be a difference in the macrostate without a difference in the microstate. Thus there cannot be indeterminism that depends on the macrostate but not on the microstate. At least, we have no evidence that that form of indeterminism actually exists. If it did, that would indeed defeat reductionism and would be a radical change to how we think the world works.

It would be a form of indeterminism under which, if we knew everything about the microstate (but not the macrostate) then we would have less ability to predict time t + 1  than if we knew the macrostate (but not the microstate). But how could that be? How could we not know the macrostate? The idea that we could know the exact microstate at time t  but not be able to compute (even in principle) the macrostate at the same time t  (so before any non-deterministic events could have happened) would indeed defeat reductionism, but is surely a radical departure from how we think the world works, and is not supported by any evidence.

But Mitchell does indeed suggest this:

The low level details alone are not sufficient to predict the next state of the system. Because of random events, many next states are possible. What determines the next state (in the types of complex, hierarchical systems we’re interested in) is what macrostate the particular microstate corresponds to. The system does not just evolve from its current state by solving classical or quantum equations over all its constituent particles. It evolves based on whether the current arrangement of those particles corresponds to macrostate A or macrostate B.

But this seems to conflate two ideas:

1) In-principle computing/reproducing the state at time t + 1 from the state at time t (determinism).

2) In-principle computing/reproducing the macrostate at time t from the microstate at time t (reductionism).

Mitchell’s suggestion is that we cannot compute: {microstate at time t } ⇒ {macrostate at time t + 1 }, but can compute: {macrostate at time t } ⇒ {macrostate at time t + 1 }. (The latter follows from: “What determines the next state … is [the] macrostate …”.)

And that can (surely?) only be the case if one cannot compute: {microstate at time t } ⇒ {macrostate at time t }, and if we are denying that then we’re denying reductionism as an input to the argument, not as a consequence of indeterminism.
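The distinction between the two claims can be made concrete with a toy model. In the sketch below (invented for illustration, not taken from Mitchell's book), microstates are tuples of spins and the macrostate is simply the number of "up" spins: a coarse-graining under which many microstates share one macrostate, yet the macrostate is always computable from the microstate, which is what claim (2) asserts.

```python
# Toy illustration of coarse-graining: many microstates, few macrostates.
# The spin system and the coarse-graining function are invented for illustration.

from itertools import product

def macrostate(micro):
    """Coarse-grain a microstate (a tuple of 0/1 spins) to a macrostate:
    here, simply the total number of 'up' spins."""
    return sum(micro)

# All microstates of a 3-spin system:
microstates = list(product([0, 1], repeat=3))

# Group microstates by the macrostate they correspond to:
groups = {}
for m in microstates:
    groups.setdefault(macrostate(m), []).append(m)

for macro, micros in sorted(groups.items()):
    print(f"macrostate {macro}: {len(micros)} microstate(s) -> {micros}")

# Distinct microstates can share a macrostate ...
assert macrostate((1, 0, 1)) == macrostate((0, 1, 1))
# ... but there cannot be a difference in macrostate without a difference
# in microstate, because macrostate() is a function of the microstate alone.
```

The point of the sketch is just that the macrostate is an ordinary function of the microstate: given the exact microstate at time t, the macrostate at time t follows with no further input.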

Mitchell draws the conclusion:

In complex, dynamical systems that are far from equilibrium, some small differences due to random fluctuations may thus indeed percolate up to the macroscopic level, creating multiple trajectories along which the system could evolve. […]

I agree, but consider that to be a consequence of indeterminism, not a rejection of reductionism.

This brings into existence something necessary (but not by itself sufficient) for things like agency and free will: possibilities.

As someone who takes a compatibilist account of “agency” and “free will” I am likely to disagree with attempts to rescue “stronger” versions of those concepts. But that is perhaps a topic for a later post.

2020: A tale of two deaths

Amidst the high numbers of Covid-related deaths in 2020, two non-Covid deaths have stood out:

One incident started when a black, 46-yr-old male attempted to pass a fake 20-dollar bill (he had previous convictions for crimes including armed robbery). The shopkeeper judged that the man was high on drugs and, concerned that he was unfit to drive, called the police. The man, George Floyd, resisted arrest, wrestling with the officers and refusing to get in the police car while repeatedly saying “I can’t breathe”. At that point he was not being restrained, so whether the words related to the effect of drugs, or were part of a charade to evade arrest, is unclear. Eventually he fell to the ground and was kept there by a policeman placing his knee on Floyd’s neck, while he continued to repeat “I can’t breathe”. He suffered a heart attack and died. Continue reading

Science does not rest on metaphysical assumptions

It’s a commonly made claim: science depends on making metaphysical assumptions. Here the claim is being made by Kevin Mitchell, a neuroscientist and author of the book Innate: How the Wiring of Our Brains Shapes Who We Are (which I recommend).

His Twitter thread was in response to an article by Richard Dawkins in The Spectator:

Dawkins’s writing style does seem to divide opinion, though personally I liked the piece and consider Dawkins to be more astute on the nature of science than he is given credit for. Mitchell’s central criticism is that Dawkins fails to recognise that science must rest on metaphysics: Continue reading

Eddington did indeed validate Einstein at the 1919 eclipse

You’re likely aware of the story. Having developed General Relativity, a theory of gravity that improved on Newton’s account, Einstein concluded that the fabric of space is warped by the presence of mass and thus that light rays will travel on distorted paths, following the warped space. He then predicted that this could be observed during a solar eclipse, when the apparent position of stars near the sun would be distorted by the mass of the sun. Britain’s leading astronomer, Arthur Eddington, set out to observe the 1919 solar eclipse, and triumphantly confirmed Einstein’s prediction. The story then made the front pages of the newspapers, and Einstein became a household name.

You’re also likely aware of the revisionist account. That the observations acquired by Eddington were ambiguous and inconclusive, and that he picked out the subset of measurements that agreed with Einstein’s prediction. Thus Eddington’s vindication of Einstein was not warranted on the data, but was more a “social construction”, arrived at because Eddington wanted Einstein’s theory to be true. Thus Einstein’s fame resulted, not from having developed a superior theory, but from the approval of the high-status Eddington.

The story is often quoted in support of the thesis that science — far from giving an objective model of reality — is just another form of socially-constructed knowledge, with little claim to be superior to other “ways of knowing”. Even those who may grant that science can attain some degree of objectivity can point to such accounts and conclude that the acceptance of scientific ideas is far more about the social status of their advocates than is commonly acknowledged.

Albert Einstein and Arthur Eddington

A new paper by Gerry Gilmore and Gudrun Tausch-Pebody reports a re-analysis of the data and a re-evaluation of the whole story. Their conclusion, in short, is that Eddington’s analysis was defensible and correct. Where he placed more credence on some observations than others, he was right to do so, and the measurements really did favour Einstein’s value for the deflection of the stars’ positions.

Thus, the later revisionist account by philosophers John Earman and Clark Glymour, taken up in accounts of science such as The Golem by Harry Collins and Trevor Pinch, is unfair to Eddington.

Images of the 1919 solar eclipse. Faint stars are marked.

Gilmore and Tausch-Pebody say in their article:

Earman and Glymour conclude: “Now the eclipse expeditions confirmed the theory only if part of the observations were thrown out and the discrepancies in the remainder ignored; Dyson and Eddington, who presented the results to the scientific world, threw out a good part of the data and ignored the discrepancies. This curious sequence of reasons might be cause enough for despair on the part of those who see in science a model of objectivity and rationality.”

Our re-analysis shows that these strong claims are based entirely on methodological error. Earman and Glymour failed to understand the difference between the dispersion of a set of measurements and an uncertainty, random plus systematic, on the value of the parameter being measured. They speculated but did not calculate, and their conclusions are not supported by evidence.

Their error was left unchallenged and the strong conclusions and accusations they derived from it were used not only to question the scientific method then applied, but also to undermine the scientific integrity and reputation of an eminent scientist.

The crucial observations came from two different telescopes, a 4-inch telescope at Sobral, in Brazil, and an astrograph sent to Principe Island, off West Africa. Einstein’s theory of gravity predicted a deflection (for a star at the sun’s limb) of 1.75 arcsecs, while a calculation based on Newtonian gravity predicted half that value, 0.87 arcsecs.
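Einstein’s figure can be checked directly: the general-relativistic deflection for a light ray grazing the sun’s limb is α = 4GM/(c²R), and the Newtonian calculation gives exactly half that. A quick sketch, using standard values for the constants:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
c = 2.998e8          # speed of light, m/s

# GR deflection for a light ray grazing the solar limb, in radians
alpha_rad = 4 * G * M_sun / (c**2 * R_sun)

# Convert radians to arcseconds
arcsec_per_rad = 180 * 3600 / math.pi
alpha_gr = alpha_rad * arcsec_per_rad
alpha_newton = alpha_gr / 2  # the Newtonian value is exactly half

print(f"GR prediction:        {alpha_gr:.2f} arcsec")      # ~1.75
print(f"Newtonian prediction: {alpha_newton:.2f} arcsec")  # ~0.87
```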

Gilmore and Tausch-Pebody present the table below, listing the measured deflection, and how much it differed from the Einsteinian, Newtonian and zero-deflection models. The z value is the difference, in units of the measurement’s error bar, and P(z) is the probability of obtaining that measurement, were the model correct. The data clearly prefer Einstein’s value for the deflection.
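The statistics involved are straightforward: z is the number of error bars by which a measurement differs from a model’s prediction, and P(z) is the two-tailed Gaussian probability of a deviation at least that large. A minimal sketch (the measurement below is invented for illustration, not taken from the paper’s table):

```python
import math

def z_value(measured, sigma, predicted):
    """How many error bars the measurement lies from a model's prediction."""
    return abs(measured - predicted) / sigma

def tail_prob(z):
    """Two-tailed Gaussian probability of a deviation of at least z sigma."""
    return math.erfc(z / math.sqrt(2))

# Illustrative (invented) measurement: a deflection of 1.90 +/- 0.15 arcsec
measured, sigma = 1.90, 0.15

for name, prediction in [("Einstein", 1.75), ("Newton", 0.87), ("no deflection", 0.0)]:
    z = z_value(measured, sigma, prediction)
    print(f"{name:>13}: z = {z:5.2f}, P(z) = {tail_prob(z):.2g}")
```

With numbers like these, the Einstein value sits about one error bar from the measurement (quite probable), while the Newtonian and zero-deflection values sit many error bars away (vanishingly improbable), which is the sense in which such data prefer one model over another.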

Observations were also made with a third instrument, an astrograph taken to Sobral. However, the resulting images were “diffused and apparently out of focus”, resulting in a systematic error that was large and unquantifiable. Crucially, being unable to evaluate the systematic distortion, the observers could not arrive at a proper uncertainty estimate for these data points, without which they could not be combined with the measurements from the other two telescopes.

Gilmore and Tausch-Pebody conclude:

The original 1919 analysis is statistically robust, with conclusions validly derived, supporting Einstein’s prediction. The rejected third data set is indeed of such low weight that its suppression or inclusion has no effect on the final result for the light deflection, though the very large and poorly quantified systematic errors justify its rejection.

Scientists, being human, are of course fallible and prone to bias. To a large extent they are aware of that, which is why techniques such as double-blind controlled trials are routinely adopted. And in some areas, such as the replication crisis in psychology, scientists have certainly not been careful enough. But, overall, it does seem that science succeeds in overcoming human fallibility, and that the consensus findings arrived at are more robust than critics sometimes allow.

Definitions, loyalty oaths, anti-Semitism and Islamophobia

Gavin Williamson, the UK’s Secretary of State for Education, is demanding that universities sign up to the International Holocaust Remembrance Alliance (IHRA) definition of anti-Semitism:

A “definition” is (quoting Oxford Dictionaries) an “explanation of the meaning of a word or phrase”. And dictionaries give clear and succinct definitions of anti-Semitism: Continue reading

Is peer review biased against women?

In astrophysics, time-allocation committees allocate the time available on major telescopes and satellites. Facilities such as the Hubble Space Telescope are often over-subscribed by factors of 4 to 10, and writing proposals that win time in peer review is crucial to people’s careers.

I’ve recently had my first experience of serving on a “dual-anonymous” time-allocation committee, in which all the proposals are reviewed without knowing the proposers’ names.

The trend to dual-anonymous review started with an analysis of 18 years of proposals to use Hubble. The crucial data showed that, over more than a decade, the success rate for proposals with a woman as principal investigator was lower (at 19%) than for proposals led by men (at 23%). That difference in success rate (about four percentage points, or roughly 20% in relative terms) then disappeared when proposals were reviewed without knowing the proposers’ names. Continue reading

Are predictions an essential part of science?

Theoretical physicist Sabine Hossenfelder recently wrote that “predictions are over-rated” and that one should instead judge the merits of scientific models “by how much data they have been able to describe well, and how many assumptions were needed for this”, finishing with the suggestion that “the world would be a better place if scientists talked less about predictions and more about explanatory power”.

Others disagreed, including philosopher-of-science Massimo Pigliucci, who insists that “it’s the combination of explanatory power and the power of making novel, ideally unexpected, and empirically verifiable predictions” that decides whether a scientific theory is a good one. Neither predictions nor explanatory power, he adds, is sufficient alone, and “both are necessary” for a good scientific theory. Continue reading

Scientism: Part 4: Reductionism

This is the Fourth Part of a review of Science Unlimited? The Challenges of Scientism, edited by Maarten Boudry and Massimo Pigliucci. See also Part 1: Pseudoscience, Part 2: The Humanities, and Part 3: Philosophy.

Reductionism is a big, bad, bogey word, usually uttered by those accusing others of holding naive and simplistic notions. The dominant opinion among philosophers is that reductionism does not work, whereas scientists use reductionist methods all the time and see nothing wrong with doing so.

That paradox is resolved by realising that “reductionism” means very different things to different people. To scientists it is an ontological thesis. It says that if one exactly replicates all the low-level ontology of a complex system, then all of the high-level behaviour would be entailed. Thus there cannot be a difference in high-level behaviour without there being a low-level difference (if someone is thinking “I fancy coffee” instead of “I fancy tea”, then there must be a difference in patterns of electrical signals swirling around their neurons). Continue reading