
Empirically-Informed Approaches to Weakness of Will: A Brains Blog Roundtable


Weakness of will is a traditional puzzle in the philosophy of action. The puzzle goes something like this: 

FOLK PSYCHOLOGICAL THEORY: If, at time t, an agent judges that it is better to do A than B, and she believes she is free to do A, then, provided she tries to do either at that time, she will try to do A and not B.

WEAKNESS OF WILL: An agent judges that it is better to do A than B, believes that she is free to do A, but tries to do B.

But taken together, these statements are inconsistent. FOLK PSYCHOLOGICAL THEORY precludes the possibility of weakness of will (as characterized in WEAKNESS OF WILL), but WEAKNESS OF WILL asserts that it occurs. So is weakness of will possible, and if so, how?

Philosophers since Davidson have approached the puzzle of weakness of will from the perspective of philosophical folk psychology. As in FOLK PSYCHOLOGICAL THEORY, philosophical folk psychology refers to philosophical theories describing human behaviors in terms of mental states such as intentions, beliefs, and so on (Lewis 1972, Stich and Nichols 2003). These theories are broadly realist in nature: they hold that people really experience mental states such as beliefs and desires. They further hold that our everyday descriptions of these mental states are roughly true. These analyses precisify and systematize such descriptions to develop full-fledged theoretical accounts of action.

There are numerous metaphysically- and pragmatically-oriented reasons for continuing to work within the framework of philosophical folk psychology. The metaphysically-oriented reasons are widely defended – as are criticisms of these reasons – and emphasize, among other features, philosophical folk psychology’s unsurpassed predictive power. The pragmatically-oriented reasons include, for example, the fact that no matter how theoretically dissatisfying one finds philosophical folk psychology, or how theoretically dissatisfying one expects to find it in the future, it is reasonable to continue working with the framework while it remains the best and most extensive account of action on offer.  

Still, even if we think of philosophical folk psychology as the truest theory of action currently available, we can and perhaps should remain open to the possibility that it may need to be refined or revised in the future. In fact, we probably should expect to refine or revise our concepts of what it means to ‘believe,’ ‘desire,’ and so on. This is where computational and empirical theories come in. We can use our best available computational and empirical theories to inform and even constrain our philosophical folk psychological theories and, by extension, our philosophical folk psychological theories of puzzles such as that of weakness of will.

This is the approach the authors participating in this roundtable have taken here. There are a variety of views on offer (in alphabetical order): 

I propose to replace the philosophical folk psychological notion of desire with the technical notions of reward and value, drawn from reinforcement learning. I then use these notions to argue that weakness of will is not only possible, but that there are in fact multiple kinds of weakness of will.

Nora Heinzelmann argues that delay discounting theory offers a powerful model for weak-willed behavior, describing and predicting how an agent’s preferences change over time and at what point they will reverse.

Neil Levy defends a judgment shift account of weakness of will modeled on the dysregulation of the mid-brain dopaminergic system in drug addiction.

Agnes Moors defends a dual process model with a parallel-competitive architecture, based on the idea that stimulus-driven and goal-directed processes can both be automatic.

Chandra Sripada endorses a robust faculty of Will, arguing that it is the only theoretical view with the resources to explain the phenomenon of weakness of will. He then argues that, as a matter of empirical fact, we in fact have a robust faculty of Will as a central part of our psychology.

Zina Ward takes a critical approach, noting that a view which depends on the partitioning of the mind “is only as solid as the partitions it relies on.” Ward further raises an important consideration regarding the puzzling and/or irrational nature of weakness of will, asking, “Is it possible for naturalists to preserve the ‘puzzle’ of weakness of will? And should we even try?”

We are excited to discuss these views here at the Brains Blog. You can read each contribution by clicking on the author’s name below. Thanks to everyone for participating!

***

Julia Haas:

Reward, Value, and Weakness of Will

In my overview to the roundtable, I suggested that there are good, pragmatic reasons for both continuing to work within the framework of philosophical folk psychology and for selectively refining and revising it. In my more focused contribution here, I propose that we make just such a selective revision by replacing the philosophical folk psychological notion of desire with the technical notions of reward and value. I argue that once we adopt such a framework, we can show that weakness of will is not only possible, but that there are actually several kinds of weakness of will.

Making the change

Timothy Schroeder (2004) proposes that we explain the ‘essence’ of the philosophical folk psychological theory of desire in terms of reward learning. On this reward-based view, “to have an intrinsic (positive) desire that P is to use the capacity to perceptually or cognitively represent that P to constitute P as a reward,” where the concept of reward is used in the sense of reinforcement learning and associated branches of computational neuroscience (henceforth, the decision sciences) (2004, p.131).

My proposed amendment to philosophical folk psychology holds the same theoretical commitments as Schroeder’s reward-based theory: desire is expressed in terms of the neuroscientific notions of reward and value. But my amendment goes a step further. Rather than continuing to ‘nest’ the notions of reward and value within the notion of desire, it proposes that we instead replace the philosophical folk psychological notion of desire with the technical notions of reward and value. Once we do so, we can draw on these notions directly, and thereby explicitly harness their explanatory power to help address puzzles and debates in the philosophy of action.

Developing the view

Building on Schroeder’s (2004) view, then, I propose to recast deliberation and choice, traditionally expressed in terms of beliefs and desires, in terms of reward and value instead. Here, reward is defined as the intrinsic desirability of a given stimulus. Value, for its part, is defined as the total, expected, future reward associated with a given state. On this framework, an agent’s goal is to find an optimal policy that allows her to maximize value through interactions with her environment. The major achievement of the decision sciences over the past several decades has been to elucidate specific computational strategies that allow an agent to discover and use such value-maximizing policies.
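
In the standard notation of the decision sciences (added here as a gloss, not from the original post), the value of a state s under a policy π is

```latex
V^{\pi}(s) = \mathbb{E}_{\pi}\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \;\middle|\; s_{0} = s \right]
```

where r_t is the reward received at time t and γ ∈ [0, 1) discounts delayed reward; the agent seeks a policy π* that maximizes this quantity.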

Presently, evidence suggests that the mind relies on at least three such computational strategies and, by extension, three semi-autonomous decision systems for choice and action. The hardwired (‘Pavlovian’) system relies on automatic approach and withdrawal responses to appetitive and aversive stimuli, respectively. The habitual (‘model-free’) system gradually learns and caches positive and negative state-action pairs. And the deliberative (‘model-based’) system explicitly represents and selects from possible state-action pairs, often described in terms of a decision tree. For a more detailed discussion of these systems, see here.
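
To give a feel for the difference between the two learning-based systems, here is a minimal Python sketch (my illustration, not from the post; all names and numbers are hypothetical): the habitual system caches values from experienced rewards, while the deliberative system computes values on demand from an explicit model.

```python
# Habitual ('model-free'): cache the value of a state-action pair,
# nudging a stored estimate toward each newly experienced reward.
def habitual_update(q, state, action, reward, alpha=0.1):
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward - old)

# Deliberative ('model-based'): evaluate an action by explicit lookahead.
# model[(state, action)] is a list of (probability, reward, next_state_value).
def deliberative_value(model, state, action, gamma=0.9):
    return sum(p * (r + gamma * v) for p, r, v in model[(state, action)])

q = {}
for r in [1.0, 1.0, 0.0]:                    # three experienced outcomes
    habitual_update(q, "bedtime", "brush-teeth", r)
print(q[("bedtime", "brush-teeth")])         # cached estimate, updated only slowly

model = {("bedtime", "brush-teeth"): [(1.0, 0.0, -1.0)]}   # circumstances changed
print(deliberative_value(model, "bedtime", "brush-teeth"))  # recomputed on demand: -0.9
```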

How do these multiple systems interact? To start, each of the systems partially evaluates the action alternatives. Simultaneously, each system generates an estimate of how accurate its prediction is relative to the decision problem at hand. For example, the habitual system typically coordinates choice in familiar, complex settings, because, due to its caching procedure, it typically has a higher accuracy profile in familiar decision problems, even if it predicts a lower overall value than do either its deliberative or hardwired counterparts. By contrast, the deliberative system typically coordinates choice in novel, high-risk settings, because it can explicitly represent the different alternatives, even if it predicts a lower overall value than do either its habitual or hardwired counterparts.

These estimates, or accuracy profiles, are then compared, and the system with the highest accuracy profile is selected to direct the corresponding valuation task. These interactions are thus thought to be governed by an ‘accuracy-based’ Principle of Arbitration:

PA: Following partial evaluation, the system with the highest accuracy profile, i.e., that system most likely to provide an accurate prediction of expected value, relative to the decision problem at hand, directs the corresponding assessment of value (Daw et al. 2005, Lee et al. 2014).

Notably, according to PA, the choice of which system is used in a given context depends on its accuracy profile, and not on its prediction of value. One consequence of this is that an agent can assess an action A as being preferable to action B, but still do B – a feature that will be important to explaining how weakness of will is possible.
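
A minimal sketch of PA (my illustration; the numbers are invented): each system reports a value prediction together with an accuracy profile, and arbitration compares only the accuracy profiles.

```python
def arbitrate(systems):
    """systems maps a system name to (predicted_value, accuracy_profile).
    PA selects the system with the highest accuracy profile, ignoring
    its value prediction."""
    winner = max(systems, key=lambda name: systems[name][1])
    return winner, systems[winner][0]

# The habitual system can win arbitration even while assigning the chosen
# action a *lower* value than the deliberative system assigns an alternative
# -- the structural opening for weakness of will.
print(arbitrate({
    "deliberative": (10.0, 0.6),  # higher predicted value, lower accuracy
    "habitual":     (4.0, 0.9),   # lower predicted value, higher accuracy
}))  # -> ('habitual', 4.0)
```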

Applying the revised view to the puzzle of weakness of will

Once we adopt the reward- and value-based account, we discover that weakness of will is not only possible, but that there are actually multiple kinds of weakness of will, elicited by interactions between the different decision systems. (I present two of these kinds here – for a more detailed discussion, see Haas 2018).

Habitual weakness of will is elicited by interactions between the deliberative and habitual systems. Recall from above that the deliberative system typically has a higher reliability measure in novel settings, since its capacity for representation allows it to predict the values of various outcomes. By contrast, the habitual system is typically more reliable in complex but familiar circumstances, where representation would be both taxing and redundant. But it is not unusual for an important aspect of a familiar situation to change. Such circumstances elicit the most basic and harmless type of weakness of will.

If an agent opts for the typically more reliable but in fact inaccurate habitual system, she experiences habitual weakness of will. The information provided by the deliberative system enables the agent to know what the best course of action would be under these recently changed circumstances. Yet since the situation is broadly familiar, the habit-based approach has a high past cumulative success rate, or an overall high reliability measure. Thus, PA allocates the habitual system for action selection. Hence, the agent is aware of the most up-to-date and appropriate course of action in advance, but falls back on her less beneficial, habitual counterpart (for an extended discussion, see Daw et al. 2005). The agent experiences the signature phenomenology of weakness of will: she recognizes that it would be preferable to do A, but feels herself choosing to do B.

Habitual weakness of will accounts for several paradigm cases of weakness of will, including Davidson’s classic example of brushing his teeth. Davidson describes lying in bed at night and realizing that he’s forgotten to brush his teeth. All things considered, he thinks to himself, it would be better just to stay in bed and get a good night’s sleep; but he gets out of bed and goes to brush his teeth anyway (1970, 30). Here, the move to reward and value makes sense of the otherwise perplexing action: the habitual system dictates that brushing one’s teeth is a reliably valuable course of action, even though the circumstances make it the less valuable action overall.

In pruning-based weakness of will, by contrast, the deliberative and hardwired systems interact to issue a suboptimal choice. This second type of interaction occurs when an option represented in the deliberative system’s decision tree elicits either a positive or negative hardwired response. For example, a strongly positive alternative, represented early in the decision tree, can cause the entire opposing branch of the tree to be ‘pruned,’ i.e., rejected, so that it is no longer considered (Huys et al. 2012). Conversely, a strongly negative alternative, represented at an early node of the decision tree, may cause the entire subsequent branch of the tree to be pruned, so that none of the subsequent values are computed or represented (Huys et al. 2012).
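
Here is a toy version of the pruning mechanism (my illustration, loosely modeled on Huys et al. 2012; the threshold and tree are invented). A strongly negative immediate outcome cuts off a whole branch before any of its downstream values are computed; a strongly positive one works as the mirror image, pruning the competitors.

```python
PRUNE_THRESHOLD = -5.0  # hardwired aversion cutoff (illustrative value)

def best_value(node):
    """node = (immediate_reward, [child_nodes]). Returns the best achievable
    value from this node, or None if the hardwired system prunes it outright,
    in which case the values below it are never computed or represented."""
    reward, children = node
    if reward < PRUNE_THRESHOLD:
        return None
    values = [v for child in children if (v := best_value(child)) is not None]
    return reward + (max(values) if values else 0.0)

# A branch with an early, strongly aversive node (-6) hides a good outcome (+9);
# the surviving branch wins by default, even though it is worse overall.
tree = (0.0, [(-6.0, [(9.0, [])]),    # pruned: its net value (+3) is never considered
              (1.0, [(0.5, [])])])    # surviving branch: 1.0 + 0.5 = 1.5
print(best_value(tree))  # -> 1.5
```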

This kind of pruning-based weakness of will can account for another classic case of weakness of will, described by J.L. Austin (1956/7, 198). Austin imagines sitting at High Table, being tempted by a second helping of bombe, and succumbing. On the present account, Austin begins to represent his options in the form of a decision tree, consisting of the ‘eat’ and ‘don’t eat’ alternatives. The full tree would represent the ensuing consequences of both alternatives. But the highly appealing nature of the bombe, represented early in the tree, engages the hardwired system and causes the tree to be pruned, such that the non-bombe alternatives are no longer considered. Austin thus represents the negative consequences of the only remaining alternative – eating the bombe – but has no other alternatives left to pursue. He eats the bombe. The specific phenomenal features of Austin’s experience are also accounted for. Although it is the product of the hardwired system, pruning-based weakness of will needn’t be rushed or impulsive. Rather, the pruning of the decision tree simply eliminates certain choices, and thereby leaves the less optimal alternative to be pursued “with calm and even with finesse.”

Discussion

The key thing to notice is that, in both of these cases, we can explain weakness of will without running into the inconsistency between the two statements that generated the puzzle. We can generalize this observation. Recall from the overview that the original puzzle was framed by the following claim:

FOLK PSYCHOLOGICAL THEORY: If, at time t, an agent judges that it is better to do A than B, and she believes she is free to do A, then, provided she tries to do either at that time, she will try to do A and not B.

But in light of our shift from desires to rewards and value, we can now revise this claim so that it reads:

REVISED PFP THEORY: If, at time t, an agent’s decision system D values some option A more highly than some option B, she believes she is free to do A, and PA allocates D for action-selection, then if she tries to do either at that time, she will try to do A and not B. Notice, though, that if we now add…

WEAKNESS OF WILL: An agent judges that it is best to do A at t, believes she is free to do A at t, but, despite trying to do something at t, does not try to do A.

…then the two statements are no longer inconsistent. On the plausible assumption that judgment is underwritten by the deliberative system, weakness of will occurs in any case in which PA allocates either the hardwired or the habitual system for action selection.

In this way, a careful revision to philosophical folk psychology allows us to arrive at a novel understanding of the nature – and kinds – of weakness of will.

References

Austin, J. L. (1956/57). A plea for excuses. In Austin (1979), 175-204.

Austin, J. L. (1979). Philosophical papers, 3rd ed., J. O. Urmson and G. J. Warnock (eds.), Oxford: Oxford University Press.

Davidson, D. (1970). How is weakness of the will possible? In Davidson (1980), 21-42.

Davidson, D. (1980). Essays on actions and events. Oxford: Clarendon Press.

Daw, N. D., Niv, Y., & Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8(12), 1704-1711.

Huys, Q. J., Eshel, N., O’Nions, E., Sheridan, L., Dayan, P., & Roiser, J. P. (2012). Bonsai trees in your head: how the Pavlovian system sculpts goal-directed choices by pruning decision trees. PLoS Computational Biology, 8(3).

Lee, S. W., Shimojo, S., & O’Doherty, J. P. (2014). Neural computations underlying arbitration between model-based and model-free learning. Neuron, 81(3), 687-699.


Nora Heinzelmann:

Delay discounting and weakness of will

Delay discounting theory has been widely used in the empirical sciences as a model for weakness of the will, and philosophers have followed suit (Zheng 2001, Mele 1987). In the following, I shall point out some limitations of this approach, arguing that although delay discounting theory cannot capture certain cases of weak-willed action, it is a powerful model for many core cases.

1. Delay discounting as a model for weakness of the will

Delay discounting theory was initially developed within classical economic theory (Samuelson 1937), whose framework axiomatically assumes that preference, choice, and (expected) value or utility [1] are congruent (von Neumann & Morgenstern 1953 [1944], Becker 1976, Steele & Stefánsson 2016). Very roughly, when presented with several options or prospects, an agent prefers one over another iff it is more desirable or choice-worthy [2]. Preference is thus a relation between options. Furthermore, the framework assumes that the agent chooses the most preferred option, which is reflected in their behaviour and thus empirically measurable. In typical circumstances, a utility function numerically represents the preference relation. This function maps utility or value onto options.
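
In symbols (a standard statement of this representation assumption, added here as a gloss): a utility function u represents the preference relation just in case, for all options A and B,

```latex
A \succeq B \iff u(A) \geq u(B)
```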

Delay discounting is the change of preference concerning a prospect, and thus of its expected value, with its temporal delay: typically, the more delayed the reward is, the lower its discounted value, and, other things being equal, an agent chooses the earlier of two delayed rewards.
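
In the classical model this takes an exponential form (a standard formulation, added as a gloss): a reward of undiscounted value A at delay D has discounted value

```latex
V(A, D) = A\,e^{-kD}
```

where k is the agent’s discount rate which, as noted below, may differ across kinds of reward.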

But the agent may discount different rewards at different rates (e.g., she might discount food more steeply than gold) or assign different delays to them, i.e., she might expect to get one reward earlier than another. Hence the relative preference between two rewards may change over time as the delay elapses. In some situations, this can lead to a preference reversal: whilst the agent prefers A over B at some point in time, she prefers B over A at another time.

Let us now see how this approach can model weak-willed behaviour. A classic example of weakness of the will is yielding to the temptation of delicious but unhealthy food (e.g., Aristotle, Nicomachean Ethics 1147a31–1147b6, Mele 2012, p. 37). Assume an agent knows that she will ruin a good night’s sleep if she overeats at dinner. Imagine she anticipates that she will be tempted to overeat tonight and therefore resolves to skip dessert. Facing the two options of having dessert or foregoing it for a good night’s sleep, she thus prefers the latter, she values it more, and she would choose it; i.e., if she were to order dinner now, she would not order dessert. Within our discounting framework, the discounted value of the dessert is lower than that of a good night’s sleep.

But now the delays elapse and dinnertime arrives. The agent is suddenly very tempted to have dessert. Imagine she orders it after all, knowing that she is thereby sacrificing a good night’s sleep. Within the discounting framework, the expected value of the dessert is now greater than that of the sound sleep. This is possible because sleeping well is more delayed than enjoying dessert. Moreover, the agent probably discounts food and sleep at different rates, i.e., she discounts the dessert less steeply than the sleep. Hence she reverses her preferences and gives in to temptation; she performs a weak-willed action.
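
A small sketch of the reversal (my illustration; all values and the discount rate are invented). It uses the hyperbolic form V = A/(1 + kD) common in the empirical literature (e.g., Ainslie 2001), under which the imminent smaller reward can overtake the delayed larger one even at a single rate:

```python
def discounted(amount, delay, k=1.0):
    """Hyperbolic discounting: value falls off steeply near delivery."""
    return amount / (1.0 + k * delay)

DESSERT, SLEEP = 5.0, 10.0       # undiscounted values (hypothetical)
T_DESSERT, T_SLEEP = 10.0, 18.0  # hours until each reward, measured at noon

for label, now in [("noon", 0.0), ("dinnertime", 10.0)]:
    v_dessert = discounted(DESSERT, T_DESSERT - now)
    v_sleep = discounted(SLEEP, T_SLEEP - now)
    print(label, "prefers dessert" if v_dessert > v_sleep else "prefers sleep")
# noon prefers sleep          (0.45 vs 0.53)
# dinnertime prefers dessert  (5.00 vs 1.11)
```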

2. Limitations

Delay discounting theory is extremely powerful because it can not only model but also predict choice with econometric precision. Not surprisingly, it has been used to model a wide variety of weak-willed behaviour in different disciplines, from dieting to procrastination to addiction (Ainslie 2001, Kirby, Petry & Bickel 1999). However, this approach also has at least three limitations. I shall take them in turn.

First, recall that choice, value (utility) and preference are intimately linked within the economic framework of delay discounting theory. That is, it is axiomatically impossible to describe weakness of will in any of the following ways:

1. An agent chooses A over B but values B more than A.

2. An agent prefers A over B but chooses B over A.

3. An agent values A more than B but prefers B over A.

Thus, philosophers relying on evidence that presupposes the economic framework should refrain from describing weakness of will in any of those three or similar ways. This seems to be primarily a terminological issue.

Second and relatedly, delay discounting theory is not suitable to model what we may call instantaneous weakness of the will, where an agent chooses one option and at the same time also judges that it would be better to do something else. For instance, imagine our agent summons a waiter to order dessert but simultaneously tells her friends that she really prefers a good night’s sleep. Such a case seems almost inconceivable from the perspective we are considering here. It seems we would have to deny that both actions are genuine: either the agent is not wholeheartedly ordering dessert, or her utterance is not sincere.

Third, there is a more technical issue with the delay discounting model as we have conceived of it so far. Take the classic ‘marshmallow case’: a child is presented with a choice between either having one marshmallow immediately or waiting for a second one to arrive (Mischel & Ebbesen 1970, Mischel, Shoda & Rodriguez 1989). Imagine the child resolves to wait, indicating that she values two marshmallows later more than one now. However, after having waited for a while, she gives in to temptation and eats the one marshmallow. This reveals that she prefers one marshmallow now over two marshmallows later. But if that is so, she should not initially have started to wait, when the delay was even greater. It seems as if the child’s impatience increased over time. Delay discounting theory does not permit this: a delayed reward becomes more valuable as the delay elapses, not less. As a result, researchers have proposed to replace classical delay discounting models with more complex ones. These allow for such preference reversals by incorporating, e.g., agents’ sensitivity towards uncertainty or visceral impulses (Laibson 1997, Dasgupta & Maskin 2005). Philosophers seem well advised to consider those models when drawing on empirical evidence in their work.

3. Conclusion

Delay discounting theory offers a powerful model for weak-willed behaviour: it can describe and predict how an agent’s preferences change over time and at what point she will reverse them. It is thus well suited to account for many prime examples of weakness of the will. Still, it is so far not able to account for all of them. For one thing, the approach might find it difficult to describe instantaneous cases of seemingly weak-willed behaviour. Future research may be able to address this and other issues.

Notes

[1] I use the two interchangeably.

[2] We set aside special cases like indifference.

References

Ainslie, G. (2001). Breakdown of will, Cambridge University Press, Cambridge.

Aristotle (n.d.). Nicomachean ethics, ed. I. Bywater (1894), Oxford University Press, Oxford.

Becker, G. (1976). The economic approach to human behavior, University of Chicago Press.

Dasgupta, P. & Maskin, E. (2005). Uncertainty and hyperbolic discounting, The American Economic Review 95(4): 1290–9.

Kirby, K., Petry, N. & Bickel, W. (1999). Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls, Journal of Experimental Psychology: General 128(1): 78–87.

Laibson, D. (1997). Golden eggs and hyperbolic discounting, Quarterly Journal of Economics 112(2): 443–77.

Mele, A. (1987). Irrationality, Oxford University Press, New York.

Mele, A. (2012). Backsliding, Oxford University Press, Oxford.

Mischel, W. & Ebbesen, E. (1970). Attention in delay of gratification, Journal of Personality and Social Psychology 16(2): 329–37.

Mischel, W., Shoda, Y. & Rodriguez, M. (1989). Delay of gratification in children, Science 244(4907): 933–8.

Samuelson, P. (1937). A note on measurement of utility, The Review of Economic Studies 4(2): 155–61.

Steele, K. & Stefánsson, O. (2016). Decision theory, in E. N. Zalta (ed.), The Stanford encyclopedia of philosophy, winter 2016 edn, Metaphysics Research Lab, Stanford University.

von Neumann, J. & Morgenstern, O. (1953 [1944]). Theory of games and economic behaviour, 3rd edn, Princeton University Press, Princeton.

Zheng, Y. (2001). Akrasia, picoeconomics, and a rational reconstruction of judgment formation in dynamic choice, Philosophical Studies 104(3): 227–51.


Neil Levy:

Nessa takes herself to judge that she ought to work tonight on the paper she promised for a volume, but she watches Netflix instead. She acted intentionally, and her behavior was reasons-responsive: if someone had paid her to work on the paper, or if the viewing options had been worse, she would have acted in accordance with her judgment. In the past, when Nessa has faced conflicts very like this one, she has sometimes acted in accordance with her judgment and sometimes she has given in to the temptation to do something more immediately rewarding. She experiences her behavior as voluntary. But why would an agent voluntarily and intentionally act contrary to their own best judgment?

Philosophers often seem to assume, at least implicitly, that there is some one thing going on in cases like this. They are explained by a mismatch between the strength of Nessa’s desires and the motivational power of her judgments, or by a failure of her deliberative system to override her impulsive system, and whatever explains this case will equally explain her previous failures, as well as mine and yours. I suspect that’s a mistake (in part, but only in part, because I suspect that the deliberative system is a virtual system, implemented on the basis of non-deliberative mechanisms, and without motivational powers of its own). There are many things going on in cases we describe as involving weakness of the will and no unified account can capture them all. Nor should we think that there is any neat way of picking out which mechanism is at work in what case: I doubt introspection can reliably distinguish between cases.

Perhaps in some cases, agents like Nessa have desires that are out of line with the motivational strength of their judgments. Perhaps some cases are explained by a depletion of self-control resources (a story I have defended elsewhere (Levy 2011), but which I am no longer confident about, in the light of the failure of crucial experiments to replicate; Hagger, Chatzisarantis et al. 2016). In many, I suspect, the agent is simply wrong about what her own best judgment is. Our access to our mental states is patchy and often indirect, and I doubt that ‘best judgment’ has a proprietary phenomenology. Very often, we discover what our judgments are by seeing what we are disposed to say, and we may sometimes be disposed to report what we think we ought to do, in the light of certain considerations (moral or prudential, in particular), mistaking that for what we judge we ought, all things considered, to do.

I want to highlight a different mechanism for weakness of will here, however. This mechanism bears some resemblance to the mistake account just sketched, inasmuch as when the agent acts, she doesn’t act contrary to her own best judgment. However, when she articulated the judgment, she may indeed have reported her own best judgment. In the meantime, she has changed her mind. She has experienced what Richard Holton (2009) calls judgment shift.

The inspiration for the judgment shift account I want to advance here comes from the observation that the mid-brain dopaminergic system is dysregulated in drug addiction in a way that suggests a malfunction of its role as a prediction system. In a series of classic experiments, Schultz and colleagues (Schultz et al. 1992; Schultz et al. 1997) demonstrated a spike in phasic dopamine in response not to reward (as sometimes thought) but to unexpected reward. The system adapts to reward delivery if it is expected. Thus, the spike in phasic dopamine occurs not in response to the reward itself, but to a signal that the reward is about to be delivered (assuming that signal is itself unexpected). Because drugs of addiction (including alcohol and tobacco) in one way or another increase the availability of dopamine, the very currency which the system uses in prediction, it cannot adapt to these rewards. A spike in phasic dopamine occurs in response both to a signal that a reward is available (cravings are very sensitive to cues of drug availability in addicts) and to the reward itself. The result is that the initial signal is registered as too small, relative to the actual reward value of the good, and the system attempts to adjust by increasing its magnitude. But no such adjustment will ever be enough, so long as the reward increases dopamine directly.
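
A toy version of this dynamic (my sketch; the non-compensable error term is an assumption in the spirit of temporal-difference models of addiction such as Redish 2004, which Levy does not cite). For a natural reward, the value estimate adapts until the prediction error vanishes; a drug that raises dopamine directly puts a floor under the error signal, so no adjustment of the estimate is ever enough:

```python
def learn(reward, drug_dopamine=0.0, alpha=0.1, steps=200):
    """Adapt a value estimate to a repeatedly delivered reward.
    drug_dopamine models a direct pharmacological boost that the
    prediction system cannot predict away (assumption; see lead-in)."""
    v = 0.0
    for _ in range(steps):
        delta = max(reward - v, 0.0) + drug_dopamine  # error never falls below the boost
        v += alpha * delta
    return round(v, 2)

print(learn(1.0))                     # natural reward: estimate settles at ~1.0
print(learn(1.0, drug_dopamine=0.5))  # drug: estimate never settles; it keeps growing
```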

When the person encounters a cue predictive of drug availability, she therefore experiences a powerful signal that the world is better than expected. This signal constitutes what is called, in the predictive processing literature, surprisal, or prediction error (Levy 2014). The brain is an error minimization machine. There are always multiple ways of updating the model of the world, or acting on the world, to minimize error, but sometimes the most accessible path involves updating the judgment, such that the person shifts from judging that this is a world in which drugs are not to be consumed to judging that they ought to be consumed. She may have sincerely judged that she ought not to consume, but now she makes a different judgment: prudentially, but not all things considered, drugs are not to be consumed; or drugs ought not to be consumed all things considered, except when… (the day has been especially trying; it would be rude to refuse… We are excellent confabulators).

Might this same mechanism be at work in ordinary cases of weakness of will, i.e., cases in which the prediction mechanism is working as designed? While the idea is speculative (doubly speculative, in fact: the empirical evidence in favor of the judgment shift account of addiction underdetermines the account), it is, I think, plausible. When agents with properly functioning prediction systems encounter cues of reward availability, these cues constitute a prediction error relative to a model of the world on which those particular rewards are not to be consumed (now). Because the system is functioning as designed, the signal will be weaker than in the case of the addict. That means it is less likely to be passed up the processing hierarchy, less attention grabbing (attention is a mechanism for making errors more precise), and more easily minimized by action, physical or mental. But a sufficiently large and sufficiently precise error must be minimized, and one way of minimizing it is model update: adopting a model of the world according to which the reward should be consumed (now). The prediction update story may thereby underwrite judgment shift.

How do we best prevent judgment shift, in ourselves and others? There is more than one way to do this. If we are strongly committed to a higher-order model that conflicts with such shifts (say a conception of ourselves as continent), we are probably less likely to experience them. Of course, it is not trivial to get ourselves to be genuinely committed to such a model. It is a doxastic state, and committing to it probably requires generating a great deal of evidence that it is true: that is, actually resisting temptation. So there’s a chicken and egg problem here. An easier way to avoid judgment shift, and one that I expect is routinely utilized by ordinary people (with or without realizing that’s what they’re doing), is structuring one’s activities, or the environment in which one acts, so that cues that signal reward availability are not encountered when they’re unwanted (Levy 2017). This strategy, too, may not be easy to implement, inasmuch as it requires a great deal of control over one’s environment and one’s activities. Others may attempt to wrest control from us, sometimes with the aim of ensuring we encounter cues that we might prefer to avoid. It’s not for nothing that adverts are placed in locations where they are hard to avoid, or that supermarkets place the high-margin and highly tempting candy bars near the checkouts. They’re trying to induce judgment shift in us.

For most of us, self-control depends in important part on some degree of control over our environment. Even for the continent – those with a self-model to which they assign a high probability, inconsistent with weakness of will – self-control may depend genetically on control over the environment: to possess such a self-model at least typically is going to depend on prior possession of evidence in its favor, and that evidence will probably be gathered through successful control in a control-conducive environment. A control-conducive environment, in turn, is likely to be one that is reasonably under our control. If anything like this is true, then self-control is very importantly a political issue. Who controls our environments? Who lacks the resources for such control, and instead finds themselves buffeted by external forces? By focusing only on internal mechanisms for control, and even more by blaming those who suffer from self-control failures, we turn ourselves into the ideological footsoldiers of oppression.

References

Hagger, M. S., Chatzisarantis, N. L. D. et al. 2016. A Multilab Preregistered Replication Of The Ego-Depletion Effect. Perspectives on Psychological Science 11: 546–573.

Holton, R. 2009. Willing, Wanting, Waiting. Oxford: Oxford University Press.

Levy, N. 2011. Resisting Weakness of the Will. Philosophy and Phenomenological Research 82: 134-155.

Levy, N. 2014. Addiction as a Disorder of Belief. Biology & Philosophy, 29: 315-225.

Levy, N. 2017. Of Marshmallows and Moderation. In Walter Sinnott-Armstrong and Christian B. Miller (eds.) Moral Psychology, Volume 5: Virtue and Happiness. Cambridge: MIT Press.

Schultz W., Apicella P., Scarnati E., Ljungberg T. 1992. Neuronal activity in monkey ventral striatum related to the expectation of reward. Journal of Neuroscience 12: 4595-4610.

Schultz W., Dayan P., Montague P.R. 1997. A neural substrate of prediction and reward. Science 275: 1593-1599.


Agnes Moors:

Towards a goal-directed account of weak-willed behavior

People often engage in behavior that is not in their best interest – so-called suboptimal or irrational behavior. Examples (of partially overlapping categories) are action slips (e.g., typing in one’s old password), costly or recalcitrant emotional behavior (e.g., costly aggression, avoidance in fear of flying), arational behavior (e.g., slamming the door out of anger), impulsive/compulsive behavior (e.g., costly aggression, addiction), and weak-willed or akratic behavior. The latter category comprises behaviors that people engage in despite correctly judging that other behavior would be more optimal. People know smoking and drinking are bad for them, but they do it anyway. They know exercising is good for them, but they fail to get off the couch.

To explain suboptimal behaviors, theorists have turned to dual process models (Heyes & Dickinson, 1990), in which behaviors can be produced either by (a) a stimulus-driven process in which a stimulus activates the association between the representation of stimulus features and the representation of a response  (S→[S-R]→R) or (b) a goal-directed process in which the values and expectancies of the outcomes of one or more behavior options are weighed before an action tendency is activated (S → [S:R-O → R]→R). Note that the term habit is used for stimulus-driven processes that have been installed via an overtrained operant conditioning procedure in which performance of the same response given a certain stimulus repeatedly led to the same outcome. This procedure is supposed to stamp in the S-R association while the outcome is no longer represented or activated. 

To diagnose whether a process is stimulus-driven or goal-directed, researchers typically conduct a devaluation test or a contingency degradation test (Hogarth, 2018). If devaluation of the outcome of a behavior or a degradation of the likelihood that the behavior will lead to the outcome subsequently reduces (/does not reduce) the behavior, it is inferred that the value and expectancy of the outcome of the behavior were represented (/not represented) and hence that the behavior was caused by a goal-directed (/stimulus-driven) process. 
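
The diagnostic logic can be put in a few lines (my illustration; names and numbers are hypothetical). A goal-directed controller recomputes the action’s worth from the outcome’s current value and expectancy, so devaluation changes its behavior; a stimulus-driven controller responds from a cached S-R link and is insensitive to it:

```python
def goal_directed_acts(expectancy, outcome_value, threshold=0.1):
    """S -> [S: R-O -> R]: act only if the expected outcome is still worth it."""
    return expectancy * outcome_value > threshold

def stimulus_driven_acts(sr_strength, threshold=0.1):
    """S -> [S-R]: act whenever the cue fires the cached link."""
    return sr_strength > threshold

# Outcome devaluation: the outcome's value drops from 1.0 to 0.0 between tests.
print(goal_directed_acts(0.9, 1.0), goal_directed_acts(0.9, 0.0))  # True False
print(stimulus_driven_acts(0.8), stimulus_driven_acts(0.8))        # True True
```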

Traditional dual-process models have a default-interventionist architecture, with the stimulus-driven process as the default and the goal-directed process as an occasional intervenor. This architecture is rooted in the idea of a trade-off between automaticity and optimality, which are both tied to the computational complexity of the processes. Stimulus-driven processes are seen as simple and therefore automatic but at the same time rigid (because they are insensitive to outcome devaluation and contingency degradation) and therefore more likely to produce suboptimal behavior. Goal-directed processes, on the other hand, are seen as complex and therefore nonautomatic but at the same time flexible (because they are sensitive to outcome devaluation and contingency degradation) and therefore more likely to produce optimal behavior. The automatic nature of the stimulus-driven process makes it the default process. However, because this process is more likely to lead to suboptimal behavior, it must sometimes be corrected by the goal-directed process. The problem is that this goal-directed process is seen as nonautomatic, which means that it can only intervene when there is enough opportunity, capacity, and/or motivation (Moors, 2016; Moors & De Houwer, 2006). When these factors are low, the organism has no choice but to switch from the goal-directed process to the stimulus-driven process. 

Empirical evidence for the default-interventionist model comes in the form of dissociations showing that when opportunity, capacity, and/or motivation are high, the goal-directed process determines behavior whereas when these factors are low (because of time pressure, stress, sleep deprivation etc.) the stimulus-driven process takes over (e.g., Schwabe & Wolf, 2009; but see below).  

According to the traditional model, people continue to smoke against their better judgment because their  behavior is caused by a stimulus-driven process (a habit) in which the sight of cigarettes directly activates the tendency to smoke, and the goal-directed process that induced the tendency to refrain from smoking (at the service of a health goal) was unable—“too weak”—to successfully intervene (Baumeister, 2017; Everitt, Dickinson, & Robbins, 2001; Tiffany, 1999; Wood & Rünger, 2016). 

Recently, I proposed an alternative dual process model (Moors, 2017a, b; Moors, Boddez, & De Houwer, 2017; Moors & Fischer, in press) with a parallel-competitive architecture, which is rooted in the idea that stimulus-driven and goal-directed processes can both be automatic (for arguments, see Moors et al., 2017). If both processes can be automatic, there should be a substantial number of cases in which they operate in parallel and enter into competition with each other. The model moreover assumes that when both processes do enter into competition, the goal-directed process should win, because goal-directed processes are automatic and optimal whereas stimulus-driven processes are only automatic, and the system should prioritize the process with the most advantages. In this model, the goal-directed process is the default determinant of behavior and will determine the lion’s share of behavior, whereas the stimulus-driven process determines behavior only in exceptional cases.

In line with this view, evidence for stimulus-driven processing based on habit learning seems to be weak. In animal outcome devaluation studies, for instance, stimulus-driven drug seeking behavior is confined to highly specific conditions such as a no-choice procedure (a single action leading to a single outcome: drugs), and it is fragile in that it is quickly taken over by a goal-directed process when the devalued outcome (which is left out in the test phase) is reintroduced (Hogarth, 2018). These conditions do not resemble those in human natural environments: We always have a choice between drugs and natural rewards, and we never get a break from the devalued outcome (e.g., hangover, guilty feelings).

In humans, evidence for the role of stimulus-driven processing in drug seeking and other behavior is even weaker (Hogarth, 2018). A recent series of five attempts to find evidence for habit learning in humans failed (de Wit et al., 2018). Several prior studies that did report evidence for stimulus-driven processing used a task design (the “fabulous fruit game”; de Wit, Niry, Wariyar, Aitken, & Dickinson, 2007) that turned out to be unsuitable for detecting stimulus-driven processing (De Houwer, Tanaka, Moors, & Tibboel, 2018).

Evidence for goal-directed processing is abundant, not only as the determinant of optimal behavior but also as the determinant of suboptimal behavior such as drug seeking (see the review by Hogarth, 2018). Before citing some of this evidence, let me first explain how the alternative dual process model accounts for suboptimal behavior. To do this, I need to elaborate a bit more on the goal-directed process.

The goal-directed process does not occur in isolation, but can be embedded in a cycle, starting with a comparison between a stimulus and a first goal (which is the representation of a valued outcome). If the stimulus and this first goal are discrepant, a second goal arises, which is to reduce the discrepancy. This can be done either by acting to change the actual stimulus (i.e., assimilation), by changing the first goal (i.e., accommodation), or by changing the interpretation of the stimulus (i.e., immunization), depending on which of these broad strategies has the highest expected utility. If the person chooses to act, the specific action option with the highest expected utility will activate its corresponding action tendency (which can be considered as a third goal). Once the action tendency is translated into an overt action, it produces an outcome, which is fed back as the input to a new cycle. The cycle is repeated until there is no discrepancy left. Note that all steps in the cycle can in principle occur outside of awareness.
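
As a minimal sketch (my illustration; the interface and example are hypothetical), one run of the cycle might look like this, with the three broad strategies competing on expected utility:

```python
def reduce_discrepancy(stimulus, goal, strategies, max_cycles=10):
    """strategies maps a name ('assimilation', 'accommodation', 'immunization')
    to (expected_utility, transform), where transform(stimulus, goal) returns
    an updated (stimulus, goal) pair."""
    for _ in range(max_cycles):
        if stimulus == goal:          # no discrepancy left: the cycle ends
            break
        best = max(strategies, key=lambda name: strategies[name][0])
        stimulus, goal = strategies[best][1](stimulus, goal)  # outcome fed back
    return stimulus, goal

# e.g., accommodation wins and the first goal is revised to match the stimulus:
print(reduce_discrepancy("stressed", "calm",
    {"accommodation": (0.8, lambda s, g: (s, s)),
     "assimilation":  (0.5, lambda s, g: (g, g))}))  # -> ('stressed', 'stressed')
```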

People have many goals, some of which may conflict with each other. In the alternative model, self-regulation conflicts are not understood as conflicts between a stimulus-driven and a goal-directed process, but as conflicts between two goal-directed processes. If a health goal does not manage to make a person quit smoking, there must be another goal that is either more valued and/or that has a higher expectancy of being reached that wins the competition. Examples of other goals are a hedonic goal, a social goal, the goal for autonomy, etc. (Baumeister, 2017; Kassel, Stroud, & Paronis, 2003). 

The multiple-goal argument has implications for the methods used to diagnose whether a behavior is caused by a stimulus-driven or goal-directed process. The upshot is that if a behavior is found to be insensitive to the devaluation of one outcome, it may still be driven by another outcome. If stress leads to eating beyond satiation, this may not indicate that eating was stimulus-driven (as argued by Schwabe & Wolf, 2009), but perhaps that eating is a strategy to reduce stress. Recent work has started to re-examine purported evidence of stimulus-driven processing by manipulating the fulfilment of other goals (see also Kopetz, Woerner, & Briskin, 2018). 

Critics may object that agents of weak-willed behavior typically do not attribute a higher value to their hedonic goal than to their health goal. And even if they do (but are unaware), this does present a puzzle. 

One part of the solution is to consider that for many substance users, the hedonic goal is not the goal to add extra positive sparkles to an already bearable existence, but rather the goal to reduce unbearable stress or negative affect. What good is it to strive for a long, healthy life, if you cannot even survive another day?

Another part of the solution lies in the fact that behaviors are not only chosen on the basis of the values of their outcomes, but also on the basis of the expectancies that they will lead to these outcomes. So even if a smoker does not attribute a higher value to her hedonic goal than to her health goal, she may still estimate that one smoke is more likely to produce pleasure now than that abstinence is likely to avoid bad health later. 

One may argue that behavior that is still at the service of some goal does not qualify as truly suboptimal (because it contributes to goal satisfaction), but merely appears to be suboptimal. A smoker may be correct in estimating that one smoke is more likely to produce pleasure now than that abstinence is likely to avoid bad health later. Thus, the optimal decision would be to have another smoke, even if—paradoxically—an accumulation of such optimal decisions is likely to result in a suboptimal outcome in the end (Ainslie, 2001). There is room for debate, of course, about whether optimality should only be considered in relation to “the end” or whether it is also optimal to satisfy short-term goals (Lemaire, 2016).

The reason why many decisions appear suboptimal is that the goal that is driving the behavior is not always obvious or conflicts with societal norms. A smoker may not realize how intense the stress is that she tries to alleviate by smoking, or she may not be aware that smoking is partly an act of rebellion, a way to affirm her autonomy (against “nanny state” coercion; Le Grand & New, 2015).

But goal-directed processes may also be invoked to explain truly suboptimal behavior. Such behavior can be understood as the result of noise or sand in the wheels of the goal-directed cycle. Several things may go wrong in this cycle.

First, a person may fail to notice a discrepancy between the stimulus and a goal and hence the need to take action, or she may fail to notice that a stimulus has different implications for different goals. However, this is typically not the place where things derail in the case of weak-willed behavior. 

Second, a person may choose a less than optimal behavior option because more optimal behavior options are simply lacking from her behavior repertoire. It is possible that people who smoke to reduce their stress have not yet considered other, less costly behavior options to reduce their stress, such as vaping or yoga. 

Third, given that expectancies and values are subjective, they may not correspond to objective likelihoods and values (Tversky & Kahneman, 1992). In many self-regulation conflicts, the choice is between one behavior option (e.g., smoking) that has a short-term, certain, positive outcome (e.g., hedonic pleasure) and another behavior option (e.g., abstinence) that has a long-term, uncertain, negative outcome (e.g., cancer). All else equal, short-term outcomes are seen as more likely (i.e., availability effect) and as more positive (i.e., temporal discounting effect) than long-term outcomes. Temporal discounting happens to be more pronounced in smokers, although it is unclear whether this is a predisposing factor or a defensive consequence of smoking (Baumeister, 2017). Likewise, certain effects are seen as more likely than uncertain effects (of course), but they are also more heavily weighted (i.e., certainty effect).

In addition to these content-less biases, smokers’ expectancies about whether smoking will lead to specific other outcomes, such as hedonic outcomes (in the form of stress reduction or the absence of withdrawal symptoms), may also be more or less accurate. There is no simple answer to the question whether smokers’ belief in the stress-reducing powers of smoking is accurate (e.g., Cook, Baker, Beckham, & McFall, 2017). There is evidence that smokers do overestimate the intensity of withdrawal symptoms, and this may encourage them to give in sooner rather than later. “If the end point will be the same, why suffer first?” (Baumeister, 2017, p. 81). 

Note that the theoretical rationality of biases and false beliefs does not need to match their practical rationality: Some in/accurate beliefs may promote/hinder goal satisfaction. For instance, optimistic illusions have been associated with increased well-being (although as always, the picture is mixed, e.g., Bortolotti & Antrobus, 2015). 

Finally, one may wonder whether it makes sense to talk about the objective value of a goal/outcome. At first sight, values are always values for a person, and so it seems that values can only be subjective. On second thought, however, the value of any lower-order goal depends on the expectancy that it will satisfy a valued higher-order goal, and this expectancy could be more or less accurate. A person may have the goal to become rich as a strategy to achieve happiness, but this strategy may turn out to be ineffective (Ryan & Deci, 2001). Applied to the case of smoking against better judgment, a person may smoke to satisfy the goal for hedonic pleasure, but the goal for hedonic pleasure (or prioritizing hedonic pleasure over health) may turn out to be an ineffective strategy to achieve happiness.

In sum, some cases of weak-willed behavior may more properly be categorized as strong-willed, because they were driven by more valuable or more easily achievable goals that were not always obvious to the agent, and therefore merely appeared weak-willed. Other cases of weak-willed behavior are best understood as stemming from errors in the evaluation of values or expectancies, but here too, the term weak-willed does not cut any ice.

References

Ainslie, G. (2001). Breakdown of will. New York: Cambridge University Press.

Bortolotti, L., & Antrobus, M. (2015). Costs and benefits of realism and optimism. Current Opinion in Psychiatry, 28(2), 194.

Cook, J. W., Baker, T. B., Beckham, J. C., & McFall, M. (2017). Smoking-induced affect modulation in nonwithdrawn smokers with posttraumatic stress disorder, depression, and in those with no psychiatric disorder. Journal of Abnormal Psychology, 126(2), 184.

De Houwer, J., Tanaka, A., Moors, A., & Tibboel, H. (2018). Kicking the habit: Why evidence for habits in humans might be overestimated. Motivation Science, 4, 50-59.

de Wit, S., Kindt, M., Knot, S. L., Verhoeven, A. A., Robbins, T. W., Gasull-Camos, J., … & Gillan, C. M. (2018). Shifting the balance between goals and habits: Five failures in experimental habit induction. Journal of Experimental Psychology: General, 147(7), 1043-1065.

de Wit, S., Niry, D., Wariyar, R., Aitken, M. R. F., & Dickinson, A. (2007). Stimulus-outcome interactions during instrumental discrimination learning by rats and humans. Journal of Experimental Psychology: Animal Behavior Processes, 33, 1–11.

Everitt, B. J., Dickinson, A., & Robbins, T. W. (2001). The neuropsychological basis of addictive behaviour. Brain Research Reviews, 36(2-3), 129-138.

Hogarth, L. (2018). A critical review of habit theory of drug dependence. In B. Verplanken (Ed.), The psychology of habit (pp. 325-341). Springer, Cham.

Kopetz, C. E., Woerner, J. I., & Briskin, J. L. (2018). Another look at impulsivity: Could impulsive behavior be strategic? Social and Personality Psychology Compass, 12(5), e12385.

Le Grand, J., & New, B. (2015). Government paternalism: Nanny state or helpful friend? Princeton University Press.

Lemaire, S. (2016). A stringent but critical actualist subjectivism about well-being. Les ateliers de l’éthique, 11(2-3), 133–150.

Moors, A., & De Houwer, J. (2006). Automaticity: A theoretical and conceptual analysis. Psychological Bulletin, 132, 297-326.

Moors, A. (2016). Automaticity: Componential, causal, and mechanistic explanations. Annual Review of Psychology, 67, 263-287.

Moors, A. (2017a). Integration of two skeptical emotion theories: Dimensional appraisal theory and Russell’s psychological construction theory. Psychological Inquiry, 28, 1-19.

Moors, A. (2017b). The integrated theory of emotional behavior follows a radically goal-directed approach. Psychological Inquiry, 28, 68-75.

Moors, A., Boddez, Y., & De Houwer, J. (2017). The power of goal-directed processes in the causation of emotional and other actions. Emotion Review, 9, 310-318.

Moors, A., & Fischer, M. (in press). Demystifying the role of emotion in behavior: Toward a goal-directed account. Cognition & Emotion.

Ryan, R. M., & Deci, E. L. (2001). On happiness and human potentials: A review of research on hedonic and eudaimonic well-being. Annual Review of Psychology, 52(1), 141-166.

Schwabe, L., & Wolf, O. T. (2009). Stress prompts habit behavior in humans. Journal of Neuroscience, 29(22), 7191-7198.

Tiffany, S. T. (1999). Cognitive concepts of craving. Alcohol Research & Health, 23, 215–224.

Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323.

Wood, W., & Rünger, D. (2016). Psychology of habit. Annual Review of Psychology, 67, 289 –314.


Chandra Sripada:

It Is Hard To Explain Weakness Of Will If You Reject The Faculty Of Will

A good place to start in thinking about weakness of will is “Davidson’s theater”, which appears in section two of “How Is Weakness of the Will Possible?”. There we find two contrasting views of mind. In one view, ascribed to Aristotle, Aquinas, and Hare, there are two actors on the stage, Reason and Passion, and in times of temptation, they duke it out. In the other view, ascribed to Plato and Butler, a third actor appears, the Will, and it is up to him “to decide who wins the battle. If the Will is strong, he gives the palm to reason; if he is weak, he may allow pleasure or passion the upper hand.”

In contemporary philosophy, the “Trio” view, i.e., the one that endorses a robust faculty of Will, has receded and now occupies the margins. This is unfortunate because it has two major advantages. First, it is the only view that has the resources to fully capture the phenomenon of weakness of will. Second, the Trio view is true—as a matter of empirical fact, we do have a robust faculty of Will as a central part of our psychology. In the rest of this post, I’ll briefly elaborate on this pair of points.

1. Capturing Weakness of Will Requires a Doubly Independent Will

Start with the claim that only a Will-based psychology can fully capture weakness of will. To see this point more clearly, I need to fill in some features of how the Will relates to the other two parts of the mind. (Switching away from Davidson’s terminology, I refer to these parts as Judgment and Appetite.)

On my version of the Trio view, the faculty of Will needs to be doubly independent. First, it needs to be independent of Judgment. It will listen to Judgment and typically follow it, but importantly it needn’t. This is the decisional aspect of the Will. Second, the Will needs to be independent of Appetite. This means the Will can produce decisions about what to do that diverge from what one’s appetites push one to do. But, of course, it doesn’t stop there: We don’t form a decision contrary to our appetites and then simply hope and pray that our appetites will fall into line, like a fan of Manchester United hoping their team will score. We actively and effortfully do things—specifically we perform attentional and inhibitional mental actions—that block or otherwise modulate our appetites. This is the regulative aspect of the Will.

Suppose we have a Will on the stage that is doubly independent from the other two actors in the preceding ways. Then new plot options open up. Consider this sequence, with which we are all, perhaps unfortunately, intimately familiar: Appetite pushes us towards one course of action. Judgment recommends another. The Will, in its regulative role, can control Appetite, but it doesn’t. Instead, in its decisional role, it “gives the palm” to Appetite. Notice in this scenario, the Will is not overrun by Appetite, which is what happens in compulsion. Rather, the person does what Appetite says even when Judgment opposes because the Will—being weak—decides to let Appetite win. When all of the preceding features are in place, we have a paradigm case of weakness of will, and the Trio model is needed to get all these features in place.

2. The Cognitive Control Research Program Provides an Empirical Vindication of the Faculty of Will

A second big advantage of the Trio view is that it is true. What I mean is that there is strong evidence that we do in fact have a faculty that is doubly independent in just the way the Will is supposed to be. In contemporary cognitive science, this faculty goes by the name cognitive control and it is examined in thousands of studies in neuroscience, psychology, and psychiatry.

Much research into cognitive control consists of careful examination of “conflict tasks”, such as the Stroop task, Go/No Go task, and others. These tasks involve performing certain distinctive intentional mental actions (what I call “control actions”) in order to regulate a variety of spontaneous mental states, including: actions that arise habitually, attention that is grabbed by stimuli, memory items that are automatically retrieved, and thought contents that spontaneously pop into mind. Elsewhere, I fill in a key link: I give a comprehensive account of how self-control directed at complex states such as appetitive desires is related to cognitive control (I argue self-control consists in performing extended skilled sequences of cognitive control; see here).
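To make the structure of these conflict tasks concrete, here is a minimal sketch of a Stroop-like trial (a hypothetical illustration with invented strengths, not a model drawn from any of these studies): an automatic pathway (word reading) competes with a controlled pathway (color naming), and a control action supplies top-down bias to the task-relevant response.

```python
# Toy sketch of a Stroop-like conflict trial. The pathway strengths
# are invented for illustration, not taken from any actual study.

def stroop_trial(word, ink_color, control_strength):
    """Return the response produced when an automatic word-reading
    pathway competes with a controlled color-naming pathway."""
    AUTOMATIC_STRENGTH = 1.0   # word reading: habitual, strong by default
    CONTROLLED_STRENGTH = 0.4  # color naming: weak without top-down support

    activation = {word: AUTOMATIC_STRENGTH}
    # The control action ("attend to the ink color") adds top-down
    # bias to the task-relevant response.
    activation[ink_color] = (activation.get(ink_color, 0.0)
                             + CONTROLLED_STRENGTH + control_strength)

    return max(activation, key=activation.get)

# Incongruent trial: the word "red" printed in green ink.
print(stroop_trial("red", "green", control_strength=0.0))  # 'red' (error)
print(stroop_trial("red", "green", control_strength=1.0))  # 'green' (correct)
```

On the incongruent trial, the habitual response wins unless the control action supplies enough top-down bias; that bias-supplying work is exactly the regulative role described above.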

So much for the regulative aspect of cognitive control. Turn now to the decisional aspect, which in recent years has really taken off as a focus of research. There is a growing consensus that subpersonal cost/benefit calculation plays a central role in decisions to exercise cognitive control (which I call "executive decisions"). One version of the view, called Expected Value of Control (EVC) theory, proposes that a set of cognitive routines—sometimes modeled in terms of temporal difference reinforcement learning—continuously estimates the expected value of exercising cognitive control relative to its expected costs (see Shenhav, Botvinick, and Cohen 2016). Importantly, the idea is not that the person consciously and intentionally sets out to figure out the expected value of control, but rather that these sophisticated calculations occur non-deliberatively "under the hood" and are the basis for one's executive decisions.
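To give the flavor of the proposal, here is a minimal sketch of an EVC-style executive decision, assuming a simple running-average estimate of control's payoff in the spirit of temporal difference learning. The class, the update rule, and all numerical values are illustrative stand-ins, not the actual equations of EVC theory.

```python
# Toy EVC-style controller: exercise control only when its estimated
# payoff exceeds its cost. The class, values, and learning rate are
# stand-ins invented for illustration.

class EVCController:
    def __init__(self, learning_rate=0.1):
        self.alpha = learning_rate
        self.estimated_payoff = 0.0  # running estimate of control's value

    def decide(self, control_cost):
        """Executive decision: exert control iff the expected value of
        control (estimated payoff minus cost) is positive."""
        return self.estimated_payoff - control_cost > 0

    def learn(self, observed_payoff):
        """TD-flavored update: nudge the estimate toward the payoff
        actually obtained on a control episode."""
        prediction_error = observed_payoff - self.estimated_payoff
        self.estimated_payoff += self.alpha * prediction_error

controller = EVCController()
for payoff in [1.0, 0.8, 1.2]:  # payoffs from three past control episodes
    controller.learn(payoff)
print(controller.decide(control_cost=0.2))  # True: estimated payoff > cost
```

The point of the sketch is only structural: the decision to exert control falls out of a calculation that runs under the hood, with no conscious deliberation anywhere in the loop.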

EVC theory is important for an account of weakness of will because it helps us see how one's practical judgments can come apart from one's decisions. The overall picture is that our minds house at least two quite different ways of rationally aggregating disparate bits of information relevant to the question of what to do. There is a conscious, serial form of aggregation that leads to practical judgment. There is also a form that involves EVC calculation, where the underlying calculations occur outside awareness, which leads to executive decisions. I claim that this picture delivers a moderate form of externalism: Executive decisions are tied to practical judgments (because the latter typically serve as informational inputs to the former). But the two are ultimately rooted in different aggregation routines, and thus they can diverge. When they do, the agent acts in a weak-willed way.
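As a toy illustration of this divergence, the hypothetical sketch below aggregates the same considerations in two ways: a judgment routine that weighs only the merits of each option, and an executive routine that also subtracts the control cost of pursuing an option against Appetite. The options, merits, and costs are invented.

```python
# Toy illustration: the same considerations, aggregated two ways.
# Options, merits, and control costs are all invented.

options = {
    "work on the paper": {"merit": 0.9, "control_cost": 0.7},
    "watch television":  {"merit": 0.3, "control_cost": 0.0},
}

def practical_judgment(opts):
    # Conscious, serial aggregation: weigh the merits alone.
    return max(opts, key=lambda o: opts[o]["merit"])

def executive_decision(opts):
    # EVC-style aggregation: merits discounted by the cost of the
    # control needed to pursue the option against Appetite.
    return max(opts, key=lambda o: opts[o]["merit"] - opts[o]["control_cost"])

print(practical_judgment(options))  # 'work on the paper'
print(executive_decision(options))  # 'watch television': the routines diverge
```

Both routines aggregate the same inputs in a rationally defensible way, yet they come apart; on the picture just sketched, that gap is where weak-willed action lives.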

3. Conclusion – Why Philosophy Needs to Resurrect the Will

Looking across moral psychology more generally, it is not just accounts of weakness of will that have suffered due to the abandonment of the faculty of Will. Without a Will, it is hard to make sense of strength of will, i.e., our ability to perform intentional actions to defeat our strongest desires (indeed, some philosophers claim that doing this is impossible!). It is also challenging to explain freedom of will, the freedom a person has to decide what to do irrespective of what their desires dictate. The same applies to doxastic will, our ability to intentionally regulate the formation of belief (if we have this kind of ability, which I think we do to a certain extent, then a moderate form of doxastic voluntarism follows). The faculty of Will, it seems, is absolutely everywhere in the kinds of problems that most interest philosophers. We might make more progress if we heeded the empirical evidence and gave the faculty of Will a central place in our theorizing.


Zina B. Ward:

Many recent attempts to naturalize weakness of will have pursued what we might call “the partitioning approach”: using distinct mental systems to characterize the phenomenon and explain what’s going on in weak-willed subjects. Although the approach has been around since antiquity, what’s different about recent partitioning accounts is that they rely on mental divisions that are empirically grounded (Levy 2011; Sripada 2010, 2014; Haas 2018). These “new partitioners” draw on research in psychology and the decision sciences to characterize and defend the mental systems they appeal to –– a great improvement over the ad hoc partitioning of Davidson (1982). Even so, I have a few reservations about the partitioning approach, which I’d like to raise here in the hope of sparking a discussion about its strengths and limitations. 

But first, the background: Levy (2011) and Sripada (2014) both offer dual-process accounts of weakness of will. Sripada distinguishes between an emotional motivational system, which produces emotional action-desires, and a deliberative motivational system, which produces practical desires. One's action is akratic when one's emotional action-desires and practical desires compete for control of action and the former win out. [1] Sripada situates his account within the dual-process framework, explaining that emotional action-desires are part of System 1 (S1) and practical desires part of System 2 (S2). Levy's (2011) account is similar, although he understands weakness of will as the unreasonable revision of intentions rather than akrasia (Holton 1999). According to Levy, weakness of will is the result of ego depletion, or the "depletion of an energy source preferentially drawn on by self-control mechanisms" (Levy 2011, 136). Ego depletion causes the weak-willed subject to switch from S2 to S1. Haas (2018) builds her account of weakness of will around the "Multi-System Model of the Mind" (MSM) developed in reinforcement learning and neuroeconomics, which claims there are at least three decision systems that affect behavior: the deliberative system, the hardwired system, and the habitual system. Haas suggests that akrasia occurs when the hardwired or habitual systems are allocated for action selection instead of the deliberative system.
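To fix ideas about how a three-system architecture could yield akrasia, here is a hypothetical sketch. The system names follow the MSM, but the proposals, reliability scores, and arbitration rule are invented for illustration and should not be read as Haas's own model.

```python
# Toy sketch of MSM-style action selection: three systems each propose
# an action, and an arbiter allocates control to one of them. System
# names follow the MSM; everything else here is invented.

def deliberative(state):
    return "decline dessert"        # model-based evaluation of outcomes

def habitual(state):
    return "order the usual"        # cached stimulus-response value

def hardwired(state):
    return "grab the sweet thing"   # fixed, evolutionarily old response

def select_action(state, reliabilities):
    """Allocate action selection to whichever system the arbiter
    currently scores as most reliable in this state."""
    systems = {"deliberative": deliberative,
               "habitual": habitual,
               "hardwired": hardwired}
    chosen = max(systems, key=lambda name: reliabilities[name])
    return chosen, systems[chosen](state)

# Akrasia, on this picture: the habitual system is allocated control
# even though the deliberative system recommends otherwise.
print(select_action("dessert menu",
                    {"deliberative": 0.5, "habitual": 0.8, "hardwired": 0.3}))
# ('habitual', 'order the usual')
```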

The partitioning approach to weakness of will is only as solid as the partitions it relies on. This is where my worries start. Dual-process theory has been criticized in recent years as a “convenient and seductive myth” (Melnikoff & Bargh 2018; see also Osman 2004, Keren & Schul 2009). First, it is subject to what Melnikoff & Bargh dub “the alignment problem”: different features associated with Type 1 and Type 2 processing are not aligned in the way dual-process theories claim. It is not the case, for example, that processing that is automatic is always intuitive, fast, unconscious, and efficient. There is far more mix-and-matching of the features associated with T1 and T2 processing than dual-process theories predict. Moreover, many of those features come in degrees. Processing can be faster or slower, more or less efficient, and so on. Although dichotomizing continuous properties can be scientifically useful, it also leads to a loss of information and raises the worry that the dividing line is arbitrary. Dual-process theories’ dichotomizations seem especially inappropriate given that there are different “subdimensions” of individual processing features (Melnikoff & Bargh 2018). For example, there are at least two senses in which a process can be controllable: it can be stopped after being triggered, or its output can be altered. These types of controllability dissociate. A process may be controllable in the first sense but not the second, or vice versa. This suggests that one cannot use controllability (tout court) to characterize T2 processes. 

For these reasons, I doubt that the dual-process framework provides a good foundation for naturalistic accounts of weakness of will. Dual-process theories are a product of our tendency to construct binary oppositions (Newell 1973) and our commitment to a reason-versus-passion dichotomy. They don’t reflect our true psychological architecture. There is also now reason to be skeptical of ego depletion, on which Levy’s (2011) account is based. Ego depletion has been caught up in the replication crisis, with several recent attempts at preregistered replication failing to find the effect (Hagger et al. 2016, Carruth et al. 2018 [preprint]). 

Even if the mental divisions underlying partitioning accounts of weakness of will are valid, however, a further question arises: do the accounts accurately describe cases of weakness of will? 

The new partitioners suggest that weakness of will occurs when the system responsible for reasoning and judgment is overridden by some other system (for Sripada and Levy, when S1 wins out over S2; for Haas, “any case in which… the hardwired or habitual systems [are allocated] for action selection” [Haas 2018, 15]). These characterizations of weakness of will strike me as too broad. There are situations in which S2 or the deliberative system does not control behavior but there is no weakness of will. For example, imagine that I intend to drive to a friend’s birthday party but find myself inadvertently heading to work when I leave my house because I’m driving on autopilot. My S1 or habitual system is controlling my behavior. But I am being absent-minded, not weak-willed. This shows, I think, that partitioning accounts don’t provide an accurate naturalistic characterization of weakness of will (even if they do explain what’s happening in weak-willed subjects).

Descriptions of weakness of will that rely on mental partitions also have misleading implications about how weakness can be avoided. They seem to suggest that the way to overcome akrasia or the revision of one’s resolutions is to bolster the system responsible for reasoning and judgment: to ensure that S2 or the deliberative system is in control of action. In fact, there is empirical evidence that recruiting non-deliberative capacities is one of the best ways to ensure follow-through on one’s resolutions and action in accordance with one’s judgments. This prevents second thoughts and rationalizations from getting in the way (Holton 2009). If you want to eat vegetarian, for example, you shouldn’t deliberate intensely about all the options when you go out to eat; you should try to limit your deliberation by looking only at the vegetarian dishes on the menu. Gollwitzer & Bargh (2005) give many such examples of “automaticity in goal pursuit,” showing that automatic motivations and implementation intentions can help people achieve their goals. The idea that weakness of will can often be avoided by minimizing the use of one’s reasoning capacities is an insight not naturally accommodated by partitioning accounts.

There is one last potential problem with the partitioning approach that I want to raise for discussion, but not endorse. Philosophers find weakness of will to be "puzzling, defective, or dubiously intelligible" (Stroud 2014). Some have suggested that any account of weakness of will must take care not to deny its irrational character. Let's call this the "Irrationality Constraint" (IC): an account of weakness of will should not render it either rational or arational (Henden 2004). It's plausible that partitioning accounts violate IC. By characterizing weakness of will as the product of causal interactions between separate mental systems, they threaten to dissolve the puzzle of weakness of will completely. This is a consequence that Haas seems to embrace, arguing that her account shows that weakness of will is "not a breakdown in the system… It is a simple byproduct of everyday decision making" (Haas 2018, 17). This leads to a deeper question about the status of IC within a naturalistic framework: Is it possible for naturalists to preserve the "puzzle" of weakness of will? And should we even try? These questions remain open, in my view, for partitioners and non-partitioners alike.

Notes

[1] N.B. Sripada (2010, 2014) is interested in willpower and self-control, and only secondarily concerned with weakness of will as the failure of those capacities.

References

Carruth, Nicholas, Jairo Ramos, and Akira Miyake. 2018. “Does Willpower Mindset Really Moderate the Ego-Depletion Effect? A Preregistered Direct Replication of Job, Dweck, and Walton’s (2010) Study 1,” posted October 26. https://psyarxiv.com/8cqpk/. 

Davidson, Donald. 1982. “Paradoxes of Irrationality.” In Problems of Rationality, 169–88. Oxford: Oxford University Press. 

Gollwitzer, Peter M., and John A. Bargh. 2005. “Automaticity in Goal Pursuit.” In Handbook of Competence and Motivation, 624–46. New York: Guilford Press.

Haas, Julia. 2018. “An Empirical Solution to the Puzzle of Weakness of Will.” Synthese, 1–21. 

Hagger, Martin S., Nikos L. D. Chatzisarantis, Hugo Alberts, Calvin Octavianus Anggono, Cédric Batailler, Angela R. Birt, Ralf Brand, et al. 2016. “A Multilab Preregistered Replication of the Ego-Depletion Effect.” Perspectives on Psychological Science 11 (4): 546–73.

Henden, Edmund. 2004. “Weakness of Will and Divisions of the Mind.” European Journal of Philosophy 12 (2): 199–213.

Holton, Richard. 1999. “Intention and Weakness of Will.” The Journal of Philosophy 96 (5): 241–62.

———. 2009. Willing, Wanting, Waiting. Oxford: Oxford University Press.

Keren, Gideon, and Yaacov Schul. 2009. “Two Is Not Always Better Than One: A Critical Evaluation of Two-System Theories.” Perspectives on Psychological Science 4 (6): 533–50.

Levy, Neil. 2011. “Resisting ‘Weakness of the Will.’” Philosophy and Phenomenological Research 82 (1): 134–55.

Melnikoff, David E., and John A. Bargh. 2018. “The Mythical Number Two.” Trends in Cognitive Sciences 22 (4): 280–93.

Newell, Allen. 1973. “You Can’t Play 20 Questions with Nature and Win: Projective Comments on the Papers of This Symposium.” In Visual Information Processing. New York, NY: Academic Press.

Osman, Magda. 2004. “An Evaluation of Dual-Process Theories of Reasoning.” Psychonomic Bulletin & Review 11 (6): 988–1010.

Sripada, Chandra Sekhar. 2010. “Philosophical Questions About the Nature of Willpower.” Philosophy Compass 5 (9): 793–805.

Sripada, Chandra. 2014. “How Is Willpower Possible? The Puzzle of Synchronic Self-Control and the Divided Mind.” Noûs 48 (1): 41–74.

Stroud, Sarah. 2014. “Weakness of Will.” In The Stanford Encyclopedia of Philosophy (Spring 2014 Edition), edited by Edward N. Zalta. https://plato.stanford.edu/entries/weakness-will/.


