An Addiction Science Network Resource


Reprinted from R.A. Wise (1987), Intravenous drug self-administration: A special case of positive reinforcement. In M.A. Bozarth (Ed.), Methods of assessing the reinforcing properties of abused drugs (pp. 117-141). New York: Springer-Verlag.
 

Chapter 6

Intravenous Drug Self-Administration:
A Special Case of Positive Reinforcement
 

Roy A. Wise

Center for Studies in Behavioral Neurobiology
Department of Psychology
Concordia University
Montreal, Quebec, Canada H3G 1M8


Abstract
Much has been made of parallels between drug reinforcement and food reinforcement. In several important ways, however, the two differ. Unlike food reinforcement, drug reinforcement has rapid and direct effects in the central nervous system. Where two classes of response—operant and consummatory—are required for food reinforcement, only one—having some of the properties of an operant and some of the properties of a consummatory response—is required for drug reinforcement. Where satiety is delayed by many minutes after food reward, it is immediate with drug reinforcement. These differences must be taken into account when interpreting drug reinforcement studies; in particular, they have important implications for interpreting changes in response rate and hourly drug intake.

 

Introduction

The field of behavioral pharmacology has focused attention over the last two decades on parallels between the behavior supported by drug reinforcement and that supported by more natural reinforcers (Griffiths, Brady, & Bradford, 1979; Johanson, 1978; Kelleher & Goldberg, 1975; Schuster & Johanson, 1981; Schuster & Thompson, 1969). Intravenous drug reinforcement can establish lever-pressing habits similar to the lever pressing established by food pellets, the key-pecking established by kernels of grain, and the coin insertion and lever-pulling established by gambling and arcade devices.

In general, each of the well known characteristics of various schedules of food reinforcement (Ferster & Skinner, 1957) can also be demonstrated with the intravenous drug self-administration paradigm (Johanson, 1978; Spealman & Goldberg, 1978). Habit acquisition is most rapid when one injection is given for each response (FR-1); habit extinction is most protracted when training involves one injection for a varied number of multiple responses. Responding is most regular when reinforcement is given on variable interval (VI) or variable ratio (VR) schedules; response rate varies predictably when reinforcement is given on fixed interval (FI) or fixed ratio (FR) schedules. Important advances in our thinking about drug abuse and in our understanding of its underlying mechanisms have come from exploring these and other parallels between drug reinforcement and food reinforcement.
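The contingencies just listed are simple enough to state formally. The following minimal sketch (hypothetical code with arbitrary parameters, offered only as an illustration of the schedule logic) implements an FR-n and an FI-t schedule as decision rules applied to a stream of timed responses:

```python
# Illustrative sketch of two reinforcement schedules; the function names
# and parameter values are hypothetical, not from the chapter.

def fixed_ratio(n):
    """Reinforce every nth response (FR-n); FR-1 reinforces every response."""
    count = 0
    def schedule(time_of_response):
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # deliver one injection or pellet
        return False
    return schedule

def fixed_interval(t):
    """Reinforce the first response emitted after t seconds have elapsed (FI-t)."""
    available_at = t
    def schedule(time_of_response):
        nonlocal available_at
        if time_of_response >= available_at:
            available_at = time_of_response + t
            return True
        return False
    return schedule

# A steady stream of responses, one per second, for one minute:
responses = list(range(1, 61))

fr5 = fixed_ratio(5)
fi15 = fixed_interval(15)

fr5_rewards = sum(fr5(t) for t in responses)    # reinforcements track responses
fi15_rewards = sum(fi15(t) for t in responses)  # reinforcements track elapsed time
```

Under a steady response stream, the FR rule yields a reinforcement count proportional to responses emitted, while the FI rule yields a count fixed by elapsed time regardless of effort; this difference in contingency is what produces the characteristic response patterns noted above.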

Just as the early years of drug reinforcement research involved extensive exploration of the parallels between drugs and "natural" reinforcers, so did much of the early work in physiological psychology explore parallels between brain stimulation reinforcement and natural reinforcers. In this case, however, the first impression was that brain stimulation reinforcement was anomalous. Much was made at first of the differences between behavior supported by brain stimulation reinforcement and that supported by food reinforcement. In the case of brain stimulation, partial reinforcement had very weak effects; animals stopped working under partial reinforcement conditions at much higher reinforcement densities than are required to sustain robust lever-pressing for food or water (Deutsch, 1963; Seward, Uyeda, & Olds, 1959). Unlike habits learned for food reinforcement, habits established under partial reinforcement with brain stimulation extinguished more rapidly than did habits established under continuous reinforcement (Sidman, Brady, Conrad, & Schulman, 1955). Massed practice was better than spaced practice in establishing reliable alley running for brain stimulation (Seward, Uyeda, & Olds, 1960). For a time it appeared that brain stimulation did not obey the laws of reinforcement as derived from food reward studies.

Closer analysis revealed that these anomalies were due to fairly obvious differences between brain stimulation reinforcement and food reinforcement paradigms. When steps were taken to make the two paradigms more comparable, the behaviors supported by the two classes of reinforcer became more comparable as well (Trowill, Panksepp, & Gandelman, 1969). One feature that seems to distinguish food reinforcement is that the animal makes two types of response for it—instrumental or operant responses and consummatory or respondent acts. Skinner’s (1938) behaviorism draws attention to the operant response, but it turns out (as was appreciated by Skinner, 1935) that the consummatory response is important as well. With brain stimulation, the animal doesn’t have to eat the reinforcer but rather has only to earn it (Gibson, Reid, Sakai, & Porter, 1965). When animals are required to earn stimulation by lever-pressing but are further required to "consume" it by licking a dipper to close a circuit which causes current delivery, partial reinforcement and spaced practice become more effective, as is the case with food reinforcement (Gibson et al., 1965).

One of the important consequences of the consummatory response is that it causes a delay between the performance of the operant response and the receipt of its consequence—the reinforcement. When brain stimulation is delayed, the efficacy of partial reinforcement is improved and self-stimulation more closely resembles lever-pressing for food (Gibson et al., 1965; McIntyre & Wright, 1965; Pliskoff, Wright, & Hawkins, 1965). Conversely, when the delay of food reinforcement is decreased (by injecting it directly on the tongue), the efficacy of partial reinforcement is decreased and lever-pressing for food more closely resembles self-stimulation (Gibson et al., 1965).

There are other aspects of brain stimulation reinforcement that contribute further to differences between lever-pressing for food and lever-pressing for stimulation. There is seemingly no deprivation effect in the case of brain stimulation reinforcement; rate of response for brain stimulation reinforcement does not vary as a function of how many hours it has been since the last period of stimulation (Olds, 1956). There is no "hunger" for brain stimulation reinforcement in this sense. Nonetheless, lever-pressing for brain stimulation is potentiated by food deprivation (Hodos & Valenstein, 1960; Hoebel & Teitelbaum, 1962; Olds, 1958a), which is present in most studies of food reinforcement. When animals are reinforced with palatable food under conditions of low hunger, rapid extinction is seen just as is the case with brain stimulation reinforcement (Panksepp & Trowill, 1967); thus it may be that conditions of drive as well as conditions of reinforcement account for at least some of the "anomalies" in brain stimulation reinforcement studies.

A final difference seems unavoidable, and it, too, contributes to differences between responding for brain stimulation and responding for food: there is little evidence for satiation in the case of brain stimulation reinforcement (Olds, 1958b). While physiological psychologists have found ways to model the effects of hunger with focal brain stimulation (Mendelson, 1966; Wise, 1974), they have, as yet, found no way to mimic satiety such that lever-pressing for brain stimulation would undergo extinction from its own consequences, as is the case with food reinforcement (Morgan, 1974).

Thus experimental and physiological psychologists have arrived at an insight that seems obvious on reflection: There are both similarities and differences between brain stimulation reinforcement and other reinforcers, and there are important lessons to be learned from each. Just as there are both similarities and differences between brain stimulation reinforcement and food reinforcement, so are there both similarities and differences between drug reinforcement and food reinforcement. However, in the case of drug reinforcement, it is the similarities that have received early attention. The present chapter turns to consideration of the differences. In many ways drug reinforcement is more similar to brain stimulation reinforcement than to food reinforcement, and detailed consideration of response chaining, of central delivery, of delay of reinforcement, and of hunger and satiety may prove to be as important for the drug specialist as for the brain stimulation specialist.

Special Features of Intravenous Drug Reinforcement

Intravenous drug reinforcement shares with brain stimulation reinforcement the facts that it is delivered without a traditional consummatory response, that it interfaces centrally rather than peripherally with the neural mechanism of reinforcement, that it can be varied with quantitative but not qualitative precision, that its central effects are felt with very little delay after the instrumental response, and that, at least in many cases, response rate is not predictable from hours of deprivation. Unlike brain stimulation reinforcement, drug reinforcement does produce satiation; the satiation provided by intravenous drug reinforcers still differs in important ways, however, from the satiation produced by food and water reinforcement. Thus there are several special features that distinguish intravenous drug reinforcement from food or water reinforcement, and each merits careful consideration.

Instrumental-Consummatory Response Chaining

Rats working for water reinforcement typically press a lever to activate a dipper and then lick the dipper to obtain their reinforcement. With food reinforcement the consummatory response is chewing and swallowing. The first response or response series—the "earning" of the reinforcement—is the instrumental or operant response; the ingesting of the reinforcement is the consummatory response. In the case of food and water, the consummatory response is the consumption of the reinforcement, and while these words have a similar root, consummatory in this context is used to designate consummation, not consumption (Woodworth, 1918, p. 40). Intromission and pup retrieval are consummatory responses just like licking, chewing and swallowing. The defining property of a consummatory response is that it terminates or "consummates" a series of instrumental acts. In a chain of goal-directed responses, the consummatory response is the final act in the chain—the one which constitutes achieving the goal. Consummatory acts are usually biologically primitive and species-typical acts; the final acts in an ethologist’s fixed action patterns (Moltz, 1965) are good examples.

In the case of lever-pressing for brain stimulation reinforcement or drug reinforcement, the distinction between consummatory and instrumental responses is blurred. The response that earns the reinforcement—usually lever-pressing—is the only response that is required of the animal. In this sense lever-pressing is a consummatory response; it ends the sequence of locomotion, of postural adjustment, and of limb or head movement that results in reinforcement and produces, for a time, what has been called a "satisfying state of affairs" (Thorndike, 1911). The lever-pressing response, in the case of brain stimulation reinforcement, shares with biting and swallowing, in the case of food reinforcement, the fact that it consummates a sequence of instrumental acts. On the other hand, the lever-pressing response differs in a number of ways from chewing and swallowing and is, by the definition of the operant psychologist (Skinner, 1935), the same in its essential features as the lever-pressing instrumental response in the food reinforcement situation. Can lever-pressing be considered a consummatory response in the drug reinforcement paradigm and an instrumental response in the food reinforcement paradigm?

There are arguments for rejecting the notion that lever-pressing for drug can be considered an ordinary case of a consummatory act. It lacks the biologically primitive quality of traditional consummatory acts. It is not species-typical, and it does not promote individual or species survival. These are not defining criteria of a consummatory response (Woodworth, 1918), but they should nonetheless be considered as having some significance in the analysis of motivated behavior (Glickman & Schiff, 1967). There may be something very special about the neural mechanisms of stereotyped response patterns which have a substantial genetic or early learning component to their topography. One reason for suspecting a fundamental difference between the neural organization of instrumental and consummatory behaviors is that, while neuroleptic drugs disrupt both types of response, the consummatory responses are much more resistant to this disruption than are the instrumental ones (Tombaugh, Tombaugh, & Anisman, 1979; Wise, 1982).

If good parallels are to be expected between lever-pressing for drug and lever-pressing for food, perhaps a lesson should be taken from the physiological psychologists who required their animals to earn brain stimulation by lever-pressing and then required them to "ingest" the stimulation by licking a dipper (Gibson et al., 1965) or by pressing a second lever (Hawkins & Pliskoff, 1964). This chaining of lever-pressing to a second response would most closely approximate the food reinforcement paradigm, making lever-pressing an unambiguous case of an instrumental response.

Delay of Reinforcement

One of the reasons to consider adding a second response to follow lever-pressing in the drug reinforcement paradigm is that such an addition would increase the delay of reinforcement. It is well known that the effectiveness of reinforcement depends on the degree of delay after the instrumental response (Holder, Marx, Holder, & Collier, 1957; Logan, 1952; Peterson, 1956). Immediate positive reinforcers seem to exert stronger control over behavior, at least until the frustration of extinction or partial reinforcement conditions is encountered (Crum, Brown, & Bitterman, 1951; Marx, McCoy, & Tombaugh, 1965). Delay of punishment, on the other hand, seems at least subjectively to increase its impact. Temporal factors are clearly important in behavioral control. In the case of brain stimulation reinforcement, it appears necessary to insert a delay between lever-pressing and the delivery of stimulation if the resulting behavior is to resemble that typical of food and water reinforcement (Gibson et al., 1965; Pliskoff et al., 1965).

In the case of brain stimulation reinforcement, a delay may be particularly important since the switch closure produced by lever-pressing causes almost immediate delivery of stimulation to the brain. Since stimulation is "delivered" centrally, presumably directly in the forebrain circuitry of the reinforcement mechanism (Olds, 1958b), the full benefit of the reinforcement is likely to be felt within milliseconds. The only tangible delay between lever-pressing and the impact of reinforcement should be the time it takes the nerve impulse to travel from the electrode tip to the critical site of the reinforcing event in the brain—presumably a trip of a few millimeters and at most a few synapses (Wise & Bozarth, 1984).

By contrast, food reinforcement is likely to have its significant central effects after a much longer delay, even when delivered directly to the tongue. The food must be dissolved in saliva and perhaps repositioned in the mouth before it reaches the relevant taste buds; it must then depolarize the receptors and generate a nerve impulse; the impulse must then travel several millimeters and cross perhaps several synapses before information of the reinforcing event reaches the central structures directly activated in the case of brain stimulation reinforcement. If food reinforcement and brain stimulation reinforcement do ultimately reach the same critical diencephalic mechanism, as physiological psychologists generally believe (Glickman & Schiff, 1967; Olds, 1958b; Wise, Spindler, & Legault, 1978), food reinforcement must do so after a delay that is many times (perhaps orders of magnitude) longer than the delay of brain stimulation reinforcement. This argument rests on the assumption that it is largely the sensory impact of food that is reinforcing (Pfaffman, 1960); if, as once widely held, it were the post-ingestional reinforcing consequences of food that were most critical for control of behavior, then the delay of reinforcement would be much longer—many orders of magnitude longer for food reinforcement.

The delay of intravenous drug reinforcement is difficult to determine. Different drug injection systems deliver drug at different speeds; the drug is usually introduced through the jugular vein, mixes with blood in the heart, and is routed through the lungs before proceeding to the brain. There it must cross membranes and diffuse into synaptic spaces. Nerve impulses from the tongue may well reach diencephalic structures as quickly as does blood-borne drug, and this may account for the fact that behavioral pharmacologists have not yet found it necessary to examine the importance of the delay of reinforcement or of the response chaining factors that have been studied in relation to brain stimulation reinforcement; drug reinforcement and food reinforcement may turn out to have quite similar delays.

On the other hand, animals do learn about the post-ingestional caloric load of the substances they ingest (Epstein & Teitelbaum, 1962; Le Magnen, 1969), and information regarding that caloric load must reach the brain after very long delays during the digestion of food and its conversion to active metabolites. Whereas sensory information regarding food reaches the brain reasonably quickly, the ingested food itself, unlike injected drug, reaches the brain after considerable delays. Consideration of these delays may become important when comparisons are made between "regulation" of drug intake and "regulation" of food or fluid intake.

Quality of Reinforcement

The fact that drug reaches the brain largely (though not always completely) unchanged from its state in the syringe and activates central receptors in the membranes of neurons of reinforcement circuitry is likely to distinguish drug reinforcement from food reinforcement in an even more fundamental way than simply the speed of its detection. Food reinforcement is said to vary in quality as well as quantity. Drug reinforcement would appear to offer no analogue for what have been termed variations in quality of food reinforcement. In order to understand the implications of this apparent difference, it is necessary to consider carefully just what has been meant by variations in reinforcement quality.

Quality of reinforcement has been defined variously over the years. Usually, differences in quality are inferred from differences in preference for different reinforcers of the same class (different foods or water at different temperatures). Bolles (1975) points out, however, that a true shift in the quality of reinforcement would be a shift from one class (say, food) to another (water). This is not, however, the kind of shift that modern motivational theorists usually discuss under the topic "quality of reinforcement." It is generally food reinforcement that is discussed under this heading, and it is usually variations of the concentration of sweetener that are treated as variations in quality of reinforcement (Beck, 1978; Bolles, 1975).

While studies of this sort were originally considered to be studies of the quantity of reinforcement (e.g., Dufort & Kimble, 1956; Guttman, 1953; Young & Shuford, 1955), they have been reinterpreted as studies of quality of reinforcement (Marx, 1969; Schaeffer & Hanna, 1966; see also Bolles, 1975, p. 425, note 7) on the basis of the fact that it is the sweetness of the substance and not its caloric value that determines preference and operant response rate. Saccharin, which has no caloric value, has varying reinforcing impact as a function of concentration (Collier, 1962; Collier & Myers, 1961; Sheffield & Roby, 1950), just as do glucose and sucrose (Guttman, 1953; Pfaffman, 1960). Moreover, glucose and sucrose are reinforcing in proportion to their relative sweetnesses at various concentrations rather than in proportion to their relative caloric loads (Guttman, 1954). Modern motivation theory defines differences in the sensory impact of food reward as differences in quality and labels differences in nutritional value as differences in quantity (Beck, 1978; Bolles, 1975).

Because of the historical importance of drive reduction theories of reinforcement, it has been important to contrast the nutritional value of food with its sensory value (Pfaffman, 1960). It is unfortunate that the labels for this distinction became "quantity" and "quality," however, since sweetness is also readily measured on quantitative scales. It is not strictly accurate to discuss taste factors as qualitative rather than quantitative. Studies of taste quality have used concentration as a metric, and concentration is certainly a quantitative variable; indeed, the early studies comparing sugar solutions of different concentrations recognized it as such (Young & Shuford, 1955), identifying concentration with magnitude (Collier & Marx, 1959) or with amount (Dufort & Kimble, 1956) of reinforcement. Modern motivational theory might have done better to distinguish sensory and caloric value with labels other than quality and quantity; concentration of sugar solutions is no less quantitative than calorie counts, and while calorie counts may measure something of relevance, it is questionable that they measure reinforcing impact per se.

It is the fact that food reinforcement is detected peripherally which allowed this modern distinction between quality and quantity to arise. If a simple glucose solution were the reinforcing substance, varying its concentration should logically be treated as varying the quantity, not quality, of reinforcement. The amount (concentration) of a sweetener at the taste receptor determines its sensory impact (Pfaffman, 1960). What if some more complex food substance were sweetened with an additive? Should a sweet 45 mg food pellet be considered to differ qualitatively or quantitatively from a bland 45 mg food pellet? The answer depends on what we define as the food component of the reinforcer. For example, if the 45 mg pellet were composed mostly of cellulose and sweetened with glucose, the amount of food substance would be determined by the amount of glucose (since there is no food value in cellulose). If a pellet of grain were sweetened with saccharin, the grain would represent the food; the sweetener is biologically inert. Here, the caloric value of the grain would probably be treated as quantity of food, and the concentration of the saccharin would probably be treated like the determinant of food quality. Similarly, the volume of water is probably to be treated as the quantity of water reinforcement, and its temperature, despite the fact that it is measured in degrees, is probably to be treated like a quality variable.

These conventions are clearly not well thought out in the reinforcement literature. They are based on the food or fluid value of the reinforcer rather than on its reinforcing impact. They presume the "amount" of reinforcement to be proportional to the biological utility of the goal object. The quantity of food reinforcement is conceded, in this view, to reflect its caloric value; the quantity of water reinforcement is conceded to reflect its hydrational value. These concessions cannot be logically justified. The fact that reinforcement accrues primarily from the sensory impact of the reinforcer means that the true quantity of reinforcement is at best only a correlate of its biological value; biological value is not necessarily well reflected in the sensory impact of a reinforcer. Water reinforcements of different temperature have different impact on behavior (Carlisle, 1977; Gold, Kapatos, Oxford, Prowse, & Quackenbush, 1974; Ramsauer, Freed, & Mendelson, 1974), though they have equal hydrational value. This means that temperature (degree of oral cooling), not volume, is the proper quantitative measure for the reinforcing value of water. Food reinforcements sweetened with saccharin have different behavioral impact though they have equal nutritional value (Sheffield & Roby, 1950). This means that concentration of sweetener, not caloric load, is the proper quantitative measure for the reinforcing value of sweet food. It is likely that the other sensory "qualities" of food, such as concentration of salt, should also be treated as quantitative variables; the power of science comes from the ability to quantify. The critical point is that a food's behavioral impact, which is primarily sensory, determines the strength of its reinforcing effect. It is partly the fact that behavior is not governed by the biological utility of a reinforcer that has led to the treatment of quality of reinforcement as distinct from its amount or quantity. The fact that hungry animals will work for saccharin solutions having no nutritional value (Sheffield & Roby, 1950) and the fact that thirsty animals will work for oral cooling that has no hydrational value (Mendelson & Chillag, 1970) should make it clear that the biological significance of a reinforcing substance tells us little regarding the reinforcing intensity (quantity of reinforcement) associated with that substance.

However this larger problem is ultimately resolved, it remains the case that there is no real analogue in the study of intravenous drug reinforcement for variations in the sweetness of food or in the temperature of water. Varying the concentration of drug reinforcers does not so much alter their quality as it alters their intensity and duration of action. Inasmuch as the concentration of a drug is much diluted between its site of injection and its site of interaction with the nervous system, total dose rather than concentration of injected drug solution determines the drug’s concentration at the time it reaches its central site of action. The volume of vehicle originally used to dissolve the drug is of minimal importance. By contrast, variations in the concentration of a reinforcing sucrose or saccharin solution are detected at the taste buds before major dilution; both response rates and the firing rates of the taste nerve fibers are tightly correlated with the concentration of sucrose on the tongue (Pfaffman, 1960) rather than with the total dose given. The lack of a peripheral sensor for drug reinforcement thus poses a major difference between drug reinforcement and food or water reinforcement.
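The point about dilution can be made with simple arithmetic. In the sketch below (a hypothetical illustration; the function name and the volume-of-distribution figure are arbitrary assumptions, not values from the chapter), central concentration is approximated as total dose over a fixed volume of distribution, so two syringes of different concentrations but equal total dose are centrally equivalent, whereas at the taste bud the solution's concentration itself would be the effective variable:

```python
# Hypothetical illustration: after systemic dilution, total dose (mg),
# not syringe concentration (mg/ml), fixes the drug level at its central
# site of action. The 200 ml volume of distribution is an arbitrary value.

def central_level(conc_mg_per_ml, volume_ml, v_distribution_ml=200.0):
    """Approximate central concentration as total dose / volume of distribution."""
    dose_mg = conc_mg_per_ml * volume_ml
    return dose_mg / v_distribution_ml

# The same total dose (0.5 mg) delivered as a concentrated small bolus
# or as a dilute large one:
a = central_level(5.0, 0.1)   # 5 mg/ml in 0.1 ml = 0.5 mg total
b = central_level(0.5, 1.0)   # 0.5 mg/ml in 1.0 ml = 0.5 mg total
# a == b: under this approximation the syringe concentration per se
# is irrelevant centrally, unlike sucrose concentration at the tongue.
```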

It might seem reasonable to think that variations in the molecular structure of a drug should cause a qualitative change in its reinforcing efficacy, particularly if preference measures are used as an index of quality. However, the most straightforward explanation of a difference in potency between two agonists for the same receptor would be a difference in receptor affinity. The only legitimate qualitative difference between drug reinforcers would seem to be a difference in drug class; drugs are qualitatively different if they act on different anatomical systems or at different receptors. In this case the analogue for differences between two drugs of different quality (say, stimulants and narcotics) would be differences between reinforcer categories (say, food and water). Even stimulants and narcotics might be more fairly viewed as quantitatively different (different in amount) to the degree that their reinforcing effects are mediated by actions in a common neural circuit (Bozarth & Wise, 1981; Wise & Bozarth, 1984). True differences in quality in this case would be attributed to differences in "side effects" rather than to differences in reinforcing effect per se.

Quantity of Reinforcement

As should be apparent from the discussion of quality of reinforcement, quantity of reinforcement is not yet satisfactorily defined in the case of either food or water. The phrases "quantity of reinforcement" and "amount of reinforcement" do not have agreed meanings (Schaeffer & Hanna, 1966). They have been used variously by different authors to reflect the total number of food pellets in training, the number of food pellets per trial, the number of licks or pellets per reinforcement, the weight of the pellet, or the volume of the dipper or food cup. Only the weight or volume dimensions are of interest here, where quantity of reinforcement is defined, for drug reinforcement, as the dose per injection or unit dose.

Where measures of quantity of food reinforcement are poorly defined, quantity of brain stimulation reinforcement is well defined. Here, the quantity of reinforcement is varied by changing the intensity, frequency, or duration of stimulation. Over the interesting ranges of these variables, there seems to be a reasonably linear trade-off between intensity and frequency (Gallistel, Shizgal, & Yeomans, 1981); the reinforcement mechanism seems reasonably indifferent to whether stimulation intensity is increased (increasing the number of fibers activated by each pulse) or stimulation frequency is increased (increasing the number of times a fixed set of fibers fire during each stimulation train). Whether intensity is varied (with frequency held constant) or frequency is varied (with intensity held constant), increases in the amount of reinforcement cause increased rates of responding to a point. With high levels of stimulation responding approaches an asymptote; in some cases response levels can even fall with further increases in stimulation intensity. The interesting range of reinforcement parameters is the range between threshold levels and the levels producing maximal behavioral output. Within that range there is a monotonic increase in response rate associated with increases in reinforcement magnitude. It is relatively easy to define quantity of reinforcement in the case of brain stimulation because we have reasonably good evidence of the effects of brain stimulation reinforcement on the firing patterns of the neurons that constitute the reinforcement mechanism of the brain (Gallistel et al., 1981).
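The trade-off can be caricatured numerically. In the sketch below, reinforcement magnitude is modeled as the product of fibers recruited (set by intensity) and pulses delivered (set by frequency), with response rate a saturating function of magnitude; the product rule, the logistic rate function, and all constants are assumptions for illustration, not the model of Gallistel et al.:

```python
import math

# Illustrative sketch: magnitude = (fibers recruited) x (pulses per train),
# with response rate a saturating function of magnitude between threshold
# and asymptote. All functions and constants are hypothetical.

def magnitude(intensity_uA, frequency_hz, train_s=0.5, uA_per_fiber=2.0):
    fibers = intensity_uA / uA_per_fiber   # higher current -> more fibers recruited
    pulses = frequency_hz * train_s        # higher frequency -> more firings per train
    return fibers * pulses

def response_rate(mag, threshold=500.0, ceiling=60.0, slope=0.005):
    """Logistic rate: near zero below threshold, approaching an asymptote above it."""
    return ceiling / (1.0 + math.exp(-slope * (mag - threshold)))

# The trade-off: halving intensity while doubling frequency leaves
# magnitude, and hence response rate, unchanged.
m1 = magnitude(100.0, 50.0)   # 50 fibers x 25 pulses
m2 = magnitude(50.0, 100.0)   # 25 fibers x 50 pulses
```

The logistic form captures the two features noted in the text: a monotonic increase in response rate over the interesting range of magnitudes, and an asymptote at high levels of stimulation.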

A similar picture emerges with food reinforcement if we consider magnitude of reinforcement to vary with glucose, sucrose, or saccharin concentration in the tradition of the 1940s and 1950s. Rate of responding in simple FR-1 tasks increases monotonically over the interesting range of concentrations, starting at concentrations near the detection threshold and leveling off at concentrations that produce maximal response rates. At higher concentrations response rate can fall, particularly if long sessions are involved. The decrease seems largely due to post-ingestional (satiety) factors associated with sugars (Collier, 1962; Collier & Myers, 1961) and with bitter taste associated with high concentrations of saccharin (Pfaffman, 1960) rather than to variations in the factor of reinforcement magnitude itself. At the high concentrations where response rate levels off, so does the elicited rate of firing of the fibers of the taste nerve (Pfaffman, 1960); this, again, serves to delimit the range of interesting concentrations. Thus when concentration at the taste bud is viewed as a measure of the quantity of reinforcement, the relation between response rate and quantity of food reinforcement resembles the relation between response rate and quantity of brain stimulation reinforcement.

The case of drug reinforcement appears to be distinctly different. The general relation of response rate to quantity of drug reinforcement appears biphasic over the range of interesting unit doses, with an ascending limb of increasing response rates associated with lower doses and a descending limb of decreasing response rates associated with higher doses. The range of doses associated with the ascending limb does not, however, represent doses which produce reliably spaced responding. This dose range is associated with both within-subject and between-subject variability, with alternations between periods of very high response rates and periods of no responding. Thus most investigators do not report graded changes in responding across this portion of the dose range. Rather, when magnitude of reinforcement is systematically varied, most investigators vary it over the range of unit doses that are inversely related to response rate (Downs & Woods, 1974; Glick, Cox, & Crane, 1975; Pickens & Thompson, 1968; Woods & Schuster, 1968; Yokel & Pickens, 1973). When the ascending limb is represented in FR-1 studies, it is usually not determined across a graded range of effective doses. Rather, it is inferred from one or two low doses that fail to sustain responding at all and the first dose that does sustain responding—usually at the highest rate observed in the study (e.g., Glick & Cox, 1977, 1978; Glick, Cox, & Crane, 1975). In such cases the "ascending limb" is defined by drawing a line between points representing the effects of non-reinforcing doses and the point representing the first dose of the descending limb. This is really a trivial case; the function is well defined as "biphasic" only when animals are required to respond on partial reinforcement schedules.
When animals are required to make several responses for each injection, then there truly is an ascending limb to the dose-response curve (Balster & Schuster, 1973; Goldberg & Kelleher, 1976; Kelleher, 1976), though, even here, it is likely to include a small range of doses (see, e.g., Goldberg, 1973). In the case of drug reinforcement, it is the descending limb, not the ascending limb, that is most interesting. Responding in the descending limb of the function has been widely examined, even in the case of FR-1 responding. Here response rates are reliable both within and across subjects, and graded changes in response rate reliably accompany graded changes in injection dose (Yokel & Pickens, 1973, 1974). Thus, at least in the case of the widely studied and well-defined ranges of their effectiveness, increases in magnitude of food reinforcement (also brain stimulation reinforcement) cause increases in response rate, whereas increases in drug reinforcement cause decreases in response rate.

Does this signal a fundamental difference between drug reinforcement and other reinforcements, or is it merely an artifact of focusing on different portions of the function relating reinforcement magnitude to response rate? Perhaps the descending limb of the dose response curve in drug self-administration is analogous to the descending limb of the function relating reinforcement magnitude to response rate in saccharin or brain stimulation studies. This is a possibility which cannot be completely ruled out on the basis of present data. The "interesting" portions of the rate-concentration functions for food reinforcement and the rate-intensity and rate-frequency functions for brain stimulation reinforcement are anchored at the lower extreme by reinforcement thresholds. The threshold for drug reinforcement is not so readily determined, however. While some might argue that thresholds anchor the lower end of the ascending limb of the dose-response curves, my feeling is that threshold anchors the descending, not the ascending, limb. In our experience the lowest drug dose that maintains reliable responding does so at the highest observed response rate. Thus I would argue that the dose-response curve for drug self-administration is fundamentally different from the concentration-response curve for sucrose reinforcement; the one is a monotonically decreasing function across its interesting range, while the other is a monotonically increasing function across its interesting range. I would not, however, argue that this reveals a fundamental difference in the effects of magnitude of reinforcement for drug and food. Differences in the ability of drugs and foods to produce satiety are confounded with their magnitude of reinforcement in the usual paradigms, and it is satiety, more than reinforcement, that controls rate of responding for drug in most paradigms.

Satiating Bolus

In the case of brain stimulation reinforcement, the most probable explanation for the descending limb of the rate-intensity function is that stimulation spreads to adjacent systems, producing motoric artifacts or aversive side effects. It is not the case that high intensities satiate the animal. In the case of sugars and saccharin, the descending limb of the concentration-response curve has been related to bitter taste and post-ingestional factors, but, again, not to what is normally termed satiety (Collier & Myers, 1961). The descending limb of the dose-response curve for drug reinforcement, on the other hand, reflects the fact that drug, unlike food and brain stimulation, is usually given in immediately satiating doses. This point requires some elaboration.

In a typical FR-1 food reinforcement study, 22-hour deprived rats lever-press about 200 or 250 times without pausing except to eat their earned 45 mg food pellets (e.g., Wise, de Wit, Gerber, & Spindler, 1978). This number of pellets is earned and eaten in about 20 minutes, and at the end of this time the animals generally turn to grooming and then to sleeping. By contrast, in a typical FR-1 drug reinforcement study, rats lever-press much less frequently with relatively uniform pauses between responses (e.g., Yokel & Pickens, 1973). The pauses between responses can be extended by giving free infusions of drug (Pickens & Thompson, 1971), just as the intervals between meals can be extended by infusions of sugars (Nikolaidis & Rowland, 1976). Thus, "typical" food reinforcement and drug reinforcement paradigms differ critically in that each drug reinforcement is sufficient to cause a satiety period, while each food reinforcement constitutes less than 1% of a satiating meal.

This is not merely due to the fact that small amounts of food reinforcement and large amounts of drug reinforcement are given. In part, the difference in the satiating capacities of food pellets and drug injections is due to the different access that food and drug have to the brain and hormonal mechanisms that regulate satiety; in addition to the differences in delay of reinforcement already discussed, there is a difference in the delay of satiation between food reinforcement and drug reinforcement. Whereas intravenous drug reaches the mechanism of both its reinforcing and its satiating actions in seconds, ingested food reaches the peripheral site of reinforcing action in seconds but does not reach the central mechanism underlying satiety for a much longer time. Thus food which might be sufficient, after absorption and partial metabolism, to establish a substantial period of satiety has no such behavioral impact until long after additional food has usually been ingested (Davis, Gallagher, Ladlove, & Turausky, 1969). While there is a sensory component to food-related satiety (Mook, 1963), it is not analogous to the satiety produced by drug reaching and occupying its ultimate site of satiating action.

As mentioned earlier, mean rate of responding for drug varies inversely with dose per injection; the larger the injection the longer the period of satiety (Weeks & Collins, 1964; Yokel & Pickens, 1973, 1974). This is not so much an explanation of behavior as a definition of the term "satiety." Over the lower range of doses where mean response rate seems to increase with injection dose, drug is, by definition, non-satiating; animals respond immediately after each injection (or they do not respond at all).

One way to view the fact that increased magnitude of reinforcement causes increased rates of responding for brain stimulation and for food, while it causes decreased rates of responding for drugs, involves consideration of the brain mechanisms of reinforcement and the relative impact of food, stimulation, and drugs upon those mechanisms. Variations in concentration of sweet solutions or in the frequency of brain stimulation seem to alter the intensity of reinforcement. Thus animals prefer in choice tests the high currents (Hodos & Valenstein, 1962) and concentrations (Pfaffman, 1960) associated with high response rates in lever-press tasks. With food reinforcement and brain stimulation reinforcement, high response rates are associated with preferred magnitudes of reinforcement. With drug reinforcement such preferences are not necessarily seen (Yokel, this volume; but see Iglauer & Woods, 1974). Here increased magnitude of reinforcement seems not so much to alter the intensity of reinforcement as to alter its duration. The fact that animals have a good deal of opportunity to lever-press for more drugs suggests the same conclusion; if higher blood levels of drug were to increase the intensity of reinforcement, why would animals not merely respond more frequently?

It was once thought that more frequent responding might be prohibited by high-dose drug effects—either by aversive side effects or by incapacitating side effects (Wilson & Schuster, 1972). These hypotheses can now be ruled out. If high blood concentrations were aversive, then animals would prefer lower doses to higher ones, but such preferences are not seen. Rats seem to have no particular preference for low or high doses (Yokel, this volume), and monkeys prefer higher doses (Iglauer & Woods, 1974; note that preference for high doses may not reflect greater intensity of reinforcement; monkeys may prefer higher doses for their longer duration, a property rats may not have the capacity to appreciate). Thus it seems unlikely that aversive side effects limit drug intake significantly. It is also clear that motoric side effects do not constitute a limiting factor. Rats will lever-press several hundred times for brain stimulation in the interval between normal responses for drug when it is available on a concurrent schedule (Bozarth, Gerber, & Wise, 1980; Wise, Yokel, Gerber, & Hansson, 1977). It would thus appear that responding for drug is not limited actively by either aversive side effects or motoric incapacitation.

The fact that drug is given in an immediately satiating bolus while brain stimulation, food, and water are given in quanta that are not satiating (or at least not immediately satiating) seems to account for differences in the control of response rate by changes in unit dose. What, then, is the nature of this control in the unusual condition of drug reinforcement? Where an increase in the amount of brain stimulation per reinforcement leads to an increase in the number of responses per hour (Gallistel et al., 1981), an increase in the amount of drug reinforcement has the opposite effect. Rats generally compensate rather accurately for increased drug per injection, for alterations in drug isomer, or for changes in work requirements (Pickens, 1968; Pickens & Thompson, 1968; Yokel, this volume; Yokel & Pickens, 1973, 1974). Over a wide range of these variables, rats maintain a relatively constant hourly drug intake.
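The compensation described above—constant hourly intake defended across changes in unit dose—amounts to a simple inverse relation between dose per injection and response rate. The following sketch illustrates the arithmetic; the target intake and doses are hypothetical numbers chosen for illustration, not values from the studies cited.

```python
# Toy illustration: if a rat defends a roughly constant hourly drug intake,
# its response rate must vary inversely with the unit dose per injection.
TARGET_INTAKE_MG_PER_HR = 3.0  # hypothetical defended hourly intake

def responses_per_hour(unit_dose_mg):
    """Response rate required to hold hourly intake constant at the target."""
    return TARGET_INTAKE_MG_PER_HR / unit_dose_mg

for dose in (0.25, 0.5, 1.0, 2.0):  # hypothetical mg per injection
    rate = responses_per_hour(dose)
    print(f"{dose:4.2f} mg/injection -> {rate:5.1f} responses/hr "
          f"(intake {dose * rate:.1f} mg/hr)")
```

Halving the unit dose doubles the required response rate while hourly intake stays fixed—the pattern the compensation data show.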

The dynamics of hourly drug intake have been particularly well studied in the stimulant self-administration paradigm. In this case response rate is governed largely by rate of metabolism of the drugs; the mean time between responses for different doses can be predicted from the metabolic kinetics of the drugs (Yokel & Pickens, 1974). Moreover, if blood is sampled at the time of each injection, drug concentrations reliably fall to the same threshold level—0.2 mg/ml of blood in the case of d-amphetamine—regardless of whether high or low doses are being earned (Yokel & Pickens, 1974). The rate of responding is accelerated or decelerated by treatments which accelerate or decelerate amphetamine metabolism, respectively (Dougherty & Pickens, 1974). Thus the factor which appears to limit drug intake is satiety; intake is limited passively when blood concentrations exceed satiating levels.
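The satiety-threshold account can be sketched as a first-order elimination model: each injection raises the blood level above the threshold, and the next response comes when decay returns the level to threshold. The threshold value and the general logic follow the text (Yokel & Pickens, 1974); the elimination rate constant and per-injection rises below are hypothetical.

```python
import math

K_ELIM_PER_MIN = 0.023  # hypothetical first-order elimination rate constant
THRESHOLD = 0.2         # blood level at which responding resumes (units as in the text)

def inter_response_interval(unit_rise):
    """Minutes for the blood level to decay from (THRESHOLD + unit_rise)
    back to THRESHOLD under first-order kinetics: t = ln((T + d) / T) / k."""
    return math.log((THRESHOLD + unit_rise) / THRESHOLD) / K_ELIM_PER_MIN

# Larger unit doses (larger rise per injection) produce longer pauses,
# so mean response rate falls as dose per injection rises.
for rise in (0.1, 0.2, 0.4):  # hypothetical rise in blood level per injection
    print(f"rise {rise:.1f} -> pause {inter_response_interval(rise):5.1f} min")
```

The model reproduces the descending limb: the pause grows with unit dose, so response rate varies inversely with dose per injection while the trough blood level stays pinned at the threshold.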

Overview

Rate of responding for drug reinforcement is under a seemingly unique control of magnitude of reinforcement. Whereas dose per injection is, over the interesting range of doses, inversely related to response rate for drug reinforcement, intensity and frequency per stimulation train and concentration of sweet solutions are, over the interesting ranges of these parameters, directly related to response rate for brain stimulation and sweet solutions, respectively. The unique dynamics of control of response rate for drug reinforcement may derive from a number of facts which distinguish drug reinforcement from food reinforcement. The most salient of these is that rate of responding for intravenous drug reinforcement is controlled by the duration of satiating action of the drug, which is felt immediately, while rate of responding for food or brain stimulation reward is controlled by some aspect of the intensity or quality of reinforcement which occurs either in the absence of satiety (in the case of brain stimulation) or occurs before the satiating consequences of ingestion are detected (in the case of food). There is no obvious analogue for graded intensity or quality in the case of drug reinforcement, perhaps because the reinforcing event is not sensed peripherally as in the case of food (Pfaffman, 1960) or trans-synaptically as in the case of brain stimulation (Gallistel et al., 1981). Whatever the explanation, changes in rate of responding for drug must be carefully interpreted, and this will be illustrated in subsequent sections. Care must also be taken in drawing parallels between drug-reinforced responding and responding maintained by more natural reinforcers. Just as differences in response chaining and delay of reinforcement must contribute to the differences between responding for brain stimulation and responding for food, so may similar factors contribute to unappreciated differences between responding for drugs and responding for food.

Central Manipulations that Influence Response Rate

Psychomotor stimulant self-administration in the rat is marked by many characteristics also found in brain stimulation reinforcement studies (Pickens & Harris, 1968). Both intravenous stimulants and intracranial electrical stimulation can be powerfully reinforcing, dominating rat behavior even in conflict situations. Rate of responding is regular in both cases, and responding is in both cases sustained for long periods without interruption. In both cases the rate of responding is independent of previous abstinence periods, and in both cases spontaneous abstinence periods occur with unpredictable onset and duration (Pickens & Harris, 1968).

Much has been made of the fact that stimulant self-administration increases when dopaminergic synapses are blocked with pimozide, butaclamol or other neuroleptics (de Wit & Wise, 1977; Yokel & Wise, 1975, 1976). There are two reasons for the attention to this finding. First, it clearly tells us something neuroleptics are not doing; they are not rendering the animals incapable of initiating voluntary movement or of organizing complex goal-directed behavior. They are not simply causing the catalepsy that can be seen in certain testing conditions (Janssen, Dresse, Lenaerts, Niemegeers, Pinchard, Schaper, Schellekens, Van Nueten, & Verbruggen, 1968). They are not impairing the animals such that they cannot perform at their normal response levels. Similar response increases are not seen with other reinforcers such as brain stimulation (Fouriezos & Wise, 1976) or food (Wise et al., 1978a, 1978b), and in these cases the possibility of motoric impairment has been a major issue (Wise, 1982). The fact that this issue could be readily resolved in the case of stimulant self-administration was thus one important reason for the attention to neuroleptic-induced response accelerations.

The second reason that the response accelerations have received so much consideration is that they appear (with somewhat less certainty) to suggest something about the nature of what is occurring. The response accelerations parallel the effects of reinforcement reductions, and they thus suggest that neuroleptics cause such reductions in the stimulant self-administration paradigm. While this conclusion cannot be drawn with complete certainty—it is much more clear that the animals are free of motor deficits than it is that they are impaired by reinforcement deficit—nonetheless it has been generally accepted (see, e.g., Ettenberg, Bloom, Koob, & Pettit, 1982; Lyness, Friedle, & Moore, 1979; Roberts, Corcoran, & Fibiger, 1977; Roberts & Koob, 1982; Roberts, Koob, Klonoff, & Fibiger, 1980). Indeed, the view that increased (compensatory) responding must reflect a decrease in reinforcing stimulant impact has been accepted so globally that it has seemingly become the sine qua non of the paradigm. Some workers (e.g., Ettenberg et al., 1982) seem to hold that rate increases are not only a sufficient condition for inferring a decrease in the reinforcing impact of intravenous drug but that they are also a necessary condition for such a conclusion. This position will be examined more closely in the next section.

The rate increases that are sometimes seen when stimulant self-administration is challenged with neuroleptics (de Wit & Wise, 1977; Yokel & Wise, 1975, 1976) or with lesions of dopaminergic reinforcement mechanisms (Roberts et al., 1980) take two forms. With minimal challenges (low doses of neuroleptics), there is a sustained increase in drug intake which lasts in proportion to the dose of the challenge drug. Here the animal behaves as if the neuroleptic were a competitive antagonist at the critical receptor; the animal maintains a higher than normal concentration of stimulant in the blood, as would be required to displace neuroleptic molecules from dopamine receptors in the CNS. The rat normally responds for d-amphetamine whenever blood concentration falls to about 0.2 mg/ml (Yokel & Pickens, 1974); under low doses of neuroleptics, rats respond more frequently (Yokel & Wise, 1975, 1976), thus initiating responses when a higher concentration threshold is crossed. Because amphetamine is metabolized by first order kinetics—the higher the blood level the faster the metabolism—the animals must not only respond more in order to initially elevate blood concentration above 0.2 mg/ml, but they must also continue to respond faster to maintain concentration above this level.
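The competitive-antagonist account above has a quantitative consequence worth making explicit: under first-order elimination, raising the satiety threshold not only raises the defended blood level but also shortens each pause, because decay is faster (in absolute terms) at higher blood levels. A sketch, using a hypothetical elimination rate constant and per-injection rise alongside the 0.2 threshold cited in the text:

```python
import math

K_ELIM_PER_MIN = 0.023  # hypothetical first-order elimination rate constant
UNIT_RISE = 0.2         # hypothetical rise in blood level per injection

def pause_minutes(threshold):
    """Time for the blood level to decay from (threshold + UNIT_RISE)
    back down to threshold under first-order kinetics."""
    return math.log((threshold + UNIT_RISE) / threshold) / K_ELIM_PER_MIN

baseline = pause_minutes(0.2)  # normal satiety threshold (from the text)
blocked = pause_minutes(0.4)   # hypothetical raised threshold under neuroleptic

# The pause shrinks at the higher threshold, so the animal must respond
# faster just to hold its blood level above the elevated threshold.
print(f"baseline pause {baseline:.1f} min; under antagonist {blocked:.1f} min")
```

This is why, in the account above, low-dose neuroleptics produce a sustained increase in response rate rather than a one-time adjustment: the elevated defended concentration is metabolized away faster at every moment.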

When higher doses of neuroleptic are given (Yokel & Wise, 1975, 1976), or in some cases when dopamine systems are lesioned (Roberts et al., 1980), responding is biphasic. The initial phase is a period of accelerated responding; it is followed by a period of non-responding. This parallels what is seen when reinforcing injections are terminated. The interpretation is that the animal is unable to earn sufficient drug, during the period of accelerated responding, to counteract the high dose neuroleptic treatment; thus responding ceases. The early phase of accelerated responding is not simply a period when the neuroleptic is only partially absorbed (and thus acting like a low dose); since when stimulant access is delayed until after peak central neuroleptic action has been reached, the same period of initial acceleration is still seen in a significant portion of cases (Yokel & Wise, 1976).

Interpreting Monophasic Decreases in Rate of Self-Administration

Much has been made of compensatory increases in drug intake that are seen when psychomotor stimulant self-administration is challenged with drugs that impair dopaminergic function (Ettenberg et al., 1982; Pickens, Meisch, & Dougherty, 1968; Wilson & Schuster, 1972; de Wit & Wise, 1977; Yokel & Wise, 1975, 1976). It is a great help to interpretation when rate increases are observed in response to a pharmacological challenge or experimental lesion that might otherwise be suspected to impair performance capacity in some way. It is dangerous, however, to draw any firm conclusion when there is a failure to see compensatory rate increases; failure to see rate increases might be caused by any number of factors and cannot, by itself, be interpreted. This might be argued on purely logical grounds, but there are good data to illustrate the point.

Specialists are agreed that psychomotor stimulant reinforcement depends on stimulant actions at one or more sets of dopaminergic synapses in the forebrain (Baxter et al., 1974, 1976; Davis & Smith, 1975; Ettenberg et al., 1982; Lyness et al., 1979; Risner & Jones, 1976, 1980; Roberts et al., 1977, 1980; Roberts & Koob, 1982; Roberts & Zito, this volume; Spyraki, Fibiger, & Phillips, 1982; de Wit & Wise, 1977; Yokel, this volume; Yokel & Wise, 1975, 1976, 1978). The accelerated responding for amphetamine which is caused by neuroleptics is one cornerstone of this conclusion, but there are others. One critical corroborative fact is that humans report decreased amphetamine euphoria after neuroleptics (Gunne, Anggard, & Jonsson, 1972). Another is that rats no longer work for amphetamine or cocaine when dopamine systems are lesioned (Lyness et al., 1979; Roberts et al., 1977, 1980; Roberts & Koob, 1982) or when neuroleptics are injected into dopamine terminal fields in the brain (Phillips & Broekkamp, 1980). Finally, apomorphine and piribedil, selective dopamine receptor agonists, have amphetamine-like reinforcing effects of their own (Baxter et al., 1974; Davis & Smith, 1977; Yokel & Wise, 1978). Had we required that dopamine antagonism or dopaminergic lesions cause compensatory increases in stimulant intake, however, these last two lines of evidence might not have been interpreted correctly.

First, lesions of the nucleus accumbens did not, at least in the first experiments (Lyness et al., 1979; Roberts et al., 1977), cause compensatory increases in amphetamine or cocaine intake. They simply blocked acquisition of the lever-pressing habit in naive animals (Lyness et al., 1979) and caused such responding to decrease in trained animals (Roberts et al., 1977). What does the lack of compensatory increases in responding reflect? In the case of naive animals, the lack of compensatory increases obviously means nothing; responding is expected to show compensatory increases only in well-trained animals. But what about the trained animals? If one were predisposed to link response increases in one-to-one fashion with reinforcement reduction, one might conclude that the lesion failed to reduce the reinforcing impact of the drug. If, on the other hand, one did not have such a predisposition, one might reasonably infer that the lesion had perhaps blocked the reinforcing drug effect. In point of fact, Roberts, who did the initial lesion study, was tenacious enough to try variations on the paradigm until compensatory increases were demonstrated in a number of animals (Roberts et al., 1980). Response increases in these cases rule out the possibility that the lesions merely impair response capability; note that this conclusion does not require response increases in all animals under all testing conditions.

The second informative case involves apomorphine self-administration. Apomorphine is self-administered in much the same way as is amphetamine (Baxter et al., 1974; Davis & Smith, 1977; Yokel & Wise, 1978; Wise et al., 1976). Apomorphine is a selective dopaminergic agonist, so it would be surprising if its reinforcing action were mediated somewhere other than the dopaminergic synapse. If its reinforcing action were mediated at the dopaminergic synapse, then the reinforcing effects should be blocked by neuroleptics (which block post-synaptic receptors) and not by alpha-methyltyrosine (which blocks dopamine synthesis). Indeed, dopamine synthesis blockade has no effect on apomorphine self-administration (Baxter, Gluckman, & Scerni, 1976), and neuroleptics block apomorphine self-administration (Yokel & Wise, 1978). However, once again acceleration of apomorphine self-administration is not seen (Yokel & Wise, 1978). Low doses of neuroleptics (which cause accelerated amphetamine self-administration; Yokel & Wise, 1976) have no effect on apomorphine self-administration, while high doses cause responding to drop out without an early phase of accelerated responding. The reasons for the lack of accelerated responding are not clear, but no one has suggested this fact to imply that the mechanism of apomorphine reinforcement involves a non-dopaminergic action of the drug.

How, then, should one interpret the findings of Ettenberg et al. (1982) that flupenthixol did not cause sustained increases in heroin self-administration at moderate doses (which did cause increases in cocaine self-administration) and did not cause complete cessation of responding for heroin at a high dose (that did cause cessation of cocaine self-administration)? How should one interpret the fact that naltrexone, on the other hand, caused increased responding for heroin but not for cocaine? Ettenberg et al. conclude from these findings that two independent reinforcement mechanisms are activated, one by heroin and another by cocaine; they take their data to refute the view (Bozarth & Wise, 1981; Wise & Bozarth, 1984) that opiates and stimulants act at serial elements in a common reinforcement substrate. Their conclusion rests primarily on the interpretation of the effects of flupenthixol on heroin self-administration, since naltrexone would not be expected to alter cocaine intake by either theory (It acts "upstream" from the site of stimulant action in the proposed circuit.).

While the effects of naltrexone on heroin self-administration, the lack of effect of naltrexone on cocaine self-administration, and the effects of flupenthixol on cocaine self-administration are all compatible with the notion that cocaine acts downstream from the site of heroin action in a common reinforcement circuit (Wise & Bozarth, 1984), such a view demands that neuroleptics, which act at the postulated final common path, block the reinforcing effects of both drugs. Ettenberg et al. (1982) saw reduced heroin self-administration with their highest doses of flupenthixol; thus the Ettenberg et al. argument that flupenthixol fails to block the reinforcing effects of heroin rests largely on the fact that flupenthixol failed to increase heroin self-administration in the way that it increased cocaine self-administration. Is such a conclusion justified?

If it is granted that a rate increase proves a reinforcement reduction (a vulnerable assumption, but one that Ettenberg et al. seem prepared to make), it nonetheless fails to follow logically that the absence of a rate increase proves the absence of a reinforcement reduction. There are several alternative explanations of the absence of rate increases in lesioned or neuroleptic-treated animals, one of which is that flupenthixol causes (in addition to any effect on reinforcement mechanisms) response debilitation, as Ettenberg, Bloom, and Koob (1981) argue elsewhere. One might ask why flupenthixol would impair heroin self-administration at the same doses that increased cocaine self-administration; one possibility is mentioned elsewhere in the Ettenberg et al. (1982) paper: Cocaine antagonizes the effects of flupenthixol (but not naltrexone) by increasing dopamine concentrations at its receptor. Cocaine, a stimulant, would thus be expected to antagonize the sedative side effects of the neuroleptic, while heroin, a depressant, would be expected to augment them. Whether or not this particular alternative explanation is valid, it is important to remember that failure to find evidence for a hypothesis is not necessarily evidence against that hypothesis. Failure to find a smoking gun is not proof of innocence.

The question of whether reinforcement impairment is necessarily reflected in response acceleration is an empirical question, and there are data of relevance available. There are clear cases where neuroleptics fail to cause response acceleration but are nonetheless thought to attenuate intravenous drug reinforcement. The first has been mentioned; pimozide at low doses fails to accelerate responding for apomorphine, though it does, at high doses, cause apomorphine self-administration to cease (without an "extinction burst" of initial high-rate responding; Yokel & Wise, 1978). A second example is evident in Ettenberg et al.’s data; the high dose of flupenthixol caused cessation of cocaine self-administration without any "extinction burst" of accelerated responding such as is seen when pimozide, rather than flupenthixol, is used (de Wit & Wise, 1977). Ettenberg et al. (1982) do not suggest that the failure of flupenthixol to cause acceleration prior to cessation raises any question about the ability of flupenthixol to block cocaine reinforcement. A third comes from our experience with opiate antagonists; many of our animals cease responding for heroin without any response acceleration after naloxone or naltrexone (Bozarth & Wise, unpublished observations). Moreover, while Roberts et al. (1980) were able to find instances of response acceleration in their lesioned animals (when time for recovery was given before testing), they did not see such acceleration in all of their animals.

Thus it seems unwarranted to take the absence of response acceleration as evidence that neuroleptics fail to attenuate intravenous drug reinforcement. As shown by the case of apomorphine, neuroleptics can block drug reinforcement with neither sustained response increase at low neuroleptic doses nor early acceleration before extinction at high neuroleptic doses. Lesions can similarly block stimulant reinforcement without compensatory increases. The presence of compensatory increases in drug intake should probably not be treated as a sufficient condition for inferring a decrease in reinforcing impact; such increases should certainly not be treated as a necessary condition for such inferences. Monotonic decreases in drug intake—decreases without initial accelerations and without associated increases when lower doses are tested—must then be treated as inconclusive evidence by themselves. It is for this reason that our inference of decreased opiate reward under conditions of neuroleptic challenge was not based on intravenous self-administration evidence but on evidence from the conditioned place preference paradigm. It is the fact that neuroleptics block opiate-conditioned place preference (Bozarth & Wise, 1981; Spyraki, Fibiger, & Phillips, 1983), taken with the fact that neuroleptics in sufficient dose cause rats to stop lever-pressing for intravenous heroin (Bozarth & Wise, unpublished observations), that led us to believe that dopaminergic systems play a critical role in opiate reinforcement.

Interpreting Shifts in Biphasic Dose-Response Curves

Changes in drug intake following brain lesions are difficult to interpret (see Roberts & Zito, this volume). Even when changes are not monotonic—even when drug intake increases at some unit doses and decreases at others—there is no simple rule for deciding whether decreased intake means stronger or weaker reinforcement. Interpretation of lesion effects cannot rest simply on parallels with other challenges or with other reinforcers.

Glick et al. (1975) have shown that rats take less intravenous morphine after lesions of the caudate nucleus. They see a shift in the entire dose-response curve over a range of effective doses which span the descending limb of the dose effect curve and which include one low dose that sustained responding before but not after the lesions. They have interpreted these data to mean that caudate lesions "increase sensitivity to the rewarding effect of morphine" (p. 222). Here is another case where interpretation of changes in response rate is complicated. As pointed out by Glick et al., there are several viable hypotheses as to the significance of their data; one possibility suggested by consideration of the mechanism of opiate reinforcement is almost opposite to the conclusion of Glick et al.

Opiates act at receptor sites embedded in neural membranes. The action of opiates at their receptors seems complex (Barker, Macdonald, Neale, & Smith, 1978), but ultimately their effect is to enhance or to inhibit activity in neural circuits. When receptors are blocked by an opiate antagonist, increased opiate intake might be expected; such intake would increase the opiate concentration and compensate for a fixed concentration of a competitive antagonist by displacing the antagonist from the opiate receptors. One finding that lends credence to the assumption that reduced sensitivity to the reinforcing effect of morphine results in an increase in self-administration rate is that opiate antagonists do, at low and moderate doses, often cause increased responding for opiates (Ettenberg et al., 1982; Goldberg, Schuster, & Woods, 1971).

Can the opposite effect (decreased responding) be taken as reflecting the opposite condition (increased sensitivity to reinforcing opiates)? It would seem logical to assume so if the decreased responding were caused by some treatment known to produce increased receptor number or affinity. Chronic naloxone or naltrexone might be expected to cause opiate supersensitivity by increasing receptor affinity or number (Tang & Collins, 1978); release from such treatment might be expected to produce a decrease in the normal hourly intake of morphine in a self-administration paradigm. What about the case where some of the target neurons are destroyed by an electrolytic lesion? Is it reasonable to expect such a treatment to produce an analogous state? The lesion would decrease the number of available receptors by killing the cells on which the receptors are localized. How could such a manipulation cause an increase in sensitivity to morphine? Glick and his colleagues presumed (Glick & Cox, 1975; Glick et al., 1975) morphine to be a dopamine antagonist, and they suggested the interpretation that the lesion increases morphine sensitivity by decreasing the number of dopaminergic terminals where morphine must have inhibitory effects in order to produce a reinforcing state of affairs. Such an interpretation is not as straightforward as it might seem, however, since decreasing the number of neurons that morphine inhibits would not alter the critical concentration of drug at the receptors of the remaining neurons; nor would it seem likely to alter the receptor number or receptor affinity on those neurons. Thus, thinning the receptor population by lesioning the receptor substrate is not directly analogous to the effects of chronic receptor blockade. It is difficult to imagine how thinning the neural population containing the receptors for a given drug might increase the sensitivity to that drug.
It is also difficult to understand how morphine would create a reinforcing state of affairs by inhibiting dopaminergic function when amphetamine, cocaine, and apomorphine seem to create a reinforcing state of affairs by stimulating dopaminergic function (Baxter et al., 1974, 1976; Davis & Smith, 1975, 1977; Ettenberg et al., 1982; Lyness et al., 1979; Risner & Jones, 1976, 1980; Roberts & Koob, 1982; Roberts et al., 1977, 1980; de Wit & Wise, 1977; Yokel & Wise, 1975, 1976, 1978).

Is there a more attractive explanation for the decrease in response rate seen when the presumed mechanism of drug reinforcement is thinned by lesioning? One possibility comes from comparing drug reinforcement with brain stimulation reinforcement. When the number of reward fibers activated by reinforcing stimulation is decreased (the main consequence of reducing the stimulation current or of making a lesion at the tip of the stimulating electrode), animals show decreased responding for stimulation (Hawkins & Pliskoff, 1964; van Sommers & Teitelbaum, 1974). Reducing stimulation current, like lesioning the reward substrate, decreases the number of fibers activated in the reinforcement mechanism. This is believed to decrease the intensity of reinforcement, since it reduces the contribution of spatial summation at the next synapse in the circuit (Gallistel et al., 1981). One might ask why animals do not compensate by increasing their rate of responding, just as they compensate for decreased drug reinforcement in the self-administration paradigm. The answer may never be known, but in the case of brain stimulation reinforcement, as in the case of reduced concentration of a food reinforcer, they do not.

Perhaps animals take less morphine after caudate lesions because, as in the case of reduced current in the brain stimulation paradigm, lesions reduce the intensity of reinforcement. Whereas neuroleptics or reduced unit doses cause reduced reinforcement duration—which can be counteracted by taking drug more frequently—lesions reduce the intensity of drug reinforcement by reducing the number of neurons influenced by the reinforcer. Changes in the number of neurons influenced by the reinforcing drug would change the impact of the drug in a way that cannot be reversed by compensatory responding. Suppose 50% of the opiate reward mechanism were eliminated; could doubling the opiate concentration restore the full drug effect? Suppose only 5% of the opiate reward mechanism were left intact; could any amount of drug restore full drug impact? To suggest that increased frequency of injection can offset less drug per injection is to state the obvious. To suggest that increased frequency of injection and the resulting increase in drug concentration in the body can offset the effects of a competitive antagonist seems almost equally obvious. It seems quite unlikely, on the other hand, that increased frequency of injection could offset damage to the population of neurons influenced by the reinforcing drug. In a situation where there is no plausible mechanism for the ability of increased response frequency to compensate for an experimental manipulation, changes in drug intake should probably not be inferred to reflect compensatory mechanisms.
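The asymmetry in the argument above can be illustrated with a toy model (again, not from the chapter; the function and its units are hypothetical). If total drug impact scales with the fraction of the reward substrate that survives the lesion times receptor occupancy on the surviving neurons, then no dose, however large, can push impact past the ceiling set by the surviving fraction:

```python
# Illustrative sketch: a lesion caps maximal drug impact in a way that
# no increase in dose (or injection frequency) can offset. Hypothetical units.

def drug_impact(dose, k_a=1.0, fraction_intact=1.0):
    """Total impact = surviving fraction of the substrate x occupancy."""
    occupancy = dose / (dose + k_a)
    return fraction_intact * occupancy

# Intact substrate: a very large dose drives impact toward its maximum of 1.0.
intact_max = drug_impact(dose=1e6)

# After a lesion sparing half the substrate, the same huge dose cannot
# exceed a ceiling of 0.5: occupancy saturates, but half the neurons are gone.
lesioned = drug_impact(dose=1e6, fraction_intact=0.5)

print(intact_max, lesioned)
```

Under these assumptions, compensation by dose works against a competitive antagonist (which shifts occupancy) but not against a lesion (which lowers the ceiling), which is the distinction the text draws.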

Thus, in the case of caudate lesions, it seems unlikely that decreased drug intake means an increase in drug impact. Decreased drug intake reflects increased drug impact in cases where impact can be equated with duration of effective action. In a case where decreased intake cannot be attributed to increased duration of effective action, the possibility of decreased intensity of drug effect must be considered. In the case of damage to the substrate of the drug effect, decreased intensity of drug effect is much more likely than increased duration of drug effect. Thus, once again, it is clear that changes in response rate require careful interpretation. It cannot be assumed that all biphasic decreases in response rate reflect increased drug sensitivity, just as it cannot be assumed that all monophasic decreases in response rate reflect performance debilitation.

Summary and Conclusions

There are several ways in which drugs behave like other reinforcers. When a series of chained responses is required to earn drug and when the intake of drug lags appreciably behind such chained responses, responding for drug is likely to resemble responding for food. Habits learned under partial reinforcement are likely to be more difficult to break than those learned under continuous reinforcement; responding under regular reinforcement is likely to be cyclic in rate, whereas responding under irregular reinforcement is likely to be regular in rate. Secondary reinforcers are likely to sustain responding in the absence of primary reinforcement, and drug-associated cues are likely to reinstate responding after it has been extinguished.

There are other ways in which drug reinforcement differs from food reinforcement and brain stimulation reinforcement. When satiating doses of drug are given, each injection is more like a meal than like a food morsel, and the pauses between injections more closely resemble the pauses between meals than the pauses between bites. Response rate varies inversely with drug dose in this case, and antagonists of the drug in question are likely to produce response increases, though they will not always do so. When they do, the behavior sustained by drug reinforcement responds in a manner opposite to that sustained by food reinforcement or brain stimulation reinforcement. When the reinforcement mechanism is thinned by a lesion, on the other hand, the behavior of drug-reinforced animals is likely to shift in the opposite direction, now in parallel with the effects of lesions or antagonist challenge on brain stimulation reinforcement or food reinforcement.

Because the rate of drug self-administration can sometimes be increased and sometimes decreased by pharmacological or surgical intervention, one cannot determine simply from a change in response rate whether drug reinforcement has been enhanced or attenuated. Food reinforcement and brain stimulation reinforcement are each inadequate models for interpreting some challenges of drug self-administration, and neither can be applied without careful consideration. It is not prudent to draw inferences about the impact of central manipulations on drug reinforcement without considering both the similarities and the differences between drug reinforcement and other forms of reinforcement. Only with converging evidence from independent sources can changes in drug self-administration rate be safely interpreted. It seems generally more reasonable to infer a decrease in reinforcing drug impact from an increase in self-administration rate than to infer an increase in drug impact from a decrease in self-administration rate; indeed, the interpretation of decreases in response rate is a difficult undertaking regardless of the reinforcer. Parallels from one class of reinforcer to another are unlikely to be useful unless they are buttressed with very careful analysis.

References

Balster, R. L., & Schuster, C. R. (1973). Fixed-interval schedule of cocaine reinforcement: Effect of dose and infusion duration. Journal of the Experimental Analysis of Behavior, 20, 119-129.

Barker, J. L., Macdonald, R. L., Neale, J. H., & Smith, T. G. (1978). Opiate peptide modulation of amino acid responses suggests novel form of neuronal communication. Science, 199, 1451-1453.

Baxter, B. L., Gluckman, M. I., & Scerni, R. A. (1976). Apomorphine self-injection is not affected by alpha-methylparatyrosine treatment: Support for dopaminergic reward. Pharmacology Biochemistry & Behavior, 4, 611-612.

Baxter, B. L., Gluckman, M. I., Scerni, R. A., & Stein, L. (1974). Self-injection of apomorphine in the rat: Positive reinforcement by a dopamine receptor stimulant. Pharmacology Biochemistry & Behavior, 2, 387-391.

Beck, R. C. (1978). Motivation. New York: Prentice-Hall.

Bolles, R. C. (1975). Theory of motivation, 2nd Edition. New York: Harper & Row.

Bozarth, M. A., Gerber, G. J., & Wise, R. A. (1980). Intracranial self-stimulation as a technique to study the reward properties of drugs of abuse. Pharmacology Biochemistry & Behavior, 13(Suppl. 1), 245-247.

Bozarth, M. A., & Wise, R. A. (1981). Heroin reward is dependent on a dopaminergic substrate. Life Sciences, 29, 1881-1886.

Carlisle, H. J. (1977). Temperature effects on thirst: Cutaneous or oral receptors? Physiological Psychology, 5, 247-249.

Collier, G. (1962). Some properties of saccharin as a reinforcer. Journal of Experimental Psychology, 64, 184-191.

Collier, G., & Marx, M. H. (1959). Changes in performance as a function of shifts in the magnitude of reinforcement. Journal of Experimental Psychology, 57, 305-309.

Crum, J., Brown, W. L., & Bitterman, M. E. (1951). The effect of partial and delayed reinforcement on resistance to extinction. American Journal of Psychology, 64, 228-237.

Davis, J. D., Gallagher, R. J., Ladlove, R. F., & Turausky, A. J. (1969). Inhibition of food intake by a humoral factor. Journal of Comparative and Physiological Psychology, 67, 407-414.

Davis, W. M., & Smith, S. G. (1977). Catecholaminergic mechanisms of reinforcement: Direct assessment by drug self-administration. Life Sciences, 20, 483-492.

Deutsch, J. A. (1963). Learning and electrical self-stimulation of the brain. Journal of Theoretical Biology, 4, 193-214.

de Wit, H., & Wise, R. A. (1977). Blockade of cocaine reinforcement in rats with the dopamine receptor blocker pimozide, but not with the noradrenergic blockers phentolamine or phenoxybenzamine. Canadian Journal of Psychology, 31, 195-203.

Dougherty, J., & Pickens, R. (1974). Effects of phenobarbital and SKF 525A on cocaine self-administration in rats. Drug Addiction, 3, 135-143.

Downs, D. A., & Woods, J. H. (1974). Codeine- and cocaine-reinforced responding in rhesus monkeys: Effects of dose on response rates under a fixed ratio schedule. Journal of Pharmacology and Experimental Therapeutics, 191, 179-188.

Dufort, R. H., & Kimble, G. A. (1956). Changes in response strength with changes in the amount of reinforcement. Journal of Experimental Psychology, 51, 185-191.

Epstein, A. N., & Teitelbaum, P. (1962). Regulation of food intake in the absence of smell, taste, and other oropharyngeal sensations. Journal of Comparative and Physiological Psychology, 55, 753-759.

Ettenberg, A., Bloom, F. E., & Koob, G. F. (1981). Response artifact in the measurement of neuroleptic-induced anhedonia. Science, 213, 357-359.

Ettenberg, A., Bloom, F. E., Koob, G. F., & Pettit, H. O. (1982). Heroin and cocaine intravenous self-administration in rats: Mediation by separate neural systems. Psychopharmacology, 78, 204-209.

Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.

Fouriezos, G., & Wise, R. A. (1976). Pimozide-induced extinction of intracranial self-stimulation: Response patterns rule out motor or performance deficits. Brain Research, 103, 377-380.

Gallistel, C. R., Shizgal, P., & Yeomans, J. S. (1981). A portrait of the substrates for self-stimulation. Psychological Review, 88, 228-273.

Gibson, W. E., Reid, L. D., Sakai, M., & Porter, P. B. (1965). Intracranial reinforcement compared with sugar-water reinforcement. Science, 148, 1357-1358.

Glick, S. D., & Cox, R. D. (1975). Self-administration of haloperidol in rats. Life Sciences, 16, 1041-1046.

Glick, S. D., & Cox, R. D. (1977). Changes in morphine self-administration after brain-stem lesions in rats. Psychopharmacology, 52, 151-156.

Glick, S. D., & Cox, R. D. (1978). Changes in morphine self-administration after tel-diencephalic lesions in rats. Psychopharmacology, 57, 283-288.

Glick, S. D., Cox, R. D., & Crane, A. M. (1975). Changes in morphine self-administration and morphine dependence after lesions of the caudate nucleus. Psychopharmacologia, 41, 219-224.

Glickman, S. E., & Schiff, B. B. (1967). A biological theory of reinforcement. Psychological Review, 74, 81-109.

Gold, R. M., Kapatos, G., Oxford, T. W., Prowse, J., & Quackenbush, P. M. (1973). Role of water temperature in the regulation of water intake. Journal of Comparative and Physiological Psychology, 85, 52-63.

Goldberg, S. R. (1973). Comparable behavior maintained under fixed-ratio and second-order schedules of food presentation, cocaine injection or d-amphetamine injection in the squirrel monkey. Journal of Pharmacology and Experimental Therapeutics, 186, 18-30.

Goldberg, S. R., & Kelleher, R. T. (1976). Behavior controlled by scheduled injections of cocaine in squirrel and rhesus monkeys. Journal of the Experimental Analysis of Behavior, 25, 93-104.

Griffiths, R. R., Brady, J. V., & Bradford, L. D. (1979). Predicting the abuse liability of drugs with animal self-administration procedures. In T. Thompson & P. E. Dews (Eds.), Advances in behavioral pharmacology (Vol. 2, pp. 39-73). New York: Academic Press.

Gunne, L. M., Anggard, E., & Jonsson, L. E. (1972). Clinical trials with amphetamine-blocking drugs. Psychiatria, Neurologia, Neurochirurgia, 75, 225-226.

Guttman, N. (1953). Operant conditioning, extinction, and periodic reinforcement in relation to concentration of sucrose used as reinforcing agent. Journal of Experimental Psychology, 46, 213-224.

Guttman, N. (1954). Equal-reinforcement values for sucrose and glucose solutions compared with equal sweetness values. Journal of Comparative and Physiological Psychology, 47, 358-361.

Hawkins, T. D., & Pliskoff, S. S. (1964). Brain stimulation intensity, rate of self-stimulation, and reinforcement strength: An analysis through chaining. Journal of the Experimental Analysis of Behavior, 7, 285-288.

Hodos, W., & Valenstein, E. S. (1960). Motivational variables affecting the rate of behavior maintained by intracranial stimulation. Journal of Comparative and Physiological Psychology, 53, 502-508.

Hoebel, B. G., & Teitelbaum, P. (1962). Hypothalamic control of feeding and self-stimulation. Science, 135, 375-377.

Holder, W. B., Marx, M. H., Holder, E. E., & Collier, G. (1957). Response strength as a function of delay in a runway. Journal of Experimental Psychology, 53, 316-323.

Iglauer, C., & Woods, J. H. (1974). Concurrent performances: Reinforcement by different doses of intravenous cocaine in rhesus monkeys. Journal of Experimental Analysis of Behavior, 22, 179-196.

Janssen, P. A. J., Dresse, A., Lenaerts, F. M., Niemegeers, C. J. E., Pinchard, A., Schaper, W. K. A., Schellekens, K. H., Van Nueten, J. M., & Verbruggen, F. J. (1968). Pimozide, a chemically novel, highly potent and orally long-acting neuroleptic drug. Arzneimittel-Forschung, 18, 261-279.

Johanson, C. E. (1978). Drugs as reinforcers. In D. E. Blackman & D. J. Sanger (Eds.), Contemporary research in behavioral pharmacology (pp. 325-390). New York: Plenum Press.

Kelleher, R. T. (1976). Characteristics of behavior controlled by scheduled injections of drugs. Pharmacological Reviews, 27, 307-323.

Kelleher, R. T., & Goldberg, S. R. (1975). Control of drug-taking by schedules of reinforcement. Pharmacological Reviews, 27, 291-299.

Le Magnen, J. (1969). Peripheral and systemic actions of food in the caloric regulation of intake. Annals of the New York Academy of Sciences, 157, 1126-1156.

Logan, F. (1952). The role of delay of reinforcement in determining reaction potential. Journal of Experimental Psychology, 43, 393-399.

Lyness, W. H., Friedle, N. M., & Moore, K. E. (1979). Destruction of dopaminergic nerve terminals in nucleus accumbens: Effect on d-amphetamine self-administration. Pharmacology Biochemistry & Behavior, 11, 553-556.

Marx, M. H. (1969). Positive contrast in instrumental learning from qualitative shift in incentive. Psychonomic Science, 16, 254-255.

Marx, M. H., McCoy, D. F., & Tombaugh, J. W. (1965). Resistance to extinction as a function of constant delay of reinforcement. Psychonomic Science, 2, 333-334.

McIntyre, R. W., & Wright, J. E. (1965). Differences in extinction in electrical brain-stimulation under traditional procedures of reward presentation. Psychological Reports, 16, 909-913.

Mendelson, J. (1966). The role of hunger in T-maze learning for food by rats. Journal of Comparative and Physiological Psychology, 62, 341-349.

Mendelson, J., & Chillag, D. (1970). Tongue cooling: A new reward for thirsty rodents. Science, 170, 1418-1420.

Moltz, H. (1965). Contemporary instinct theory and the fixed action pattern. Psychological Review, 72, 27-47.

Mook, D. G. (1963). Oral and post-ingestional determinants of the intake of various solutions in rats with esophageal fistulas. Journal of Comparative and Physiological Psychology, 56, 645-659.

Morgan, M. (1974). Resistance to satiation. Animal Behaviour, 22, 449-466.

Nikolaidis, S., & Rowland, N. (1976). Metering of intravenous versus oral nutrients and regulation of energy balance. American Journal of Physiology, 231, 661-668.

Olds, J. (1956). Runway and maze behavior controlled by basomedial forebrain stimulation in the rat. Journal of Comparative and Physiological Psychology, 49, 507-512.

Olds, J. (1958a). Effects of hunger and male sex hormones on self-stimulation of the brain. Journal of Comparative and Physiological Psychology, 51, 320-324.

Olds, J. (1958b). Satiation effects in self-stimulation of the brain. Journal of Comparative and Physiological Psychology, 51, 675-678.

Panksepp, J., & Trowill, J. A. (1967). Intraoral self injection: Effects of delay of reinforcement on resistance to extinction and implications for self-stimulation. Psychonomic Science, 9, 405-406.

Peterson, L. R. (1956). Variable delayed reinforcement. Journal of Comparative and Physiological Psychology, 49, 232-234.

Pfaffman, C. (1960). The pleasures of sensation. Psychological Review, 67, 253-268.

Phillips, A. G., & Broekkamp, C. L. (1980). Inhibition of intravenous cocaine self-administration by rats after microinjections of spiroperidol into the nucleus accumbens. Society for Neuroscience Abstracts, 6, 105.

Pickens, R. (1968). Self-administration of stimulants by rats. International Journal of the Addictions, 3, 215-221.

Pickens, R., & Harris, W. (1968). Self-administration of d-amphetamine by rats. Psychopharmacologia, 12, 158-163.

Pickens, R., Meisch, R. A., & Dougherty, J. A. (1968). Chemical interactions in methamphetamine reinforcement. Psychological Reports, 23, 1267-1270.

Pickens, R., & Thompson, T. (1968). Cocaine-reinforced behavior in rats: Effects of reinforcement magnitude and fixed-ratio size. Journal of Pharmacology and Experimental Therapeutics, 161, 122-129.

Pickens, R., & Thompson, T. (1971). Characteristics of stimulant drug reinforcement. In T. Thompson & R. Pickens (Eds.), Stimulus properties of drugs (pp. 177-192). New York: Appleton-Century-Crofts.

Pliskoff, S. S., Wright, G. E., & Hawkins, D. T. (1965). Brain stimulation as a reinforcer: Intermittent schedules. Journal of the Experimental Analysis of Behavior, 8, 75-80.

Ramsauer, S., Freed, W. J., & Mendelson, J. (1974). Effects of water temperature on the reward value and satiating capacity of water in water-deprived rats. Behavioral Biology, 11, 381-393.

Risner, M. E., & Jones, B. E. (1976). Role of noradrenergic and dopaminergic processes in amphetamine self-administration. Pharmacology Biochemistry & Behavior, 5, 477-482.

Risner, M. E., & Jones, B. E. (1980). Intravenous administration of cocaine and norcocaine by dogs. Psychopharmacology, 71, 83-89.

Roberts, D. C. S., Corcoran, M. E., & Fibiger, H. C. (1977). On the role of ascending catecholamine systems in intravenous self-administration of cocaine. Pharmacology Biochemistry & Behavior, 6, 615-620.

Roberts, D. C. S., & Koob, G. F. (1982). Disruption of cocaine self-administration following 6-hydroxydopamine lesions of the ventral tegmental area in rats. Pharmacology Biochemistry & Behavior, 17, 901-904.

Roberts, D. C. S., Koob, G. F., Klonoff, P., & Fibiger, H. C. (1980). Extinction and recovery of cocaine self-administration following 6-OHDA lesions of the nucleus accumbens. Pharmacology Biochemistry & Behavior, 12, 781-787.

Schaeffer, R. W., & Hanna, B. (1966). Effects of quality and quantity of reinforcement upon response rate in acquisition and extinction. Psychological Reports, 18, 819-829.

Schuster, C. R. (1970). Psychological approach to opiate dependence and self-administration by laboratory animals. Federation Proceedings, 29, 2-5.

Schuster, C. R., & Balster, R. L. (1973). Self-administration of agonists. In H. W. Kosterlitz, H. O. J. Collier, & J. E. Villarreal (Eds.), Agonist and antagonist actions of narcotic analgesic drugs. Baltimore: University Park Press.

Schuster, C. R., & Johanson, C. E. (1981). An analysis of drug-seeking behavior in animals. Neuroscience & Biobehavioral Reviews, 5, 315-323.

Schuster, C. R., & Thompson, T. (1969). Self-administration of and behavioral dependence on drugs. Annual Review of Pharmacology, 9, 483-502.

Seward, J. P., Uyeda, A. A., & Olds, J. (1959). Resistance to extinction following intracranial self-stimulation. Journal of Comparative and Physiological Psychology, 52, 294-299.

Seward, J. P., Uyeda, A. A., & Olds, J. (1960). Reinforcing effect of brain stimulation on runway performance as a function of interval between trials. Journal of Comparative and Physiological Psychology, 53, 224-227.

Sheffield, F. D., & Roby, T. B. (1950). Reward value of a non-nutritive sweet taste. Journal of Comparative and Physiological Psychology, 43, 471-481.

Sidman, M., Brady, J. V., Conrad, D. G., & Schulman, A. (1955). Reward schedules and behavior maintained by intracranial self-stimulation. Science, 122, 830-831.

Skinner, B. F. (1935). The generic nature of the concepts of stimulus and response. Journal of General Psychology, 12, 40-65.

Skinner, B. F. (1938). The behavior of organisms. New York: Appleton.

van Sommers, P., & Teitelbaum, P. (1974). Spread of damage produced by electrolytic lesions in the hypothalamus. Journal of Comparative and Physiological Psychology, 86, 288-299.

Spealman, R. D., & Goldberg, S. R. (1978). Drug self-administration by laboratory animals: Control by schedules of reinforcement. Annual Review of Pharmacology and Toxicology, 18, 313-339.

Spyraki, C., Fibiger, H. C., & Phillips, A. G. (1982). Dopaminergic substrates of amphetamine-induced place preference conditioning. Brain Research, 253, 185-193.

Spyraki, C., Fibiger, H. C., & Phillips, A. G. (1983). Attenuation of heroin reward in rats by disruption of the mesolimbic dopamine system. Psychopharmacology, 79, 278-283.

Tang, A. H., & Collins, R. J. (1978). Enhanced analgesic effects of morphine after chronic administration of naloxone in the rat. European Journal of Pharmacology, 47, 473-474.

Thorndike, E. L. (1911). Animal intelligence. New York: Macmillan.

Tombaugh, T. N., Tombaugh, J., & Anisman, H. (1979). Effects of dopamine receptor blockade on alimentary behaviors: Home cage food consumption, magazine training, operant acquisition, and performance. Psychopharmacology, 66, 219-225.

Trowill, J. A., Panksepp, J., & Gandelman, R. (1969). An incentive model of rewarding brain stimulation. Psychological Review, 76, 264-281.

Weeks, J. R., & Collins, R. J. (1964). Factors affecting voluntary morphine intake in self-maintained addicted rats. Psychopharmacologia, 6, 267-279.

Wilson, M. C., & Schuster, C. R. (1972). The effects of chlorpromazine on psychomotor stimulant self-administration in the rhesus monkey. Psychopharmacologia, 26, 115-126.

Wise, R. A. (1974). Lateral hypothalamic electrical stimulation: Does it make animals hungry? Brain Research, 67, 187-209.

Wise, R. A. (1982). Neuroleptics and operant behavior: The anhedonia hypothesis. Behavioral and Brain Sciences, 5, 39-87.

Wise, R. A., & Bozarth, M. A. (1984). Brain reward circuitry: Four circuit elements "wired" in apparent series. Brain Research Bulletin, 12, 203-208.

Wise, R. A., de Wit, H., Gerber, G. J., & Spindler, J. (1978). Neuroleptic-induced "anhedonia" in rats: Pimozide blocks the reward quality of food. Science, 201, 262-264.

Wise, R. A., Spindler, J., & Legault, L. (1978). Major attenuation of food reward with performance-sparing doses of pimozide in the rat. Canadian Journal of Psychology, 32, 77-85.

Wise, R. A., Yokel, R. A., & de Wit, H. (1976). Both positive reinforcement and conditioned taste aversion from amphetamine and apomorphine in rats. Science, 191, 1273-1275.

Wise, R. A., Yokel, R. A., Gerber, G. J., & Hansson, P. (1977). Concurrent intracranial self-stimulation and intravenous amphetamine self-administration in rats. Pharmacology Biochemistry & Behavior, 7, 459-461.

Woods, J. H., & Schuster, C. R. (1968). Reinforcement properties of morphine, cocaine, and SPA as a function of unit dose. International Journal of the Addictions, 3, 231-237.

Woodworth, R. S. (1918). Dynamic psychology. New York: Columbia University Press.

Yokel, R. A., & Pickens, R. (1973). Self-administration of optical isomers of amphetamine and methylamphetamine by rats. Journal of Pharmacology and Experimental Therapeutics, 187, 27-33.

Yokel, R. A., & Pickens, R. (1974). Drug level of d- and l-amphetamine during intravenous self-administration. Psychopharmacologia, 34, 255-264.

Yokel, R. A., & Wise, R. A. (1975). Increased lever pressing for amphetamine after pimozide in rats: Implications for a dopamine theory of reward. Science, 187, 547-549.

Yokel, R. A., & Wise, R. A. (1976). Attenuation of intravenous amphetamine reinforcement by central dopaminergic blockade in rats. Psychopharmacology, 48, 311-318.

Yokel, R. A., & Wise, R. A. (1978). Amphetamine-type reinforcement by dopamine agonists in the rat. Psychopharmacology, 58, 289-296.

Young, P. T., & Shuford, E. H. (1955). Quantitative control of motivation through sucrose solutions of different concentrations. Journal of Comparative and Physiological Psychology, 48, 114-118.


©1987 Springer-Verlag (printed version)
©2000-2009 Addiction Science Network (web-enhanced version)


