Reinforcement can be given according to a schedule. When each and every correct response is reinforced, the arrangement is called a "continuous reinforcement schedule". A situation that is more common in everyday life, and which can be easily studied in a laboratory, is one in which only a certain proportion of correct responses are reinforced.
Schedules of reinforcement have been most extensively studied in the operant situation devised by Skinner. Continuous reinforcement is usually used during the initial stages of operant conditioning.
After the response has been learned, it can be maintained by a schedule of reinforcement. Schedules of reinforcement can be arranged in several ways. The delivery of reinforcement may be made contingent upon the number, rate, or pattern of responses, or it may depend upon time. There are four schedules of reinforcement, namely:
(a) Fixed Ratio Schedule;
(b) Fixed Interval Schedule;
(c) Variable Ratio Schedule; and
(d) Variable Interval Schedule.
We will discuss each of these briefly:
(a) Fixed Ratio Schedule (FR):
It is one example of a schedule in which the number of responses determines when reinforcement occurs. A certain number of responses must be made before a reinforcer is produced, i.e., there is a fixed ratio of non-reinforced responses to reinforced responses; e.g., every third (ratio of 3:1), fourth (4:1), or hundredth (100:1) response might be reinforced. Under a fixed ratio schedule a pause occurs after each reinforcement, but apart from this the rate of responding tends to be quite high and relatively steady.
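The fixed ratio rule can be sketched as a small simulation; the function name and the FR-3 value below are illustrative, not part of any standard notation:

```python
# A minimal sketch of a fixed-ratio (FR) schedule: every Nth response
# is reinforced, so reinforcement depends only on the response count.

def fixed_ratio_schedule(n_responses, ratio):
    """Return True for each response that earns reinforcement under FR-<ratio>."""
    return [(i + 1) % ratio == 0 for i in range(n_responses)]

# Under FR-3, the 3rd, 6th, and 9th responses are reinforced.
print(fixed_ratio_schedule(9, 3))
# → [False, False, True, False, False, True, False, False, True]
```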
(b) Fixed Interval Schedule (FI):
It is one in which reinforcement is given after a fixed interval of time. No reinforcement is forthcoming, no matter how many responses are made, until a certain interval of time has elapsed.
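The time-contingent rule can likewise be sketched in a few lines; the function name and the example response times are illustrative assumptions:

```python
# Sketch of a fixed-interval (FI) schedule: a response is reinforced only
# after a fixed interval has elapsed since the last reinforcement; responses
# made before that point go unreinforced.

def fixed_interval_reinforced(response_times, interval):
    """Return the times at which a response earns reinforcement under FI-<interval>."""
    reinforced = []
    last = 0.0
    for t in sorted(response_times):
        if t - last >= interval:   # the interval has elapsed: reinforce
            reinforced.append(t)
            last = t               # the interval timer restarts
    return reinforced

# With a 10-second interval, responses at t = 2, 5, 14, and 31 go unreinforced.
print(fixed_interval_reinforced([2, 5, 11, 14, 22, 31], 10))
# → [11, 22]
```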
(c) Variable Ratio Schedule (VR):
It is one in which subjects are reinforced after a variable number of responses. For instance, reinforcement might come after two responses, again after ten responses, then after six responses, and so on, with the required number of responses varying as decided by the experimenter. A variable ratio schedule can be specified in terms of the average number of responses needed for reinforcement.
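Because a VR schedule is defined by its average, it can be sketched by drawing the required response counts at random; the uniform range and seed below are illustrative assumptions:

```python
import random

# Sketch of a variable-ratio (VR) schedule: the number of responses required
# for each reinforcer varies, but averages out to the schedule's ratio value
# (e.g., VR-5 means 5 responses per reinforcer on average).

def variable_ratio_requirements(n_reinforcers, mean_ratio, rng):
    """Draw the response count required for each reinforcer (mean ~= mean_ratio)."""
    # Uniform draw from 1 .. 2*mean_ratio - 1 has mean exactly mean_ratio.
    return [rng.randint(1, 2 * mean_ratio - 1) for _ in range(n_reinforcers)]

reqs = variable_ratio_requirements(1000, 5, random.Random(0))
print(sum(reqs) / len(reqs))  # close to 5, the schedule's average ratio
```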
(d) Variable Interval Schedule (VI):
It is one in which the individual is reinforced first after one interval of time, then after a different interval, and so on, with the length of the interval varying around an average.
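A VI schedule can be sketched the same way, with the waiting times drawn at random around an average; the exponential distribution and VI-30 value here are illustrative choices, not prescribed by the source:

```python
import random

# Sketch of a variable-interval (VI) schedule: the wait before reinforcement
# becomes available varies from one reinforcer to the next, averaging the
# schedule's interval value (e.g., VI-30 means a 30-second average wait).

def variable_intervals(n, mean_interval, rng):
    """Draw n successive waiting times averaging mean_interval seconds."""
    return [rng.expovariate(1 / mean_interval) for _ in range(n)]

waits = variable_intervals(2000, 30, random.Random(42))
print(sum(waits) / len(waits))  # roughly 30, the schedule's average interval
```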
An important consequence of many schedules of positive reinforcement is that, other things being equal, extinction tends to be slower for responses reinforced on a partial schedule than for continuously reinforced ones.
In other words, if positive reinforcement is stopped, the individual continues to respond for a much longer time after scheduled reinforcement than after continuous reinforcement. In technical language, we say that scheduled reinforcement increases resistance to extinction.