Applied Behavior Analysis Blog
Tact Extensions
Tact extensions allow us to label stimuli in various ways!
Tact Extensions: There Are Many Ways To Label Something!
Once a tact has been established, the tact response can occur under novel stimulus conditions through the process of stimulus generalization. In other words, there are many ways to label one stimulus. Skinner (1957) identifies four different levels of generalization based on the degree to which a novel stimulus shares the relevant or defining features of the original stimulus. These four types of tact extensions are generic, metonymical, solistic, and metaphorical.
Generic Tact Extension: A tact evoked by a novel stimulus that shares all of the relevant or defining features associated with the original stimulus. For generic tact extensions, it's easiest to think of stimulus generalization. An example would be seeing a Dunkin' Donuts and then seeing a Starbucks and labeling them both "coffee shop." This is one response for both stimuli.
Metonymical Tact Extension: A tact evoked by a novel stimulus that shares none of the relevant features of the original stimulus configuration, but some irrelevant yet related feature has acquired stimulus control. Some examples include seeing an empty cup and saying "water," or seeing a black cat and later calling a black kettle "cat." In these scenarios the client has made an association between the two stimuli even though they do not share any of the relevant features of the original stimulus.
Solistic Tact Extension: A verbal response evoked by a stimulus property that is only indirectly related to the proper tact relation. An example of this would be the use of "slang." For example, if you bump into someone and say, "Oh goodness, I am so sorry," they might respond "you good" instead of "that's okay" or "no problem." Another example would be saying something along the lines of "I ain't got no time for that." This is slang used in place of the standard form to relay the same meaning.
Metaphorical Tact Extension: A tact evoked by a novel stimulus that shares some, but not all, of the relevant features of the original stimulus. Essentially, this means using metaphors. Some examples would be saying a test was "as easy as pie" or that "time is money."
Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.
Restitutional Overcorrection
Restitutional overcorrection is a positive punishment procedure in which, contingent on problem behavior, the client is required to restore the environment to its original state and then make it even better than it was before.
Restitutional Overcorrection: A Positive Punishment Procedure
As avid learners of ABA, we quickly find out that there are a lot of terms to remember, and on top of that it's important to be able to associate each term with a bigger category. Well, here's one of them… restitutional overcorrection falls under the category of positive punishment. This is because the punishing agent is adding something in order to decrease the future frequency of problem behavior.
In restitutional overcorrection, contingent on the problem behavior, the learner is required to repair the damage that behavior caused. The client first restores the environment to its original state, and is then required to engage in additional behavior that improves the environment to a state better than it was before the problem behavior occurred. I always remember restitutional overcorrection as a person not being able to "rest" because they are required to do MORE than just restore the environment.
An example of restitutional overcorrection: a kiddo comes home, runs right up the stairs, and tracks mud all through the house. His parents now have him clean up the mud, THEN wax all the floors and polish the windows. In this scenario, the only thing the kiddo was responsible for was tracking mud, but he was required to bring the environment not only back to its original state but to an even better state than it was before.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.
EO vs. AO
Establishing Operations vs. Abolishing Operations
Establishing operations are motivating operations that momentarily increase the effectiveness of some stimulus, object, or event as a reinforcer. For example, food deprivation establishes food as an effective reinforcer. Abolishing operations do the opposite: they are motivating operations that momentarily decrease the effectiveness of some stimulus, object, or event as a reinforcer. For example, food satiation makes food ineffective as a reinforcer for the moment. In other words, when you're full, you're not going to find food very motivating.
For example: a glass of water sitting on Mrs. Miller's desk serves as an SD, signaling that reinforcement (a drink) is available if she chooses to take it. As she keeps lecturing to her students, she begins to get parched. This state of water deprivation is an EO: it momentarily increases the value of water as a reinforcer and evokes the behavior of picking up the glass and drinking. Once she is no longer thirsty (satiated on water), satiation acts as an AO: it momentarily decreases the value of water as a reinforcer and abates drinking, so she puts the glass down and it simply sits on the desk as an SD again.
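If it helps to see the value-altering effect laid out mechanically, here is a minimal Python sketch. The function, the 8-hour scale, and the cutoff are all hypothetical illustrations (not anything from Cooper et al.), meant only to show that an EO pushes a reinforcer's momentary value up while an AO pushes it down.

```python
# Minimal sketch of the value-altering effect of motivating operations.
# The numbers and thresholds are made-up illustrations, not clinical values.

def reinforcer_effectiveness(hours_since_last_drink: float) -> float:
    """Return a 0-1 'momentary value' for water as a reinforcer.

    More deprivation (EO-like) -> higher momentary value;
    recent access / satiation (AO-like) -> lower momentary value.
    """
    return min(hours_since_last_drink / 8.0, 1.0)  # caps at fully deprived

for hours in (0.0, 2.0, 8.0):
    value = reinforcer_effectiveness(hours)
    label = "AO-like (satiated)" if value < 0.25 else "EO-like (deprived)"
    print(f"{hours:>4} h since last drink -> value {value:.2f} ({label})")
```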
Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.
Sequence Effects vs. Multiple Treatment Interference
Sequence effects occur when the effects of a previous condition carry over and affect responding in the following condition, whereas multiple treatment interference occurs when one treatment's effects are confounded by another treatment being presented in the same study.
Overview: Sequence effects occur when the effects of an intervention in one condition carry over into the next condition. They are typically a concern in multiple treatment reversal designs and B-A-B reversal designs. Sequence effects can skew the data in the following condition because the data will not accurately depict what is really happening. To reduce them, you would need to continue taking data until the sequence effects subside. The experimental design used to minimize sequence effects is the alternating treatments design, in which the treatments are rapidly alternated within the same phase. It is important to note that minimizing sequence effects is an advantage of the alternating treatments design, whereas a disadvantage of the design is multiple treatment interference. This is always a concern with alternating treatments designs because of the unnatural nature of switching between multiple treatments.

Multiple treatment interference is when the effects of one treatment on a subject's behavior are confounded by the influence of another treatment administered in the same study. In other words, one treatment affects the other as the treatments are alternated: the interference comes not from a previous condition but from the process of alternating or switching between the treatments.
The main difference: As stated above, sequence effects occur when the effects of an intervention carry over and affect the next condition, whereas with multiple treatment interference the effects of one treatment on a subject's behavior are confounded (think: confused) by the influence of another treatment. In simpler terms, sequence effects mean a condition is affected by the condition that came before it, and multiple treatment interference means the treatments are affected by interference between one another. With multiple treatment interference, the treatments interfere with each other, making it difficult to see which intervention is the most effective. Furthermore, sequence effects are most often discussed in the context of reversal designs, while multiple treatment interference is the classic concern with alternating treatments designs.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.
Automatic Reinforcement
Automatic reinforcement is reinforcement that is not socially mediated by others.
Automatic Reinforcement: Auto = Self!
Automatic reinforcement refers to reinforcement that occurs independent of the social mediation of others. Response products that function as automatic reinforcement often take the form of a naturally produced sensory consequence: something that sounds, looks, tastes, smells, or feels good, or where the movement itself is good (Rincover, 1981; Cooper, Heron, & Heward, 2019). An example would be scratching an insect bite to relieve the itchy sensation. With automatic reinforcement, the person is able to reinforce their own behavior. Some examples: scratching an itch, cracking knuckles, watching the lights go on and off, eating your favorite cookies, etc.
If another person does not play a role in the function of the behavior, the behavior is automatically reinforced. However, if another person does play a role in the function of the behavior, this is considered socially mediated reinforcement. For example, if the function of little Kimmy's behavior is to seek her mother's attention and her mother gives it to her, this would be socially mediated reinforcement; but if the function of little Kimmy's behavior is to get access to a cookie simply because she loves how it tastes, and she retrieves it herself, this would be automatic reinforcement.
Another example of automatic reinforcement would be turning on the radio yourself as opposed to asking your pal to do it for you. If you had a pal turn the radio on for you, this would be socially mediated positive reinforcement instead of automatic reinforcement.
Practitioners can also suspect that a behavior is automatically reinforced when it persists in the absence of any known reinforcer, for example, instances of self-stimulatory behavior or stereotypy. This is because these behaviors can produce sensory stimuli that function as reinforcement… automatic reinforcement, that is!
Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.
Vaughan, M. E., & Michael, J. L. (1982). Automatic Reinforcement: An Important but Ignored Concept. Behaviorism, 10(2), 217–227.
Functional Analysis
Functional analysis is a part of the FBA process in which a practitioner manipulates antecedents and consequences in the client's environment to determine the function of challenging behavior.
Functional Analysis: Find The Function Of The Challenging Behavior!
A functional analysis is the part of the Functional Behavior Assessment (FBA) process that confirms the hypothesis about the function of the challenging behavior. It is the most demanding part of the FBA and the one that yields the most information, because it consists of experimentally manipulating the environment to test for the function of the challenging behavior. Antecedents and consequences representing those in the client's natural environment are arranged so that their separate effects on problem behavior can be observed and measured (Cooper, Heron, & Heward, 2019). To do this, the practitioner arranges a series of test conditions, typically four: attention, escape, alone, and play (the control condition). By determining which condition produces the highest rate of the behavior, practitioners can be confident about which consequence is maintaining the challenging behavior (a simple sketch of this comparison follows the conditions below).
Running the Conditions:
During an FA, the practitioner is testing one condition at a time.
Attention Condition: In the attention condition, only the practitioner and the client are in the room. The practitioner withholds attention (for example, by appearing busy) and gives the client brief attention only when the client engages in the challenging behavior. This allows the practitioner to determine whether the challenging behavior is maintained by access to attention.
Escape Condition: In the escape condition, only the practitioner and the client are in the room. The practitioner presents task demands; when problem behavior occurs, the demands are briefly removed. This allows the practitioner to determine whether the challenging behavior is maintained by escape from demands.
Play Condition: The client is permitted to play, with attention and preferred items freely available and no demands placed on the client. This is the control condition, in which problem behavior is expected to be low because reinforcement is freely available.
Alone Condition: The client is left alone in the room (the practitioner observes without interacting) and is given no attention, demands, or play materials. This ensures that any behavior observed is not being reinforced through the social mediation of others; behavior that persists here suggests automatic reinforcement.
If problem behavior occurs frequently in all conditions, or varies unsystematically across conditions, responding is considered undifferentiated: the results are inconclusive, and the function of the problem behavior is either automatic reinforcement or cannot be determined from that analysis.
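To make the "highest rate wins" comparison concrete, here is a minimal Python sketch. The session rates, the 2x margin, and the likely_function helper are all hypothetical illustrations; they are not from Cooper et al. or Hanley et al., and real functional analyses rely on visual inspection of graphed data rather than a numeric rule.

```python
# Hypothetical FA summary: responses per minute in each condition.
# The numbers are made up for illustration only.
rates = {"attention": 0.4, "escape": 3.6, "alone": 0.3, "play": 0.2}

def likely_function(rates: dict[str, float], margin: float = 2.0) -> str:
    """Return the condition with the clearly highest rate, else 'undifferentiated'.

    'margin' is an arbitrary cutoff for this sketch, not a clinical criterion.
    """
    ordered = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
    (top_cond, top_rate), (_, runner_up) = ordered[0], ordered[1]
    if top_rate >= margin * runner_up:
        return top_cond
    return "undifferentiated"

print(likely_function(rates))  # -> 'escape' with these made-up numbers
```

With these made-up numbers the escape condition stands out, which would point toward escape from demands as the likely function.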
Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.
Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36(2), 147–185.
Compound Schedules of Reinforcement
Compound schedules of reinforcement are schedules of reinforcement that combine two or more elements of continuous reinforcement, the four intermittent schedules of reinforcement, differential reinforcement of various rates of responding, and extinction.
Compound Schedules of Reinforcement: Defined and Applied
In Applied Behavior Analysis, practitioners can combine two or more basic schedules of reinforcement to form compound schedules of reinforcement. The elements of these schedules can include continuous reinforcement, the intermittent schedules of reinforcement, differential reinforcement of various rates of responding, and extinction. It is important to note that the elements of a compound schedule can occur simultaneously or successively, and with or without a discriminative stimulus (SD).
There are various types of compound schedules of reinforcement; continue reading below to find out more:
Multiple Schedule of Reinforcement: This is when two or more schedules of reinforcement for one behavior are each presented with a different discriminative stimulus. For example, a third-grade kiddo, Jake, was working on his multiplication facts. When he worked with his math teacher he was required to get 12/20 multiplication facts correct to receive reinforcement, but when he worked with his math tutor he had to get 17/20 correct to receive reinforcement. Therefore, the schedule of reinforcement depended on which person he was working with (the SD): he could earn reinforcement on an FR 12 or an FR 17 schedule based on which SD was present.
Mixed Schedule of Reinforcement: This is when two or more schedules of reinforcement for one behavior are each presented without any discriminative stimulus. The reinforcement requirements therefore alternate in a way the client cannot predict, which helps keep the behavior occurring at a high rate. For example, Leslie was working on eating her vegetables with the BCBA, Thomas. Leslie sometimes received reinforcement for eating a spoonful of vegetables, and sometimes received reinforcement for taking 5 bites of her vegetables. She does not know which schedule of reinforcement is in effect at any given time, so her behavior continues to occur at a high rate. A brief sketch contrasting multiple and mixed schedules follows below.
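Here is a minimal Python sketch of that contrast. The FR values echo the Jake example above; everything else (the run_component helper, the printed messages) is a hypothetical illustration rather than anything from Cooper et al. The point is simply that the component schedules are identical and only the presence or absence of an SD changes.

```python
import random

# Component schedules from the Jake example: FR 12 with the teacher, FR 17 with the tutor.
components = {"teacher": 12, "tutor": 17}

def run_component(correct_responses: int, person_present: str, signaled: bool) -> bool:
    """Return True if reinforcement is delivered under the component in effect.

    signaled=True  -> multiple schedule: the person present acts as the SD.
    signaled=False -> mixed schedule: same components, but no SD, so the
                      learner cannot tell which requirement applies.
    """
    requirement = components[person_present]
    if signaled:
        print(f"SD present: working with the {person_present} (FR {requirement}).")
    else:
        print("No SD: some FR requirement is in effect, but the learner doesn't know which.")
    return correct_responses >= requirement

# Multiple schedule: Jake sees who he is working with.
print(run_component(correct_responses=12, person_present="teacher", signaled=True))

# Mixed schedule: the component is chosen without any signal to Jake.
person = random.choice(list(components))
print(run_component(correct_responses=12, person_present=person, signaled=False))
```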
Chained Schedule of Reinforcement: This compound schedule of reinforcement has two or more basic schedule requirements that occur successively, with a discriminative stimulus correlated with each schedule. The components always occur in a specific order, and completion of the first behavior expectation serves as a discriminative stimulus for the next behavior expectation, and so on. For example, when my recipe box gets delivered to my house every Tuesday, I follow the recipe card (the SD), placing one ingredient in the pot after the next in the specific order the recipe card shows. In addition, I complete this chain in about 20-30 minutes.
Tandem Schedule of Reinforcement: This compound schedule of reinforcement works exactly like the chained schedule; however, there is no discriminative stimulus correlated with each component. For example, the following week I received my recipe box and this time they forgot to include the recipe card. I still add the ingredients to the pot one step after another to cook the meal, but nothing signals which step or schedule requirement is in effect at any moment. In addition, I complete this recipe in about 20-25 minutes. The trick with tandem schedules of reinforcement is that the behaviors still occur in succession, just as in a chain; what is missing is the SD signaling each component.
Concurrent Schedule of Reinforcement: This compound schedule of reinforcement consists of two or more schedules of reinforcement, each with a correlated discriminative stimulus, operating independently and simultaneously for two or more behaviors. Concurrent schedules of reinforcement give the client a choice, and that choice is essentially governed by the matching law: "behavior goes where reinforcement flows." In other words, responding is allocated toward the schedule associated with the stronger reinforcement. For example, if I offer my client a half hour of video game playing if he sits with me in the lunchroom, or an hour of video game playing if he socializes and sits with his peers in the lunchroom (the terminal behavior), my client is going to choose to socialize and sit with his peers (even if this is not his preferred activity) because he wants to engage in the behavior that produces the stronger reinforcer (1 hour of video games vs. a half hour). A sketch of how the matching law describes this allocation appears after this list.
Conjunctive Schedule of Reinforcement: This compound schedule of reinforcement is when reinforcement follows the completion of two or more schedules of reinforcement that are in effect simultaneously; both (or all) requirements must be met before reinforcement is delivered. For example, little Nancy must work on her math homework for five minutes and get 10 questions correct in order to receive reinforcement.
Alternative Schedule of Reinforcement: This compound schedule of reinforcement is when reinforcement follows the completion of either schedule. The schedule consists of two or more simultaneously available component schedules, and the client receives reinforcement as soon as they reach the criterion for either one. For example, a client of mine is currently working on an alternative schedule where he can either work quietly in his seat for five minutes or complete five math problems. It does not matter which schedule he meets; whichever requirement is completed first produces the reinforcement.
Adjunctive Behaviors (schedule-induced behaviors): Behaviors that emerge as a side effect of schedules of reinforcement, occurring during periods when reinforcement is unlikely to be delivered. When a kiddo is waiting to be reinforced, he or she fills the time with another, irrelevant behavior, such as doodling on a pad or popping bubble gum. These are considered time-filling or schedule-induced behaviors.
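To put a number on the concurrent-schedule example above, here is a minimal Python sketch of the proportional matching law, B1 / (B1 + B2) = R1 / (R1 + R2). The 60- and 30-minute values come from the lunchroom example; treating them directly as reinforcement rates, and the predicted_allocation helper itself, are simplifying assumptions for illustration only.

```python
# Proportional matching law: B1 / (B1 + B2) = R1 / (R1 + R2)
# Hypothetical reinforcement values from the lunchroom example:
# 60 min of video games for sitting with peers vs. 30 min for sitting with me.
r_peers, r_with_me = 60.0, 30.0

def predicted_allocation(r1: float, r2: float) -> float:
    """Proportion of responding predicted to go to option 1 under strict matching."""
    return r1 / (r1 + r2)

share_peers = predicted_allocation(r_peers, r_with_me)
print(f"Predicted share of behavior allocated to sitting with peers: {share_peers:.0%}")
# -> 67%: more behavior flows to the richer schedule, though not exclusively.
```

Strict matching predicts proportional allocation rather than an all-or-nothing choice; in practice, reinforcer quality, delay, and response effort also shift where behavior goes.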
Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.
Component Analysis
A component analysis is any experiment designed to identify the active elements of a treatment condition, the relative contributions of different variables in a treatment package, and/or the necessary and sufficient components of an intervention.
Component Analysis: Explained
A component analysis comes into play when a behavior intervention consists of multiple components (a treatment or behavioral package) and the practitioner manipulates each component to see which one is most effective for the client. A component analysis is an experimental design used to identify the active elements of a treatment package, the relative contributions of different components in a treatment package, and/or the necessity and sufficiency of treatment components (Cooper, Heron, & Heward, 2019).
Essentially, a component analysis is when a practitioner analyzes a treatment package to determine which components are most effective by examining which ones most efficiently affect the dependent variable. A component analysis attempts to determine which part of an independent variable is responsible for behavior change. There's one golden rule: change only one variable at a time.
There are two methods for conducting component analyses: an add-in component analysis and a drop-out component analysis (a brief sketch of the drop-out logic follows the two definitions below):
Add-in Component Analysis: Components are assessed individually or in combination before the complete treatment package is presented. This method can identify sufficient components.
Drop-out Component Analysis: The experimenter presents the complete treatment package and then systematically removes components. If the treatment's effectiveness wanes when a component is removed, the experimenter has identified a necessary component.
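Here is a minimal Python sketch of that drop-out logic. The package components, the effect numbers, and the drop_tolerance cutoff are all hypothetical; a real component analysis relies on repeated measurement within an experimental design, not a single number per arrangement.

```python
# Hypothetical drop-out component analysis.
# 'effect' values stand in for measured behavior change under each arrangement;
# in practice these would come from repeated sessions, not single numbers.
full_package = {"token economy", "response cost", "self-monitoring"}
measured_effect = {
    frozenset(full_package): 0.90,
    frozenset(full_package - {"token economy"}): 0.40,    # effect drops -> necessary
    frozenset(full_package - {"response cost"}): 0.88,    # effect holds -> not necessary
    frozenset(full_package - {"self-monitoring"}): 0.85,  # effect holds -> not necessary
}

def necessary_components(effects: dict, package: set, drop_tolerance: float = 0.15) -> set:
    """Flag components whose removal makes the package's effect wane (drop-out logic)."""
    baseline = effects[frozenset(package)]
    return {
        c for c in package
        if baseline - effects[frozenset(package - {c})] > drop_tolerance
    }

print(necessary_components(measured_effect, full_package))  # -> {'token economy'}
```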
Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.