Jeffrey A. Singer and Adam Omary
According to the National Institute of Mental Health, half of all American adolescents have met diagnostic criteria for at least one mental disorder at some point in their lifetime. Roughly 31 percent of American college students are now taking at least one psychiatric medication.
As we have previously written, there are perverse financial incentives for psychiatric overdiagnosis in the American healthcare system. The same is true for prescribing psychiatric medicine, with far more dire consequences.
The Dependency Problem
When a system encourages large numbers of people to start psychiatric medications, it should also be clear about what happens when they try to stop. The answer depends on the drug class—and it is often more complicated than the prescribing culture tends to acknowledge.
Benzodiazepines—Xanax, Ativan, Klonopin, Valium—are the clearest case. They are also among the most widely prescribed psychiatric drugs in the country, with roughly four million daily users in the United States, many of whom meet diagnostic criteria for substance dependence. Physical dependence develops quickly: withdrawal symptoms are possible after only one month of daily use, and patients on long-term therapy can experience a protracted, uncomfortable withdrawal syndrome lasting months. The drugs activate the same dopaminergic reward pathways as other addictive substances, and the acute risks—cognitive impairment, motor incoordination, falls, and a sharply elevated overdose risk when combined with opioids or alcohol—are well documented.
A growing literature on protracted withdrawal suggests that a meaningful subset of long-term users report symptoms—low energy, distractedness, memory loss, anxiety—that persist for a year or longer after they stop. None of this is hidden in the literature; it is hidden in the prescription pad. Benzodiazepines continue to be handed out for chronic anxiety despite clinical guidance, decades old, that they should be used briefly or not at all.
Antidepressants present a subtler problem, and one that the field has spent years trying to define out of existence. The pharmaceutical industry, with the active cooperation of much of organized psychiatry, settled on the term discontinuation syndrome to describe what happens when patients stop taking antidepressants—a phrase deliberately chosen to avoid the connotations of withdrawal. Calling it “discontinuation” instead of “withdrawal” doesn’t change what patients experience. A 2024 systematic review and meta-analysis in The Lancet Psychiatry estimated that the incidence of discontinuation symptoms ranges from 3 percent to 31 percent depending on the specific drug, averaging about 15 percent, with severity varying as well.
Antidepressants do not produce drug-seeking behavior in the way addictive substances do, but they do create physical dependence as the brain adapts to their presence; withdrawal can occur after stopping nearly every class of antidepressant. Patients who have taken SSRIs for years often discover, when they try to stop, that tapering can take many additional months and that some symptoms—sensory disturbances, dizziness, emotional volatility—linger well beyond the official timetable. Whether this is called dependence or something else is a question of branding more than biology.
The point is not that these drugs are useless or that no one should take them. For severe depression and disabling anxiety, the benefits are real and sometimes lifesaving. The point is that the current prescribing regime, having absorbed an ever-larger share of the population into long-term treatment, has done so under marketing language that systematically downplayed how hard it would be to leave. A patient who started an SSRI in college for moderate situational distress, was never reassessed, and now finds herself unable to taper off ten years later has been failed by the system in a particular way—not because the drug is uniquely sinister, but because the institutional pressures all run in one direction: toward starting, continuing, and adding, never toward stopping.
Stimulants are a different case and worth treating separately.
Stimulants and the Diagnostic Pipeline
Stimulant prescriptions in particular have climbed for years; Axios recently reported on a Trilliant Health analysis that found that prescriptions for attention-deficit/hyperactivity disorder (ADHD) medications such as Adderall and Vyvanse nearly doubled among commercially insured women ages 18 to 44 in the years following the pandemic.
These drugs are powerful cognitive stimulants that produce real, measurable improvements in alertness, motivation, and short-term task performance in most people who take them, regardless of whether those people meet criteria for ADHD. The clinical case for restricting them to patients with diagnosed ADHD, therefore, depends heavily on how reliably we can distinguish ADHD from ordinary variation in attention, and that distinction has grown progressively blurrier with each revision of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM).
Expanding the Definition
Successive revisions to the DSM—particularly DSM-IV and DSM-5—have broadened diagnostic boundaries for ADHD, increasing the number of people who meet the criteria. Studies comparing DSM-III and DSM-IV criteria show that substantially more people meet the newer definitions, in part because DSM-IV introduced inattentive presentations and expanded symptom profiles, changes associated with higher prevalence estimates (e.g., prevalence rising from 9.6 percent under DSM-III to 17.8 percent under DSM-IV in one study, and 7.3 percent to 11.4 percent in another, largely due to newly identified inattentive cases). DSM-5 further widened eligibility by raising the age-of-onset threshold to 12 and lowering symptom requirements for adults, increasing diagnoses, particularly among adolescents and adults. Still, these definitional changes explain only part of the overall rise.
Once a clinician treats subjective complaints about attention as the gateway to stimulants such as amphetamines, two things follow. First, patients learn to describe their experiences in the language that produces a prescription. Second, clinicians, working in fee-for-service systems where a diagnosis makes the visit billable, have little institutional reason to push back. As we have detailed in our analysis of how the American healthcare system rewards psychiatric overdiagnosis, when payment depends on diagnosis and the criteria are subjective, the boundaries of illness expand. Stimulants are the cleanest illustration: a class of drugs whose use has scaled in lockstep with the broadened criteria that now justify it.
Stimulants on College Campuses
The other half of the picture is what already happens informally. The same molecules that adults must obtain a clinical diagnosis to access—often after a brief telehealth visit with a generalist—are widely traded on college campuses, where students take their friends’ leftover pills to study for exams. The current regulatory regime manages to be restrictive and permissive in the worst possible combination: access is difficult enough that it requires medicalizing ordinary attentional variation, yet easy enough that the resulting prescriptions diffuse well beyond their intended patients.
Surveys of American college students consistently find that double-digit percentages report nonmedical use of prescription stimulants at some point during their education, with much higher rates at academically competitive institutions and during finals weeks. Most of these students are not addicts. They are healthy young adults using a drug to study, in much the way previous generations used coffee, nicotine, or, less wisely, much harder substances. They obtain the pills from peers who do have prescriptions, often acquired through exactly the diagnostic pipeline described above.
The result is a kind of de facto deregulation, mediated through medical fiction. The drug reaches the people who want it, but only after passing through a clinical apparatus that pretends otherwise. Students lie, or shade the truth, to clinicians who have professional reasons not to look too closely. Insurance dollars subsidize the supply. Friends share. The Drug Enforcement Administration tightens production quotas in response to the resulting shortages, which then squeeze legitimate patients with severe ADHD—the very population the gatekeeping was nominally designed to prioritize.
A Libertarian Diagnosis
There are two coherent positions on stimulants. The current policy manages neither and fails on its own terms.
One position holds that amphetamines are too dangerous for general adult use and should be tightly restricted, in which case the diagnostic gates should actually mean something—and the casual prescribing patterns that produced a near-doubling of stimulant prescriptions among working-age women in a few years would represent a serious regulatory failure.
The other position, which we find more sensible, holds that competent adults are capable of making their own decisions about cognitive enhancers, weighing the risks against the benefits for themselves. On that view, it is hard to justify paternalistic prescription requirements for drugs that millions of people have used for decades.
Either of those positions would be more honest and more humane than the current hybrid, in which the drugs are simultaneously prescription-gatekept and overprescribed, and in which the diagnostic apparatus exists less to identify illness than to launder access. The hybrid is also worse than either pure alternative for the patients it claims to serve. People with severely impairing attentional disorders face periodic shortages because manufacturing quotas have not kept pace with prescriptions written largely for milder presentations. People without disorders pay an unnecessary tax—in time, money, and the indignity of self-pathologizing—to access a substance many use no more recklessly than caffeine.
And here the two threads tie together. Because access requires a diagnosis, people learn to pathologize their own attention spans, casting ordinary distractibility, restlessness, or boredom as evidence of disease. The diagnostic label, once obtained, becomes part of the self-concept. The medication confirms it. As we argued in our recent piece on social media addiction, this is the same dynamic that increasingly converts ordinary human behaviors into clinical disorders whenever the institutional gradient runs in that direction. We are pathologizing the inattentive features of a mind that evolved for foraging, vigilance, and rapid switching between local cues—a mind whose default mode looks pathological mainly when held against the demands of an industrialized economy.
What Reform Would Look Like
Mass medication of a generation is not, on its face, a triumph of access. Roughly three in ten young Americans are now on a psychiatric drug at a time when, by most measures of material life, they have less to fear than any cohort in human history. Some benefit. Many others are taking medications for conditions whose definitions have expanded to include ordinary distress.
A more honest approach would treat competent adults as capable of making their own decisions about these drugs, without requiring government permission. Patients and clinicians can still rely on information and professional guidance, but the final choice should rest with the individual. At the same time, the system’s incentives should no longer reward turning everyday variations in mood, attention, and behavior into diagnoses.
Patients don’t need a system that nudges them toward a diagnosis just to get help—or one that makes it difficult to stop once they start. They need clear information, flexibility, and the freedom to make their own decisions. The current system offers none of those.