Mammalian Toxicology, Session 12
Discussion of Draft Projects & Discussion Questions
It is probably wisest to read the material listed under Session 13 prior to next week. That material will be relevant to the new Discussion Questions listed and needs to be digested prior to the Final.
Final Exam
Next week, May 4, I will post a set of 10-point questions in conjunction with the lecture posting. These constitute the first half of the final exam. You will be responsible for choosing 5 of these to answer and submit before May 11. On May 11 you will answer a series of ten 5-point questions that constitute the other half of the final. All responses will be submitted in class or via the Prometheus online site.
Discussion of Draft Projects
Please note that I made the following notations independent of comments made by students. They may therefore be redundant with, or contradictory to, other critiques. Please use them in conjunction with any other available commentaries, as well as your own judgment, to revise and improve the penultimate drafts so they are suited to final submission and posting to the course Web site.
Group 1: ????
Group 2: ????
Discussion Questions
Agencies & Risk
3. Modern toxicology includes the concepts of causality, risk assessment, and site specificity of action. Why are all these concepts relevant to the following agencies: USPHS, NSC, EPA, FDA, OSHA, NRC? How many of these agencies derive directly from concerns about toxicant exposures?
Most of you responded by looking up each of these agencies and noting their mission statements, all of which relate to public health and safety. The National Institute of Environmental Health Sciences, as a part of NIH within USPHS, is notable for its very high profile with respect to toxicology and toxicological studies. If you missed that one, check it out. Also look at the historical roots of the USPHS and the FDA. They are a lot longer than you might imagine, but they did arise directly from concerns about toxicant exposures. EPA arose from the Federal Water Quality Agency (also known as the Federal Water Pollution Control Agency) in the early 1970s; both of these entities were responses to the likes of Rachel Carson. I worked as an analytical chemist evaluating PCBs for FWQA briefly in 1970; my dissertation work was supported in part by EPA.
Food Safety
11. What steps are required for drug product clearance by the FDA? How are food supplements handled by the same agency? Are there additional or alternative rules for genetically engineered foods or drugs produced by genetically engineered organisms? Do such rules seem adequate to assure the safety of humans using genetically engineered organisms or products? What about the environmental safety of these organisms or products?
The steps of testing leading up to clinical trials were enumerated by several people, as was the lack of such steps for food supplements and genetically engineered foods. Note, however, that genetically engineered pharmaceuticals do go through that same series of tests. Food toxicology contains a lot of contradictions, twists, and turns because of the ubiquity of chemicals and their essentiality for the maintenance of life. Organisms cannot live without food, and they cannot eat without taking in and absorbing compounds that have undesired effects. The goal of regulators is to minimize, not eliminate, exposures to deleterious compounds for the majority of the susceptible population. That is obviously tricky, especially when legislation like the Delaney clause exists (you need to know what that is, by the way). Do note the way food contents are classified and subclassified for regulatory purposes, and that some subclasses are regulated or require testing while others do not.
With respect to genetically engineered foods, most of you felt they were not adequately regulated. How do you feel about new poultry breeds or bean hybrids derived by traditional breeding methods? Note that many of these will contain mutations, proteins, or secondary metabolites that do not exist in the parent strains. Should meat or seeds from these also be put through thorough toxicological testing? In other words, where do we draw the line between what requires testing and what does not?
NOAEL, LOAEL & Threshold Model
12. Are the NOAEL and LOAEL concepts most compatible with a threshold or a no-threshold version of dose responses? How do they differ from zero dose? Does the last answer change for environmental exposures if newer methods allow lower limits of detection for the toxicant in question?
A zero dose is a zero dose; that is, no toxicant given. That may not be the same as absolutely no exposure if the toxicant is ubiquitous. Under those circumstances the zero dose would be the tested group that had been given no measurable toxicant above that found in the environment. In all tested groups there is variability of response, as measured as a parameter on the y-axis of a dose-response curve. That variability means there will always be some uncertainty about the response at zero dose, and there will always, theoretically, be a range of nonzero doses that cannot be distinguished by bioassay from zero, because each of their ranges of response will overlap that seen for the zero dose. The first dose that lies statistically beyond that range of uncertainty is the LOAEL. The last dose that lies within that range is the NOAEL. If enough doses are tested with enough animals, there will always be a region of low doses that falls below the LOAEL, and the data can be described by a threshold model. If very few doses are tested, it will be difficult to see this region and the data would almost automatically need to be fitted to the no-threshold model.
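The statistical picture above can be sketched numerically. This is a minimal illustration, not a regulatory method: the function name, the two-standard-error cutoff, and all the dose-group numbers below are invented for the example.

```python
import math

def noael_loael(control_mean, control_sd, n, dose_groups):
    """Classify tested doses against a zero-dose (control) group.

    dose_groups: list of (dose, mean_response, sd) tuples, one per group.
    A dose is called 'adverse' here when its mean lies more than ~2
    standard errors of the difference above the control mean (a crude
    stand-in for a formal hypothesis test).
    Returns (NOAEL, LOAEL); either may be None.
    """
    noael, loael = None, None
    for dose, mean, sd in sorted(dose_groups):
        se_diff = math.sqrt(control_sd ** 2 / n + sd ** 2 / n)
        if mean - control_mean > 2 * se_diff:
            loael = dose  # first dose statistically beyond the zero-dose range
            break
        noael = dose      # last dose indistinguishable from zero dose
    return noael, loael

# Invented bioassay numbers: control response 10 +/- 2, n = 10 per group.
groups = [(1, 10.2, 2.0), (3, 10.8, 2.0), (10, 12.5, 2.0), (30, 15.0, 2.0)]
print(noael_loael(10.0, 2.0, 10, groups))  # -> (3, 10)
```

With more doses tested between 3 and 10, the gap between NOAEL and LOAEL would narrow, which is exactly the "enough doses, enough animals" point made above.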
Note that what becomes tricky is not this conceptual formulation of the problem but its practical application. I can dilute a dose serially with a great deal of confidence before administering it to animals. But in testing, what is important is the biologically available dose, and knowing what that is requires an independent assessment of the levels of the toxicant in the test animals. This is where the issue of analytical sensitivity comes in. If I cannot measure something in the animal, then the response should be grouped with the zero-dose group. And it may well be that my measurement methods do not allow me to "see" the toxicant in the test organism until I reach beyond the statistical line separating NOAEL and LOAEL. In that case, the data will only fit a no-threshold model.
Repair Functions & Toxicity
14. How do repair functions complicate the analysis of toxicity in acute exposure models? What about chronic exposure models? Do they have the same impact on results in adults as opposed to developing animal models?
Repair functions take time, so they are most important for chronic models. But "acute" in toxicology does not mean only a few minutes. Some repair functions may take place within the timeframe of even acute tests, causing an increase in the NOAEL or threshold for toxicity. They would have similar impacts on chronic models and, indeed, may make some toxicants appear benign in such systems.
As to which systems would be most sensitive to toxicant insult, the general rule is that developing, aging, and immunocompromised individuals will demonstrate the least effective repair functions and therefore be susceptible to the lowest doses of toxicants. Obviously there will be exceptions, especially where, for instance, the ultimate target of a particular toxicant does not yet exist (or no longer exists) at the stage of toxicant exposure, or where physiological mechanisms, e.g., fetal hemoglobin's affinity for oxygen, allow functioning in the presence of toxicant that would not normally occur in the young adult/adult.
Toxicology Information
15. Search out the following sites and explore them: HazDat, EXTOXNET, RTECS, Toxline, IRIS, IARC. What information do they contain? Do they appear up to date? Print out some examples of the contents and see if you can interpret them. To what area(s) of toxicology is each of them relevant?
In former years many of the answers did a nice job of looking these up and actually exploring their contents. Others did not tell us whether the databases were up to date, nor anything about their examination of the databases' contents.
Temporal Changes in Toxicity
18. Given an increased knowledge of the metabolic pathways utilized by mammals to process toxicants, how would you now design an experiment in which two neuroactive drugs were to be used repeatedly to test particular brain circuits? Would any of this make any difference to your interpretation of results in which there was an apparent decline with time in the responsiveness to one of the drugs? Does this knowledge make you view the use of multiple drugs in elderly patients any differently?
Neurotransmitters, and drugs that impact their actions, like many hormones in the periphery, have a habit of altering their target tissues. They often cause a decrease in tissue sensitivity to the same compound in the short term. For milliseconds to seconds this is due to depolarization/repolarization, the neural refractory period. For minutes to hours this may involve receptor down-regulation, in which the transmitter-receptor complexes are internalized by the target cell and decoupled and/or degraded. For minutes to days it may also involve receptor desensitization, which is usually associated with alteration of the intracellular signaling pathways so as to make them unresponsive to a like stimulus until new conditions appear or new signal proteins are synthesized. Obesity-related diabetes is a good model that demonstrates how continuous stimulation with a hormonal signal, insulin, leads to down-regulation/desensitization of its own receptors, generating a refractory, unresponsive, or insensitive state in insulin target tissues. Opiate abuse produces a similar state in the CNS that leads addicts toward an upward spiral of dosage needed to attain a given state of intoxication.
There are also cases, e.g., estrogen-sensitive tissues, in which a stimulus up-regulates its own receptors, making the target tissues more sensitive to a second dose of hormone/neurotransmitter or agonist. Likewise, a lack of regular stimulation, e.g., limb denervation, can also lead to a state of hypersensitivity marked by the presence of increased numbers of neuroreceptors at the cell surface. Blockers of presynaptic release often generate hypersensitivity states in the post-synaptic neurons.
And there are also instances where treatment with one hormone or neurotransmitter up-regulates the response to a second hormone or neurotransmitter, e.g., progesterone primes endometrium to the actions of estrogen, and estrogen primes gonadotropes to the actions of LHRH.
At the level of the organism, one drug can often modify the activation or deactivation of a second by altering the activities of phase I or II enzymes in the liver, or the activities of transporters such as the MDR or OAT systems in the kidney.
So the actions of one neuroactive drug can impact a second neuroactive drug in a variety of ways that may be antagonistic, agonistic, competitive, additive, or synergistic.
How to test two such drugs? Each would need to be tested by itself in a design that included multiple doses administered at fixed intervals to each animal/subject. Interaction groups would need to be used to explore the impacts of which drug preceded which, and how any unique toxicities or actions of each drug were affected by the other given simultaneously, before, or after it. The affected brain circuits would obviously be of most interest, but some attempts would be needed to explore hepatic functions (formation of metabolites in blood or urine) and the comparative absorption and concentration of the drugs in the brain areas of interest.
The elderly have declines in hepatic functions and often suffer from compromised neural function. Drugs may well have a prolonged half-life in circulation. If the parent drug is active, this would prolong its actions. If a metabolite is active, it might not reach effective concentrations until much later than in a younger individual or, indeed, may never reach an effective level. The elderly may not be able to tolerate a synergistic action of a second drug in the face of one acting on the same or interconnected neural tracts. Diana's comments are particularly pertinent here.
DNA Size
19. Look up the dimensions for 1 base pair of DNA and calculate the length of 1 complete copy of cellular DNA (3 x 10^9 base pairs). Compute the volume of a typical cell nucleus, about 10 um (10 x 10^-6 m) in diameter. How often does the DNA have to be folded to fit in that volume? Given the number of folds, how many folds are there in a gene or set of genes, like the immunoglobulin heavy chains, that might occupy 1% of the genome length in the germline but only 0.5% in mature lymphocytes? Comment on how this might have an impact on the frequency of development of autoimmune disease if a toxicant interferes with normal DNA repair enzymes.
The point is that DNA is so long (about 1 m, or 2 m if you include the heterochromatic portion of total DNA, which, unlike the euchromatic portion, has not yet been sequenced) that it has to be folded many, many times to fit inside the nucleus (the computation should use the length of the DNA versus the maximum diameter of the nucleus), and that even a small stretch of DNA still contains many, many folds. That folding means the DNA has to be opened up to allow transcription or replication. Somatic rearrangement eliminates a sizeable chunk of DNA and requires double-stranded repair of that area, a type of repair particularly prone to mistakes. Not only does activation of a B or T cell lead to somatic rearrangement, it leads to active clonal replication, that is, lots of mitotic division. And mitosis means more DNA cutting and pasting, along with surveillance of DNA repair that can be imperfect. If imperfections of DNA arise as a result and are not eliminated by apoptotic mechanisms in the thymus, clones of these cells can persist. If these clones express anti-self immunoglobulins, the result is an autoimmune disease.
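The arithmetic behind question 19 can be worked out directly. This sketch assumes the usual ~0.34 nm rise per base pair for B-form DNA and, following the hint above, counts "folds" simply as total DNA length divided by the maximum nuclear diameter.

```python
import math

BP_RISE_M = 0.34e-9        # ~0.34 nm per base pair (B-form DNA) -- assumed value
GENOME_BP = 3e9            # base pairs, from the question
NUCLEUS_DIAM_M = 10e-6     # ~10 um nuclear diameter, from the question

dna_length_m = GENOME_BP * BP_RISE_M                            # total DNA length
nucleus_vol_m3 = (4 / 3) * math.pi * (NUCLEUS_DIAM_M / 2) ** 3  # sphere volume
folds = dna_length_m / NUCLEUS_DIAM_M        # folds vs. the maximum diameter
folds_in_1pct = 0.01 * folds                 # folds within 1% of genome length

print(f"DNA length: {dna_length_m:.2f} m")          # -> DNA length: 1.02 m
print(f"Nuclear volume: {nucleus_vol_m3:.2e} m^3")  # ~5.24e-16 m^3
print(f"Folds: {folds:,.0f}; in 1% of genome: {folds_in_1pct:,.0f}")
```

So roughly 100,000 folds for the whole genome, and on the order of 1,000 folds even in a region spanning just 1% of it, which is why somatic rearrangement involves so much cutting and pasting.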
Gut Physiology
20. The gut physiology of various mammals differs in ways that make oral intoxication of one species potentially very different from oral intoxication in another species. Look up the comparative gut physiology for as many mammalian species as possible. Note differences in the sizes of the stomach, the length and nature of the small intestine, the presence or absence of a caecum and its size, and the length and size of the large intestine and colon. How might pure glucose ingestion affect a ruminant? A hindgut fermenter like a horse? Are there key differences among species that will define which toxicants will be more or less potent or efficacious in going from one species to another? Are the test species most commonly being used (rat, dog, mouse, rhesus) good models for humans? Or for other, wild species?
The point here is that gut physiology differs markedly across species. As a result, considerations of pH sensitivity, impacts of gut microbes, and enterohepatic metabolism vary greatly across taxa and species. The relative sizes and anatomical relationships of the segments of the digestive tract have significant impacts on toxicant absorption and elimination. A toxin in one species may well not be a toxin in another species because of these variations. And dose-response relationships may differ widely among animals. Too much glucose in a cow drives the forestomach toward acidification and generates so much carbon dioxide that the animal may suffer gastric rupture due to bloat. In a horse, glucose also causes problems by altering hindgut fermentation. Too rich a diet in a horse can actually cause starvation from calorie malnutrition (calories normally derived from partial digestion of cellulose in the hindgut). Toxicants may have equally diverse impacts. So a test species that differs greatly from the species of interest with respect to gut physiology may well be inadequate to provide good predictive data for the species of interest.
Macromolecular Chemistry
21. What are the chemical structures of cellulose, hemicellulose, pectin, glycogen, lactose, and sucrose? Does this list contain the principal forms of dietary insoluble fiber, or are there others, besides the silicates and minerals, that contribute?
The structures appear in various sources. The importance of insoluble and soluble fiber is also covered in those locations. Insoluble fiber acts to stimulate peristalsis and the movement of materials through the gut. It also acts as an adsorptive surface to carry materials through the digestive tract, countering their absorption from digesta by the gut. Soluble fiber tends to slow things down and allow water and mineral resorption, which is accompanied by increased toxicant and metabolite resorption. Cellulose, lignin, and possibly hemicellulose fit into the insoluble category, along with silicon dioxide (sand) and some minerals. Pectin, gums, glycogen, and hemicellulose all fit in the soluble group, along with additional minerals. Lactose and sucrose are disaccharides (two simple sugars linked together) and do not contribute to either of these classes of fiber.
Do note here, again, that what begins as insoluble fiber in some species, like ruminants, does not end that way. So what can be classed as fiber is physiologically handled differently across species and therefore may have differing impacts on toxicant exposures depending on what species is in question.
Portal Site Active Agent
23. In the compartmental models of pharmacokinetics much emphasis is placed on distributional volume and the role of toxicant sinks and sources within tissues. Route of exposure plays an important role, as does mode of elimination. What happens to these models and considerations if the agent is active within the tissues where it is introduced? For example, how do we deal with a compound that acts on the lung if it is introduced as an aerosol?
I refer you to the coverage in C&D of the liver and lung, especially in chapters 5 (pp 116-117), 7, 13, and 15 (15 is not required reading). My point with this question is that if a toxicant acts in a deleterious manner on a tissue that is important to its absorption or elimination, it will very clearly alter the observed pharmacokinetics. If the compound killed the liver cells involved in detoxifying it, the circulating half-life of the compound would be prolonged. If a toxicant required an active transport process to be absorbed but acted to kill the cells involved, it would limit its own absorption. Note that in both these instances sublethal doses would display pharmacokinetics different from what happens at higher, directly toxic, doses.
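The half-life-prolongation point can be made quantitative with a one-compartment, first-order model (a deliberate simplification; the function names, clearance, and volume figures below are hypothetical): t1/2 = ln(2) x V / CL, so anything that reduces clearance lengthens half-life proportionally.

```python
import math

def half_life_h(clearance_l_per_h, volume_l):
    """One-compartment, first-order elimination: t1/2 = ln(2) * V / CL."""
    return math.log(2) * volume_l / clearance_l_per_h

def concentration(c0, clearance_l_per_h, volume_l, t_h):
    """C(t) = C0 * exp(-k * t), with elimination rate constant k = CL / V."""
    k = clearance_l_per_h / volume_l
    return c0 * math.exp(-k * t_h)

# Hypothetical numbers: if a toxicant kills half the hepatocytes that
# clear it, clearance falls from 2 L/h to 1 L/h and half-life doubles.
print(half_life_h(2.0, 40.0))  # ~13.9 h
print(half_life_h(1.0, 40.0))  # ~27.7 h
```

The same framework shows the converse case: a toxicant that destroys its own absorptive transporters lowers the effective dose reaching the compartment in the first place.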
Consideration of aerosol delivery in the lung requires knowledge of the size of the aerosol particles, because larger aerosols are eliminated higher in the respiratory tree. Large aerosols act principally in the gut because they are trapped in the nasopharyngeal area and moved to the digestive system. Those of intermediate size (2-5 um) are transported in a retrograde manner back to the nasopharynx from the bronchiolar tree. Those less than 1 um enter the alveolar gas-exchange areas, where absorption into the blood occurs directly.
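The size classes above can be captured as a rough lookup. This is a hypothetical sketch only: real deposition varies continuously with particle size, and the 1-2 um range falls between the categories given in the text.

```python
def deposition_region(diameter_um):
    """Rough fate of an inhaled aerosol particle by diameter, mirroring the
    size classes in the text. Cutoffs are illustrative, not authoritative."""
    if diameter_um > 5:
        return "nasopharyngeal: trapped and swallowed, acts mainly via the gut"
    elif diameter_um >= 2:
        return "bronchiolar: moved retrograde back to the nasopharynx"
    elif diameter_um < 1:
        return "alveolar: direct absorption into the blood"
    else:
        return "transitional (1-2 um): mixed bronchiolar/alveolar deposition"

print(deposition_region(10))   # nasopharyngeal route
print(deposition_region(0.5))  # alveolar route
```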
Damage high in the respiratory tree may well allow access of the toxicant to the bloodstream, but would do so more slowly (as it requires the toxic insult first) than an aerosol acting within the alveoli. Indeed, a toxicant acting to scar the alveoli might limit the uptake of the compound. More acutely, however, it might simply increase the speed of toxicant absorption unless it compromised the alveolar capillaries during its actions.
Carcinogens and Risk Assessment
25. The statement is made in Klaassen & Watkins III, Companion Handbook for Casarett & Doull's Toxicology, 5th Ed. (p 189) that: "For risk analysis, it is assumed that cancer induction differs from all other toxicological events in that the induction of cancer is a nonthreshold phenomenon or an accumulation of many such irreversible events." Is this justified mechanistically?
Obviously, based on an earlier response from an EPA employee, the EPA wonders about the same question. Repair mechanisms may well mean the statement is technically false. From a practical, regulatory point of view, which is by its nature conservative, the statement is an attempt to optimize risk assessments. Risk analysis works from data, not just biologically based models. In dealing with cancer, it is dealing with information that is often qualitative: yes, there is a cancer; no, there is not a cancer, with "yes" and "no" defined by agreed-upon appearance of cells on histological slides or growth of abnormal masses on tissues. Biological tissue will demonstrate a finite background of such abnormalities due to its innate growth properties or to exposure to agents that are not being controlled in a particular toxicological study. So, from a theoretical viewpoint, there will normally be variation in the data for the zero dose as well as every other dose tested. Given that situation, there should again be a region of low dose where the biological response is not statistically different from that seen with the zero dose. Analytically, we may or may not be able to measure residues that differ from zero within these animals, especially when we consider that there may be a very long latency between exposure and development of tumors. So in this situation, we may be forced to look only at dose administered versus tumor formation. The time lapse and numbers of animals that may need to be examined to distinguish background from a LOAEL, however, may preclude accurate establishment of a NOAEL. Here again we would be forced to use a no-threshold model.
Risk assessment is also usually looking beyond the induction stage of cancer formation. Tumors are "fixed" and forming clonal expansions of the original transformed cell by the time they are detected by gross inspection, and are usually at that stage when demonstrated histologically. Though they need not have reached the metastatic stage, they are usually beyond the point at which repair processes can cause them to spontaneously involute.

Given these conditions, it is probably acceptable for risk analyses to make the simplifying assumptions stated.
Genetic Toxicology & Evolution
26. If pyrimidine dimers and chemical adducts to DNA are preferentially removed in transcribed and active DNA sequences relative to untranscribed and inactive DNA sequences, what does that suggest about the hot spots for genetic drift upon which evolution is based? If you were looking for gene sequences to use to differentiate among related species, what kind of genes would you tend to look at as indicators?
If repair concentrates on actively transcribed regions (presumably because these are the most important for maintenance of life), then the untranscribed regions should be the ones that accumulate the most mutations. Indeed, the whole idea that the Y chromosome is shrinking over evolutionary time is based on the evidence that it contains relatively few genes, most of which do not seem to be central to most of metabolism or cellular physiology. While existing DNA sequences for many genes in many species allow comparisons using a large number of proteins, we might expect the most marked differences to exist between proteins that are relatively rarely transcribed. So hemoglobins or cytochromes are probably not the genes of choice. Rather, protein markers of extreme age, such as the crystallin proteins of the lens of the eye, might serve as better choices.
Genetic Toxicity: Possible Models?
27. Why might the Plains Viscacha of
Most people located our chinchilla relative and figured out that the female produces a remarkable number of eggs at each ovulation. Indeed, it is not clear that the viscacha shares the same meiotic arrest or atretic processes that occur in most other mammals. That she can only successfully carry twins is quite interesting in itself. But she might well be a great model for examining preovulatory toxicant insults. Post-fertilization might also work well, at least until the time of implantation, after which the viscacha's own physiology would probably complicate the toxicant evaluation process. Note that there may be practical reasons why this species has not cropped up in the toxicological literature: the gestation is about as long as that of a small ruminant or monkey, the animal is not very small (2-8 kg), and it is social, living in groups. Housing groups of these animals would be expensive.
DAZ1 is so frequently deleted in human male offspring that it is unclear whether these de novo deletions occur only in genetically susceptible families or whether they exist in every male's spermatogenic products. Since sperm can be evaluated for genetic mutations by lysis of the sperm heads and in situ hybridization using fluorescent DNA probes, it should be possible to screen semen specimens for DAZ1 deletions in combination with Y chromosome-specific DNA markers. Germ cell toxicity would then be evident as an increase above baseline in those Y chromosome-bearing sperm that lack the DAZ1 marker.
DAZ1 is conserved across primates, making it a good tag for the insults suggested. However, it is also present as a six-fold tandem repeat, which means that up to five copies could be deleted without losing a positive in situ hybridization signal for the locus. Thus, to use the marker optimally, some kind of intensity index would be needed that could demonstrate how many copies of DAZ1 existed in a given sperm. Alternatively, quantitative PCR methods could answer the same questions.
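One way such an intensity index might work, sketched here as a purely hypothetical calculation: compare the DAZ1 FISH signal in each sperm to a single-copy reference probe in the same cell and round to an integer copy number, assuming signal scales roughly linearly with copies. The function name and all intensity values are invented.

```python
def daz1_copies(daz1_signal, reference_signal, max_copies=6):
    """Hypothetical intensity index: estimate how many copies of the DAZ1
    tandem repeat a sperm carries by comparing its FISH signal to a
    single-copy Y-chromosome reference probe in the same cell. Assumes
    signal scales roughly linearly with copy number."""
    if reference_signal <= 0:
        raise ValueError("reference signal must be positive")
    estimate = round(daz1_signal / reference_signal)
    return max(0, min(max_copies, estimate))

# Invented intensities: a full six-copy locus vs. a partial deletion.
print(daz1_copies(6.2, 1.0))  # -> 6
print(daz1_copies(2.9, 1.0))  # -> 3
```

In practice, calibration against sperm of known genotype would be needed before any such index could be trusted.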
Cross System Enzyme Induction
29. A number of endogenous compounds, as well as xenobiotics, cause induction of enzymes that apparently have no role in their own metabolism. Why might this make sense evolutionarily? Or, why might it simply be a relic of evolution within these metabolic systems? Provide specific examples, if possible, to defend your argument (or demolish someone else's).
Induction of phase I and phase II enzymes is the classic example here. The plethora of CYP genes, however, probably arose by repeated gene duplication, translocation, and divergence. It may well be that control elements that were important for metabolizing one class of compound remained with a divergent enzymatic gene, making the control an evolutionary relic. On the other hand, coordinate control of diverse metabolic pathways, e.g., electron transport systems that produce ATP and sodium pumps in the cell membrane that use ATP, can often be found. Since many of the CYP enzymes use NADP(H) as a cofactor for activity, coordinate regulation of their activities makes metabolic sense. The involvement of amino acid metabolism in glutathione production and use also makes coordination between glutathione production and use a probable arena of coordination.
Assignments for Session 3: Calcium Controls
2. Describe the control circuit for parathyroid hormone, calcitonin, and cholecalciferol; how would a blocker of 1[alpha]-hydroxylase impact calcium deposition in bone?
This is simply a review of calcium and bone metabolism. I refer you to any endocrinology, physiology, or cell biology textbook. The pertinent slides exist near the end of the list on my Endocrinology course website. Parathyroid hormone (PTH) and calcitriol (1,25-dihydroxy-cholecalciferol; 1,25-dihydroxy-vitamin D3) counter the actions of calcitonin (CT). PTH and calcitriol cause serum calcium levels to rise by increasing calcium uptake from intestine and bone; CT lowers serum calcium levels by stimulating calcium clearance through the kidney and blocking reuptake from bone. Further, calcitriol is formed by the sequential actions of UV light on 7-dehydrocholesterol passing through exposed skin, liver 25-hydroxylase on cholecalciferol, and 1-hydroxylase in kidney on 25-hydroxy-cholecalciferol. The 1-hydroxylase is stimulated by PTH, while a competing inactivation pathway involving 24-hydroxylase is stimulated by CT.
Note that in the human most of the control is exerted via the PTH and calcitriol arm of the controls, with CT playing a normally minor role.

A 1-hydroxylase blocker would tend to block bone resorption and therefore promote calcium deposition in bone.
Discussion Questions
PCBs & Thyroid Function (QS2Q3)
13. Exposures to PCBs can potentially alter thyroid function, in part because of the structural similarities of some of the PCB isomers to thyroxine or triiodothyronine. About 50% of thyroxine is carried in circulation by thyroid binding globulin, TBG, while 45+% is carried by albumin or transthyretin. Much of thyroxine enters cells by diffusion, but a substantial quantity also enters via the aromatic amino acid transport proteins in cell membranes. After cell entry, thyroxine is deiodinated to triiodothyronine, which then binds to receptors that usually reside on DNA binding sites within the nucleus; binding often results in release of the hormone-receptor complex from DNA and in a change in DNA structure and transcriptional activity. What are the obvious possible targets for PCB toxic effects? If we include molecular clearance pathways, are there other targets? What if the toxicant is a slightly acidic derivative of a PCB? What about an amine derivative of a PCB?
© 2005 Kenneth L. Campbell