First recorded use of a traditional tombstone?

The question is rather straightforward. I consider a "traditional" tombstone to be some sort of monument bearing an inscribed name and a date of birth/death or age.


Ancient Egypt is thought to be the origin of the funerary tombstone, though the original form was called a stele (or stela; plural stelae), from the Greek word for a standing block or pillar. Stelae have been documented as far back as Egypt's First Dynasty (c. 2890 B.C.) and were later used by many other cultures. Their most extensive recorded use was in Attica in Greece, exemplified by the Greek marble funerary stelae. The Romans adopted the stele during their expansion into Greece, along with other elements of Greek culture.



There are almost as many different characteristics of vampires as there are vampire legends. But the main characteristic of vampires (or vampyres) is they drink human blood. They typically drain their victim’s blood using their sharp fangs, killing them and turning them into vampires.

In general, vampires hunt at night since sunlight weakens their powers. Some may have the ability to morph into a bat or a wolf. Vampires have super strength and often have a hypnotic, sensual effect on their victims. They can’t see their image in a mirror and cast no shadows.


A History of Cosmetics from Ancient Times

Civilizations have used cosmetics – though not always recognizable compared to today’s advanced products – for centuries in religious rituals, to enhance beauty, and to promote good health. Cosmetics usage throughout history can be indicative of a civilization’s practical concerns, such as protection from the sun, indication of class, or conventions of beauty. The timeline below represents a brief history of cosmetics, beginning with the Ancient Egyptians in 10,000 BCE through modern developments in the United States. You can use the following navigation to jump to specific points in time.

Cosmetics in the Ancient World

10,000 BCE:
Cosmetics are an integral part of Egyptian hygiene and health. Men and women in Egypt use scented oils and ointments to clean and soften their skin and mask body odor. Oils and creams are used for protection against the hot Egyptian sun and dry winds. Myrrh, thyme, marjoram, chamomile, lavender, lily, peppermint, rosemary, cedar, rose, aloe, olive oil, sesame oil, and almond oil provide the basic ingredients of most perfumes Egyptians use in religious rituals.

4000 BCE:
Egyptian women apply galena mesdemet (made of copper and lead ore) and malachite (a bright green paste of copper minerals) to their faces for color and definition. They use kohl (a combination of burnt almonds, oxidized copper, different-colored copper ores, lead, ash, and ochre) to adorn the eyes in an almond shape. Women carry cosmetics to parties in makeup boxes and keep them under their chairs.

3000 BCE:
The Chinese stain their fingernails with gum arabic, gelatin, beeswax, and egg. The colors are used as a representation of social class: Chou dynasty royals wear gold and silver, with subsequent royals wearing black or red. Lower classes are forbidden to wear bright colors on their nails.

Grecian women paint their faces with white lead and apply crushed mulberries as rouge. The application of fake eyebrows, often made of oxen hair, is also fashionable.

1500 BCE:
Chinese and Japanese citizens commonly use rice powder to make their faces white. Eyebrows are shaved off, teeth are painted gold or black, and henna dyes are applied to stain hair and faces.

1000 BCE:
Grecians whiten their complexion with chalk or lead face powder and fashion crude lipstick out of ochre clays laced with red iron.

Cosmetics in the Early Common Era (CE)

100:
In Rome, people put barley flour and butter on their pimples and sheep fat and blood on their fingernails for polish. In addition, mud baths come into vogue, and some Roman men dye their hair blonde.

300-400:
Henna is used in India both as a hair dye and in mehndi, an art form in which complex designs are painted on the hands and feet using a paste made from the henna plant, especially before a Hindu wedding. Henna is also used in some North African cultures.

Cosmetics in the Middle Ages

1200:
Perfumes are first imported to Europe from the Middle East as a result of the Crusades.

1300:
In England, dyed red hair comes into fashion. Society women wear egg whites over their faces to create the appearance of a paler complexion. Some people believe, however, that cosmetics block proper circulation and therefore pose a health threat.

Renaissance Cosmetics

1400-1500:
Italy and France emerge as the main centers of cosmetics manufacturing in Europe, and only the aristocracy has access. Arsenic is sometimes used in face powder instead of lead. The modern notion of complex scent-making evolves in France. Early fragrances are amalgams of naturally occurring ingredients. Later, chemical processes for combining and testing scents surpass their arduous and labor-intensive predecessors.

1500-1600:
European women often attempt to lighten their skin using a variety of products, including white lead paint. Queen Elizabeth I of England is one well-known user of white lead, with which she creates a look known as “the Mask of Youth.” Blonde hair rises in popularity as it is considered angelic. Mixtures of black sulfur, alum, and honey are painted onto the hair and lighten with sun exposure.

19th and Early 20th Century Global Cosmetics Developments

1800:
Zinc oxide becomes widely used as a facial powder, replacing the previously used deadly mixtures of lead and copper. One such mixture, Ceruse, which is made from white lead, is later discovered to be toxic and blamed for health problems including facial tremors, muscle paralysis, and even death.

Queen Victoria publicly declares makeup improper. It is viewed as vulgar and acceptable only for use by actors.

1900:
In Edwardian Society, pressure increases on middle-aged women to appear youthful while acting as hostesses. As a result, cosmetics use increases, but is not yet completely popularized.

Beauty salons rise in popularity, though patronage of such salons is not widely accepted. Because many women do not wish to publicly admit they have assistance achieving their youthful appearances, they often enter salons through the back door.

From its earliest days, the United States has been at the forefront of cosmetics innovation, entrepreneurship, and regulation. The timeline below represents a brief history of the important developments and American usage trends, as well as a regulatory history of cosmetics in the U.S.

Growth of the Industry

1886:
David McConnell founds the California Perfume Company (CPC), then located in New York. Over time, the company continues to grow and experiences great success, selling five million units in North America during World War I alone. In 1928, CPC sells its first products – a toothbrush, a powdered cleanser, and a vanity set – under the name by which it is commonly known today: Avon. The Avon line of cosmetics is introduced the next year, in 1929.

1894:
The extremely competitive nature of the industry drives a group led by New York perfumer Henry Dalley to found the Manufacturing Perfumers’ Association. The group evolved over time and, after several name changes, is now known as the Personal Care Products Council (PCPC).

1900:
The number of U.S. firms manufacturing perfumery and toilet goods increases from 67 (in 1880) to 262. By 1900, cosmetics are in widespread use around the world, including the United States.

1907:
Eugene Schueller, a young French chemist, invents modern synthetic hair dye which he calls “Oréal.” In 1909, Schueller names his company Societe Francaise de Teintures Inoffensives pour Cheveux (Safe Hair Dye Company of France), which today has become L’Oréal.

1910:
American women begin to fashion their own form of mascara by applying beads of wax to their eyelashes.

World War I & Aftermath

1914:
The onset of World War I leads to increased employment among American women. This gain in disposable income, with more discretion over its use, leads to a boom in domestic cosmetics sales.

1915:
Chemist T.L. Williams creates Maybelline Mascara for his sister, Mabel, the product’s inspiration.

1919:
Congress passes the 18th Amendment to the U.S. Constitution, commonly known as Prohibition. As originally drafted, the Amendment might have outlawed perfumes and toilet goods because of their alcohol content. The Manufacturing Perfumers’ Association (MPA), however, mobilized its forces and persuaded Congress to clarify the language to exempt products unfit for use as beverages.

The Roaring 20s

1920:
The flapper look comes into fashion for the first time and, with it, increased cosmetics use: dark eyes, red lipstick, red nail polish, and the suntan, which is first noted as a fashion statement by Coco Chanel.

Cosmetics and fragrances are manufactured and mass marketed in America for the first time.

Max Factor, a Polish-American cosmetician and former cosmetics expert for the Russian royal family, invents the word “makeup” and introduces Society Makeup to the general public, enabling women to emulate the looks of their favorite movie stars.

1920-1930:
The first liquid nail polish, several forms of modern base, powdery blushes, and the powder compact are introduced.

1922:
The Manufacturing Perfumers’ Association (MPA) changes its name to the American Manufacturers of Toilet Articles (AMTA).

1928:
Max Factor, now living in Hollywood, unveils the very first lip-gloss.

1929:
A pound of face powder is sold annually for every woman in the U.S., and more than 1,500 face creams are on the market. The concept of color harmony in makeup is introduced at the same time, and major cosmetics companies begin producing integrated lines of lipsticks, fingernail lacquers, and foundations.

The Great Depression

1930:
Due to the influence of movie stars, the Hollywood “tan” look emerges and adds to the desire for tanned skin, first made popular by fashion designer Coco Chanel, who accidentally got sunburnt visiting the French Riviera in 1923. When she arrived home, her fans apparently liked the look and started to adopt darker skin tones themselves.

1932:
In the midst of the Great Depression, brothers Charles and Joseph Revson, along with chemist Charles Lachman, found Revlon after discovering a unique manufacturing process for nail enamel that uses pigments instead of dyes. This innovation was ultimately responsible for Revlon’s success; the company became a multimillion-dollar corporation within just six years. Revlon also borrowed the concept of “planned obsolescence” from General Motors Corp. to introduce seasonal color changes. Until World War II, women tended to use up an entire lipstick or bottle of nail polish before purchasing a new one.

1934:
Drene, the first detergent-based shampoo, is introduced into the marketplace by Procter & Gamble.

1935:
Max Factor develops and introduces pancake makeup to meet the unique requirements of Technicolor film. When actresses started taking it home for personal use, he realized his new invention looked wonderful both on and off camera and decided to introduce pancake makeup to the general retail trade.

1936:
Eugene Schueller (founder of L’Oréal) invents the first sunscreen. Despite its relative ineffectiveness, this development leads to the invention of Glacier Cream by Austrian scientist Franz Greiter. Introduced in 1938, this product is cited as the first commercially viable sun protection cream. In 1962, Greiter introduced the concept of the Sun Protection Factor rating system (SPF), which has since become the worldwide standard for measuring the effectiveness of sunscreen.
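Greiter's SPF rating boils down to a simple ratio: the minimal erythemal dose (the UV dose needed to redden skin) with the product applied, divided by the dose without it. A minimal sketch in Python; the function name and the example numbers are illustrative, not taken from the source:

```python
def spf(med_protected_minutes, med_unprotected_minutes):
    """Sun Protection Factor: the ratio of the minimal erythemal dose (MED)
    on protected skin to the MED on unprotected skin, here expressed as
    minutes of sun exposure before reddening under a fixed UV intensity."""
    return med_protected_minutes / med_unprotected_minutes

# If unprotected skin reddens after 10 minutes and protected skin
# after 150 minutes, the product is rated SPF 15:
print(spf(150, 10))  # 15.0
```

In practice, SPF is measured under standardized lab conditions rather than from wall-clock minutes, but the ratio itself is the defining idea.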

1938:
Cosmetics were excluded from the Pure Food & Drug Act of 1906 because they were not considered a serious public health concern. However, an incident linked to use of an eyeliner product forced Congress to pass the Federal Food, Drug, and Cosmetic (FD&C) Act, which greatly expanded FDA’s authority to regulate cosmetics.

World War II & Aftermath

1940:
Leg makeup is developed in response to a shortage of stockings during World War II.

The FDA is transferred from the Department of Agriculture to the Federal Security Agency and Walter G. Campbell is appointed the first Commissioner of Food and Drugs.

1949:
Companies such as Procter & Gamble (who made products such as soap and laundry detergents) begin to sponsor daytime television programs that will eventually be called “soap operas,” the first of which was called These Are My Children.

The Modern Era of Cosmetics

1950:
The Modern Era of the cosmetics business begins as television advertising is first implemented in earnest.

1952:
Mum, the first company to commercially market deodorant, launches the first roll-on deodorant (under the brand name of Ban Roll-On), which is inspired by the design of another recently invented product – the ballpoint pen.

1955:
Crest, the first toothpaste with fluoride clinically proven to fight cavities, is introduced by Procter & Gamble.

1960:
Congress passes the Color Additive Amendments in response to an outbreak of illnesses in children caused by an orange Halloween candy; the Amendments require manufacturers to establish the safety of color additives in foods, drugs, and cosmetics. They also include a provision called the “Delaney Clause,” which prohibits the use of color additives shown to be a human or animal carcinogen.

“Natural” products based on botanical ingredients, such as carrot juice and watermelon extract, were first introduced. False eyelashes became popular.

1965:
The first aerosol deodorant is introduced – Gillette’s Right Guard.

1966:
Congress enacts the Fair Packaging and Labeling Act (FPLA), which requires all consumer products in interstate commerce to be honestly and informatively labeled, with FDA enforcing provisions on foods, drugs, cosmetics, and medical devices.

1970:
The Toilet Goods Association (TGA) changes its name to the Cosmetic, Toiletry, and Fragrance Association (CTFA).

1971:
In response to a citizen petition filed by the CTFA, the FDA Office of Colors and Cosmetics establishes the Voluntary Cosmetic Reporting Program (VCRP). The VCRP is an FDA post-market reporting system for use by manufacturers, packers, and distributors of cosmetic products in commercial distribution in the United States. It demonstrated the industry’s commitment to cosmetic safety and furthered the safety evaluation of cosmetic ingredients.

1973:
CTFA establishes the International Cosmetic Ingredient Nomenclature Committee (INC) – comprised of dedicated scientists from industry, academia, regulatory authorities and sister trade associations – to develop and assign uniform names for cosmetic ingredients. “INCI” names are uniform, systematic names internationally recognized to identify cosmetics ingredients that are published biennially in the International Cosmetic Ingredient Dictionary and Handbook.

The environmental movement brings challenges to the cosmetics and fragrance industry. The use of some popular ingredients, including musk and ambergris, is banned following the enactment of endangered species protection legislation.

1976:
CTFA, with the support of the FDA and the Consumer Federation of America, establishes the Cosmetic Ingredient Review (CIR) Expert Panel. The goal of the CIR is to bring together worldwide published and unpublished data on the safety of cosmetics ingredients, and for an independent expert panel to subsequently review that data. The seven-member panel consists of scientists and physicians from the fields of dermatology, pharmacology, chemistry, and toxicology selected by a steering committee and publicly nominated by government agencies, industry, and consumers. The panel thoroughly reviews and assesses the safety of ingredients and ultimately publishes the final results in the peer-reviewed International Journal of Toxicology. Today, CIR has reviewed thousands of the most commonly used cosmetics ingredients.

1980:
The 80’s saw a dramatic change from previous decades where women typically wore makeup that was natural and light. Instead, the new order of the day was to experiment with heavy layers of bold, bright colors. Gone was the golden glow of the 70’s, replaced by foundation that was one or two shades lighter than women’s natural skin tone. Smokey eyes in bright colors such as fuchsia, electric blue, orange, and green were hugely popular. The 80’s was all about taking your look to the extreme, championed by superstars such as Madonna and Cyndi Lauper.

Concerns about contaminated makeup emerged late in the decade. An FDA report in 1989 found that more than five percent of cosmetics samples collected from department store counters were contaminated with mold, fungi, and pathogenic organisms.

1981:
PCPC donates $1 million to fund a national center for the development of alternatives to animal testing – the Johns Hopkins Center for Alternatives to Animal Testing (CAAT). Its mission is to promote and support research into alternatives to animal testing. To date, CAAT has funded approximately 300 grants totaling more than $6 million.

1989:
Look Good Feel Better is founded by the Look Good Feel Better Foundation (formerly the Personal Care Products Council Foundation) – a charitable organization established by CTFA to help hundreds of thousands of women with cancer by improving their self-esteem and confidence through lessons on skin and nail care, cosmetics, and accessories to address the appearance-related side effects of treatment.

1990:
Animal testing for cosmetics continues to be a hot topic in the beauty industry, driven by consumer preferences. In June 1989, Avon became the first major cosmetics company in the world to announce a permanent end to animal testing of its products, including testing done in outside laboratories. Other companies subsequently follow suit throughout the next decade and efforts intensify to develop and gain governmental approvals for alternative methods to substantiate product safety.

1999:
The first ever Cosmetics Harmonization and International Cooperation (CHIC) meeting is held in Brussels, Belgium. At the conference, representatives from the U.S. FDA; the Japanese Ministry of Health, Labour, and Welfare (MHLW); Health Canada; and Directorate General III of the European Union discuss broad cosmetics topics, including basic safety substantiation, exchange of data and information, development of an international alert system, and an international memorandum of cooperation.

2000:
Consumers in the early 2000s are pressed for time. As the pace of work and home life became more stressful and hectic, cosmetics and personal care products that emphasized relaxation, but which could still be used quickly, constituted a strong category within the industry. Among these products are aromatherapy scented body washes, as well as other liquid and gel soaps, which start to replace traditional bar soaps.

The industry experiences increased challenges including product safety concerns, calls for scientific data to document product claims, increasing environmental concerns, and pressure from the growing animal rights movement. Congress began investigating possible revisions to the traditional “drug” and “cosmetic” definitions established under the Food, Drug, and Cosmetic Act.

2004:
The European Union (EU) implements an animal testing ban on finished cosmetics products.

2006:
The CTFA develops the Consumer Commitment Code, which highlights the voluntary, proactive, and responsible approach to product safety supported by cosmetics companies. The Code is intended to enhance confidence and transparency for consumers and government regulators.

2007:
The Cosmetic, Toiletry, and Fragrance Association (CTFA) changes its name to the Personal Care Products Council (PCPC). PCPC supports numerous legislative initiatives in the states of California, Massachusetts and New York, and launches Cosmeticsinfo.org to assist consumers in understanding the products they use as well as the industry’s record of safety in the formulation of those products.

The International Cooperation on Cosmetics Regulation (ICCR) is established, comprised of a voluntary, international group of cosmetics regulatory authorities from Brazil, Canada, the European Union, Japan, and the United States. This group of regulatory authorities meets on an annual basis to discuss common issues on cosmetics safety and regulation.

2009:
The European Commission (EC) issues regulation governing product claims, protecting consumers from misleading claims concerning efficacy and other characteristics of cosmetic products.

2010:
PCPC commissions a study to help quantify the important contributions the cosmetics industry makes to the economy and society. The findings illustrate the deep commitment of personal care leaders to promote and advance environmental, social, and economic benefits to its consumers.

2012:
PCPC begins working with FDA and Congressional staff on a multi-year process to develop a framework for cosmetics reform legislation that would strengthen FDA oversight and provide for national uniformity and preemption of disparate state cosmetic regulations.

2015:
Due to rising concerns about the potential environmental impacts, the cosmetics industry supports the enactment of the Microbead-Free Waters Act, which prohibits the manufacture and sale of rinse-off cosmetics (including toothpaste) that contain intentionally-added plastic microbeads.

2016:
PCPC successfully petitions FDA to issue draft guidance for lead impurities in lip products and externally applied cosmetics, providing critical regulatory certainty consistent with international policies.

PCPC issues an updated Economic and Social Contributions Report, documenting the vital role the industry plays in every state.

2017:
CIR completes the scientific safety assessments of 5,278 ingredients since the program began. Findings continue to be published in International Journal of Toxicology.

Recognizing that sunscreens are considered “drugs” and therefore banned in schools, PCPC successfully spearheads a coalition of more than 30 stakeholders in support of state legislation that allows students to have and apply sunscreen at school.


The origins of marriage

How old is the institution? The best available evidence suggests that it's about 4,350 years old. For thousands of years before that, most anthropologists believe, families consisted of loosely organized groups of as many as 30 people, with several male leaders, multiple women shared by them, and children. As hunter-gatherers settled down into agrarian civilizations, society had a need for more stable arrangements. The first recorded evidence of marriage ceremonies uniting one woman and one man dates from about 2350 B.C., in Mesopotamia. Over the next several hundred years, marriage evolved into a widespread institution embraced by the ancient Hebrews, Greeks, and Romans. But back then, marriage had little to do with love or with religion.

What was it about, then? Marriage's primary purpose was to bind women to men, and thus guarantee that a man's children were truly his biological heirs. Through marriage, a woman became a man's property. In the betrothal ceremony of ancient Greece, a father would hand over his daughter with these words: "I pledge my daughter for the purpose of producing legitimate offspring." Among the ancient Hebrews, men were free to take several wives; married Greeks and Romans were free to satisfy their sexual urges with concubines, prostitutes, and even teenage male lovers, while their wives were required to stay home and tend to the household. If wives failed to produce offspring, their husbands could give them back and marry someone else.

When did religion become involved? As the Roman Catholic Church became a powerful institution in Europe, the blessings of a priest became a necessary step for a marriage to be legally recognized. By the eighth century, marriage was widely accepted in the Catholic church as a sacrament, or a ceremony to bestow God's grace. At the Council of Trent in 1563, the sacramental nature of marriage was written into canon law.

Did this change the nature of marriage? Church blessings did improve the lot of wives. Men were taught to show greater respect for their wives, and forbidden from divorcing them. Christian doctrine declared that "the twain shall be one flesh," giving husband and wife exclusive access to each other's body. This put new pressure on men to remain sexually faithful. But the church still held that men were the head of families, with their wives deferring to their wishes.

When did love enter the picture? Later than you might think. For much of human history, couples were brought together for practical reasons, not because they fell in love. In time, of course, many marriage partners came to feel deep mutual love and devotion. But the idea of romantic love, as a motivating force for marriage, only goes as far back as the Middle Ages. Naturally, many scholars believe the concept was "invented" by the French. Its model was the knight who felt intense love for someone else's wife, as in the case of Sir Lancelot and King Arthur's wife, Queen Guinevere. Twelfth-century advice literature told men to woo the object of their desire by praising her eyes, hair, and lips. In the 13th century, Richard de Fournival, physician to the king of France, wrote "Advice on Love," in which he suggested that a woman cast flirtatious glances at her love—"anything but a frank and open entreaty."

Did love change marriage? It sure did. Marilyn Yalom, a Stanford historian and author of A History of the Wife, credits the concept of romantic love with giving women greater leverage in what had been a largely pragmatic transaction. Wives no longer existed solely to serve men. The romantic prince, in fact, sought to serve the woman he loved. Still, the notion that the husband "owned" the wife continued to hold sway for centuries. When colonists first came to America—at a time when polygamy was still accepted in most parts of the world—the husband's dominance was officially recognized under a legal doctrine called "coverture," under which the new bride's identity was absorbed into his. The bride gave up her name to symbolize the surrendering of her identity, and the husband suddenly became more important, as the official public representative of two people, not one. The rules were so strict that any American woman who married a foreigner immediately lost her citizenship.

How did this tradition change? Women won the right to vote. When that happened, in 1920, the institution of marriage began a dramatic transformation. Suddenly, each union consisted of two full citizens, although tradition dictated that the husband still ruled the home. By the late 1960s, state laws forbidding interracial marriage had been thrown out, and the last states had dropped laws against the use of birth control. By the 1970s, the law finally recognized the concept of marital rape, which up to that point was inconceivable, as the husband "owned" his wife's sexuality. "The idea that marriage is a private relationship for the fulfillment of two individuals is really very new," said historian Stephanie Coontz, author of The Way We Never Were: American Families and the Nostalgia Trap. "Within the past 40 years, marriage has changed more than in the last 5,000."


1800: Arrival of Placebo

It took another century before the emergence of another important milestone in the history of the modern clinical trial: the placebo. The word placebo first appeared in the medical literature in the early 1800s. 1 Hooper's Medical Dictionary of 1811 defined it as “an epithet given to any medicine more to please than benefit the patient.” However, it was only in 1863 that the United States physician Austin Flint planned the first clinical study comparing a dummy remedy to an active treatment. He treated 13 patients suffering from rheumatism with an herbal extract, which was administered in place of an established remedy. In 1886, Flint described the study in his book A Treatise on the Principles and Practice of Medicine: “This was given regularly, and became well known in my wards as the ‘placeboic remedy’ for rheumatism. The favorable progress of the cases was such as to secure for the remedy generally the entire confidence of the patients.”


PHARMACOKINETICS AND ADMINISTRATION

The three most common methods of administration are inhalation via smoking, inhalation via vaporization, and ingestion of edible products. The method of administration can impact the onset, intensity, and duration of psychoactive effects; effects on organ systems; and the addictive potential and negative consequences associated with use. 34

Cannabinoid pharmacokinetic research has been challenging: low analyte concentrations, rapid and extensive metabolism, and physicochemical characteristics hinder the separation of compounds of interest from biological matrices and from each other. The net effect is lower drug recovery due to adsorption of compounds of interest to multiple surfaces. 35 The primary psychoactive constituent of marijuana—Δ9-THC—is rapidly transferred from the lungs to the blood during smoking. In a randomized controlled trial conducted by Huestis and colleagues, THC was detected in plasma immediately after the first inhalation of marijuana smoke, attesting to the efficient absorption of THC from the lungs. THC levels rose rapidly and peaked prior to the end of smoking. 36 Although smoking is the most common cannabis administration route, the use of vaporization is increasing rapidly. Vaporization provides effects similar to smoking while reducing exposure to the byproducts of combustion and possible carcinogens and decreasing adverse respiratory symptoms. THC is highly lipophilic, distributing rapidly to highly perfused tissues and later to fat. 37 A trial of 11 healthy subjects administered Δ9-THC intravenously, by smoking, and by mouth demonstrated that plasma profiles of THC after smoking and intravenous injection were similar, whereas plasma levels after oral doses were low and irregular, indicating slow and erratic absorption. The time courses of plasma concentrations and clinical “high” were of the same order for intravenous injection and smoking, with prompt onset and steady decline over a four-hour period. After oral THC, the onset of clinical effects was slower and the effects lasted longer, but they occurred at much lower plasma concentrations than after the other two methods of administration. 38

Cannabinoids are usually inhaled or taken orally; the rectal route, sublingual administration, transdermal delivery, eye drops, and aerosols have been used in only a few studies and are of little relevance in practice today. The pharmacokinetics of THC vary as a function of its route of administration. Inhalation of THC causes a maximum plasma concentration within minutes and psychotropic effects within seconds to a few minutes. These effects reach their maximum after 15 to 30 minutes and taper off within two to three hours. Following oral ingestion, psychotropic effects manifest within 30 to 90 minutes, reach their maximum after two to three hours, and last for about four to 12 hours, depending on the dose. 39

Within the shifting legal landscape of medical cannabis, different methods of cannabis administration have important public health implications. A survey using data from Qualtrics and Facebook showed that individuals in states with medical cannabis laws were significantly more likely to report a history of vaporizing marijuana (odds ratio [OR], 2.04; 99% confidence interval [CI], 1.62–2.58) and a history of oral administration of edible marijuana (OR, 1.78; 99% CI, 1.39–2.26) than those in states without such laws. Longer duration of medical cannabis status and higher dispensary density were also significantly associated with use of vaporized and edible forms of marijuana. Medical cannabis laws are related to state-level patterns of utilization of alternative methods of cannabis administration. 34
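For readers unfamiliar with the statistics reported above, an odds ratio and its Wald confidence interval can be computed directly from a 2×2 exposure/outcome table. A minimal sketch in Python; the function name and the example counts are illustrative and not taken from the survey:

```python
import math

def odds_ratio_ci(a, b, c, d, z=2.576):
    """Odds ratio for a 2x2 table with a Wald confidence interval.

    a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases.
    z=2.576 corresponds to a 99% CI, as reported in the survey above.
    """
    or_ = (a * d) / (b * c)                       # cross-product ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, just to show the mechanics:
or_, lo, hi = odds_ratio_ci(120, 380, 60, 440)
print(f"OR = {or_:.2f}, 99% CI ({lo:.2f}, {hi:.2f})")
```

An OR above 1 with a CI that excludes 1 (as in both results quoted above) indicates a statistically significant association at the chosen confidence level.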


All Timelines Overview

The story of vaccines did not begin with the first vaccine–Edward Jenner’s use of material from cowpox pustules to provide protection against smallpox. Rather, it begins with the long history of infectious disease in humans, and in particular, with early uses of smallpox material to provide immunity to that disease.

Evidence exists that the Chinese employed smallpox inoculation (or variolation, as such use of smallpox material was called) as early as 1000 CE. It was practiced in Africa and Turkey as well, before it spread to Europe and the Americas.

Edward Jenner’s innovations, begun with his successful 1796 use of cowpox material to create immunity to smallpox, quickly made the practice widespread. His method underwent medical and technological changes over the next 200 years, and eventually resulted in the eradication of smallpox.

Louis Pasteur’s 1885 rabies vaccine was the next to make an impact on human disease. And then, at the dawn of bacteriology, developments rapidly followed. Antitoxins and vaccines against diphtheria, tetanus, anthrax, cholera, plague, typhoid, tuberculosis, and more were developed through the 1930s.

The middle of the 20th century was an active time for vaccine research and development. Methods for growing viruses in the laboratory led to rapid discoveries and innovations, including the creation of vaccines for polio. Researchers targeted other common childhood diseases such as measles, mumps, and rubella, and vaccines for these diseases reduced the disease burden greatly.

Innovative techniques now drive vaccine research, with recombinant DNA technology and new delivery techniques leading scientists in new directions. Disease targets have expanded, and some vaccine research is beginning to focus on non-infectious conditions such as addiction and allergies.

Beyond the science behind vaccines, these timelines cover cultural aspects of vaccination as well, from the early harassment of smallpox variolators (see the intimidation of a prominent minister described in the 1721 Boston Smallpox Epidemic entry), to the establishment of vaccination mandates, to the effect of war and social unrest on vaccine-preventable diseases. Edward Jenner, Louis Pasteur, and Maurice Hilleman, pioneers in vaccine development, receive particular attention as well.

This timeline category holds nearly all of the entries for the subject-specific timelines. A few of the entries have been left out in order to provide a broad overview.

HIGHLIGHTS

Thomas Peebles collected blood from sick students at a private school outside of Boston in an attempt to isolate the measles virus. Eventually he succeeded, and the isolated virus was used to create a series of vaccines.

In 1905, Swedish physician Ivar Wickman suggested that polio was a contagious disease that could be spread from person to person.

The first vaccine created in a laboratory was Louis Pasteur’s 1879 vaccine for chicken cholera.


History Of The Federal Use Of Eminent Domain

The federal government’s power of eminent domain has long been used in the United States to acquire property for public use. Eminent domain “appertains to every independent government. It requires no constitutional recognition; it is an attribute of sovereignty.” Boom Co. v. Patterson, 98 U.S. 403, 406 (1879). However, the Fifth Amendment to the U.S. Constitution stipulates: “nor shall private property be taken for public use, without just compensation.” Thus, whenever the United States acquires a property through eminent domain, it has a constitutional responsibility to justly compensate the property owner for the fair market value of the property. See Bauman v. Ross, 167 U.S. 548 (1897); Kirby Forest Industries, Inc. v. United States, 467 U.S. 1, 9-10 (1984).

The U.S. Supreme Court first examined federal eminent domain power in 1876 in Kohl v. United States. This case presented a landowner’s challenge to the power of the United States to condemn land in Cincinnati, Ohio for use as a custom house and post office building. Justice William Strong called the authority of the federal government to appropriate property for public uses “essential to its independent existence and perpetuity.” Kohl v. United States, 91 U.S. 367, 371 (1875).

The Supreme Court again acknowledged the existence of condemnation authority twenty years later in United States v. Gettysburg Electric Railroad Company. Congress wanted to acquire land to preserve the site of the Gettysburg Battlefield in Pennsylvania. The railroad company that owned some of the property in question contested this action. Ultimately, the Court opined that the federal government has the power to condemn property “whenever it is necessary or appropriate to use the land in the execution of any of the powers granted to it by the constitution.” United States v. Gettysburg Electric Ry., 160 U.S. 668, 679 (1896).

Condemnation: From Transportation to Parks

Eminent domain has been utilized traditionally to facilitate transportation, supply water, construct public buildings, and aid in defense readiness. Early federal cases condemned property for construction of public buildings (e.g., Kohl v. United States) and aqueducts to provide cities with drinking water (e.g., United States v. Great Falls Manufacturing Company, 112 U.S. 645 (1884), supplying water to Washington, D.C.), for maintenance of navigable waters (e.g., United States v. Chandler-Dunbar Co., 229 U.S. 53 (1913), acquiring land north of St. Mary’s Falls canal in Michigan), and for the production of war materials (e.g., Sharp v. United States, 191 U.S. 341 (1903)). The Land Acquisition Section and its earlier iterations represented the United States in these cases, thereby playing a central role in early United States infrastructure projects.

Condemnation cases like that against the Gettysburg Railroad Company exemplify another use for eminent domain: establishing parks and setting aside open space for future generations, preserving places of historic interest and remarkable natural beauty, and protecting environmentally sensitive areas. Some of the earliest federal government acquisitions for parkland were made at the end of the nineteenth century and remain among the most beloved and well-used of American parks. In Washington, D.C., Congress authorized the creation of a park along Rock Creek in 1890 for the enjoyment of the capital city’s residents and visitors. The Department of Justice became involved when a number of landowners from whom property was to be acquired disputed the constitutionality of the condemnation. In Shoemaker v. United States, 147 U.S. 282 (1893), the Supreme Court affirmed the actions of Congress.

Today, Rock Creek National Park, over a century old and more than twice the size of New York City’s Central Park, remains a unique wilderness in the midst of an urban environment. This is merely one small example of the many federal parks, preserves, historic sites, and monuments to which the work of the Land Acquisition Section has contributed.

Land Acquisition in the Twentieth Century and Beyond

The work of federal eminent domain attorneys correlates with the major events and undertakings of the United States throughout the twentieth century. The needs of a growing population for more and updated modes of transportation triggered many additional acquisitions in the early decades of the century, for constructing railroads or maintaining navigable waters. Albert Hanson Lumber Company v. United States, 261 U.S. 581 (1923), for instance, allowed the United States to take and improve a canal in Louisiana.

The 1930s brought a flurry of land acquisition cases in support of New Deal policies that aimed to resettle impoverished farmers, build large-scale irrigation projects, and establish new national parks. Condemnation was used to acquire lands for the Shenandoah, Mammoth Cave, and Great Smoky Mountains National Parks. See Morton Butler Timber Co. v. United States, 91 F.2d 884 (6th Cir. 1937). Thousands of smaller land and natural resources projects were undertaken by Congress and facilitated by the Division’s land acquisition lawyers during the New Deal era. For example, condemnation in United States v. Eighty Acres of Land in Williamson County, 26 F. Supp. 315 (E.D. Ill. 1939), acquired forestland around a stream in Illinois to prevent erosion and silting, while Barnidge v. United States, 101 F.2d 295 (8th Cir. 1939), allowed property acquisition for and designation of a historic site in St. Louis associated with the Louisiana Purchase and the Oregon Trail.

During World War II, the Assistant Attorney General called the Lands Division “the biggest real estate office of any time or any place.” It oversaw the acquisition of more than 20 million acres of land. Property was transformed into airports and naval stations (e.g., Cameron Development Company v. United States, 145 F.2d 209 (5th Cir. 1944)), war materials manufacturing and storage (e.g., General Motors Corporation v. United States, 140 F.2d 873 (7th Cir. 1944)), proving grounds, and a number of other national defense installations.

Land Acquisition Section attorneys aided in the establishment of Big Cypress National Preserve in Florida and the enlargement of the Redwood National Forest in California in the 1970s and 1980s. They facilitated infrastructure projects including new federal courthouses throughout the United States and the Washington, D.C. subway system, as well as the expansion of facilities including NASA’s Cape Canaveral launch facility (e.g., Gwathmey v. United States, 215 F.2d 148 (5th Cir. 1954)).

The number of land acquisition cases active today on behalf of the federal government is below the World War II volume, but the projects undertaken remain integral to national interests. In the past decade, Section attorneys have been actively involved in conservation work, assisting in the expansion of Everglades National Park in Florida (e.g., U.S. v. 480.00 Acres of Land, 557 F.3d 1297 (11th Cir. 2009)) and the creation of Valles Caldera National Preserve in New Mexico. In the aftermath of the September 11, 2001 terrorist attacks, Land Acquisition Section attorneys secured space in New York for federal agencies whose offices were lost with the World Trade Towers. Today, Section projects include acquiring land along hundreds of miles of the United States-Mexico border to stem illegal drug trafficking and smuggling, allow for better inspection and customs facilities, and forestall terrorists.

Properties acquired over the hundred years since the creation of the Environment and Natural Resources Section are found all across the United States and touch the daily lives of Americans by housing government services, facilitating transportation infrastructure and national defense and national security installations, and providing recreational opportunities and environmental management areas.

For information on the history of the Land Acquisition Section, click here. To learn more about the range of projects undertaken by the Land Acquisition Section, click here to view the interactive map titled Where Our Cases Have Taken Us. And for more on the procedural aspects of eminent domain, click here to read about the Anatomy of a Condemnation Case.


The Horrors of War

Around 1930, Henri Cartier-Bresson and other photographers began to use small 35mm cameras to capture images of life as it occurred rather than staged portraits. When World War II started in 1939, many photojournalists adopted this style.

The posed portraits of World War I soldiers gave way to graphic images of war and its aftermath. Images such as Joe Rosenthal's photograph, Raising the Flag on Iwo Jima, brought the reality of war home and helped galvanize the American people like never before. This style of capturing decisive moments shaped the face of photography forever.


Single-Action Pedal Harps (1770 AD)

Around 1720, a less cumbersome way to get some chromatic notes from a single-strung harp tuned diatonically was introduced. Five pedals (eventually seven) were housed in the bottom of the soundbox. When depressed, they connected to hooks that would sharpen all the strings of the same note via linkages that passed through the column. The hooks were quickly improved to crochets, which were right-angled rather than U-shaped hooks, and then to béquilles, sets of two small levers through which each string passed; when a pedal was depressed, one lever would turn clockwise and the other counter-clockwise, providing a firmer grip. While a better system, béquilles were prone to breakage and produced a buzzing noise.

Near the end of the 18th century, the single-action pedal harp was greatly improved. A model was introduced that had a soundbox built with a separate pine soundboard and a body reinforced with internal ribs. Brass action plates were attached to the outside of the harp neck, rather than the inside, strengthening the linkage system. The most important improvement was the disc system. Two brass prongs (or forks) extended from a disc that each string passed through before attaching to the tuning peg. When the corresponding pedal was depressed, the disc turned, gripping the string firmly against the prongs and sharpening it by a semitone.
