
Learn Science

  • Hydroxylation Essay Fungal Mechanism Steroids Microbiological

    Hydroxylation Essay Fungal Mechanism Steroids Microbiological

    This essay discusses hydroxylation in microbiology, covering fungal systems, the reaction mechanism, and steroids. The hydroxylation of a compound is a very important metabolic process; in humans it is catalyzed by cytochrome P450 enzymes and yields products of higher polarity than the parent compound, which aids their excretion from the body. Hydroxylation involves the conversion of a carbon-hydrogen bond to a carbon-hydroxyl bond, and when catalyzed by a hydroxylase enzyme the reaction is more regio- and stereospecific than the conventional chemical process. As a result, microbial hydroxylation is preferred for the synthesis of hydroxysteroids.

    Here is the article explaining what hydroxylation is, together with fungal systems, the mechanism, and steroids in microbiology.

    Fungal hydroxylation of steroids continues to be the focus of attention at different levels of research and product development. Despite its popularity, this process is not fully understood, because few studies have been conducted on the hydroxylase enzyme owing to the difficulty of isolating it. However, most studies have shown that the cytochrome P450 enzyme is also responsible for steroid hydroxylation in filamentous fungi. Cytochrome P450 (CYP450) is an iron-haem system that carries out a wide range of biocatalytic transformations. These enzymes are also known as monooxygenases because they transfer one atom of molecular oxygen to an organic substrate.

    Mechanism:

    The catalytic mechanism of this reaction involves the binding of the substrate to the active site of the enzyme, followed by the displacement of a water molecule. This is followed by reduction of the iron in the CYP450-haem complex to its ferrous state (iron(II)) by an electron transfer. The ferrous iron then binds molecular oxygen, leading to a ferric-hydroperoxo (iron(III)-OOH) species. This species then loses a hydroxyl anion to form an iron(IV)-oxygen radical. This radical may abstract a hydrogen atom from the substrate to generate a carbon radical and an iron(IV)-hydroxyl species. The carbon radical then accepts a hydroxyl radical from the iron(IV)-hydroxyl species to form the hydroxylated product and iron(III). A simple general reaction equation for this process is summarized below (where R represents the substrate and NADPH is the electron-transferring species):

    RH + NADPH + H⁺ + O₂ → ROH + NADP⁺ + H₂O

    In order to fully understand the mechanism of fungal hydroxylation of steroids, the relationship between the structure of the CYP450 hydroxylase enzyme and its regio- and stereoselective character has to be defined. However, as mentioned earlier, not many studies have been carried out on the structural features of this enzyme, and so ‘active site models’ were developed to explain the regio- and stereoselective outcome of microbial hydroxylation reactions.

    Model:

    The first model, postulated by Brannon et al., suggests that a steroidal substrate can be bound by a single steroid hydroxylase in more than one orientation because of two-site binding, which could result in hydroxylation taking place at more than one position, given the appropriate geometrical relationship between the active site of the enzyme and the carbon atom of the substrate undergoing the reaction. These four orientations are termed normal, reverse, inverted, and reverse inverted, and they have been observed in the metabolic handling of 3β-hydroxy-17a-OXA-D-homo-5α-androstane-17-one by a filamentous fungus, Aspergillus tamari.

    The other model, Jones’ model, takes into account only the normal and reverse binding orientations. It requires the existence of three active centers on the steroid hydroxylase enzyme. These active centers have dual roles and could act either as a binding site or as a hydroxylating site. However, these roles are mutually exclusive, so hydroxylation occurs at the nuclear center closest to the steroid. The enzyme-substrate interaction proposed by Jones therefore suggests a triangular arrangement with an approximate spatial correspondence to the C-3, C-11, and C-16 atoms of the steroid nucleus.

    Reactions:

    This model could not explain the hydroxylation reactions of some microorganisms. Therefore another model was developed by McCrindle et al., drawing on both models above and taking into account the three-dimensional nature of the steroid compound and the hydroxylase enzyme. In this model, the steroid ring acts as a planar reference point. Binding site A favors oxygen atoms below the plane of the ring, and hydroxylation is alpha. Binding site B is similar to A but can also hydroxylate alpha (axial or equatorial) or beta (equatorial) positions. Binding site C, in contrast, binds preferentially to oxygen atoms above the plane of the steroid ring and hydroxylates with a beta orientation. Overall, this model tends to fit the hydroxylation pattern of most microorganisms.

    Hydroxylation outcome:

    The hydroxylation outcome of some steroids can be predicted from the oxygen functions, or ‘directing groups’, on the steroid skeleton. As a rule of thumb, mono-oxygenated substrates are dihydroxylated and their transformation products are often obtained in low yields. This is because the presence of only one oxygen function on the steroid compound makes it less polar, thus decreasing its solubility and hindering its permeation into the microbial cell. In addition, the presence of only one oxygen function allows the steroid to bind to the enzyme at only one center, thereby increasing its rotation and oscillation about the active site, which makes it more likely to be hydroxylated at more than one position.

    Di-oxygenated substrates, in contrast, are mono-hydroxylated, because the presence of two oxygen functions reduces the chance of multiple hydroxylations by reducing the possible number of binding orientations. Furthermore, the presence of two binding oxygen groups increases the rate of microbiological transformation, as the increased substrate polarity improves solubility and thus permeation across the cell membrane of the microorganism. A wide variety of organisms have shown this pattern of hydroxylation with a wide range of substrates.

    Pharmacological activities of hydroxylated steroids:

    Hydroxylated steroids possess useful pharmacological activities; for example, C-11 hydroxylation is regarded as essential for anti-inflammatory action, and 16α-hydroxylated steroids have increased glucocorticoid activity. Hence the steroid industry exploits 11α-, 11β-, 15α-, and 16α-hydroxylation mainly for the production of adrenal cortex hormones and their analogs. A range of microorganisms is used to carry out these hydroxylations. For example, 11α-hydroxylation is performed using Rhizopus sp. or Aspergillus sp., while Curvularia sp. or Cunninghamella sp. and Streptomyces sp. generate 11β- and 16α-hydroxylations respectively. Further research has shown other hydroxylations (e.g. 7α-, 9α-, and 14α-hydroxylations) to have potential for industrial exploitation.


    Reference: Microbiological Transformation of Steroids. Retrieved from https://www.ukessays.com/essays/sciences/microbiological-transformation-of-steroids.php?vref=1

  • Difference between Positive and Normative Economics

    Difference between Positive and Normative Economics

    Positive and Normative Economics: Economics is often divided into two major branches, positive and normative. Positive economics explains how the world works; it is concerned with what is, rather than with what ought to be. Normative economics is concerned with what ought to be rather than what is, and it proposes solutions to society’s economic problems. That there is unemployment in India is a problem of positive economics; what measures can be adopted to solve the problem is a question for normative economics. Normative economics is also known as welfare economics.

    How do we explain the difference between Positive and Normative Economics?

    The distinction between positive economics and normative economics may seem simple, but it is not always easy to differentiate between the two. Positive economics is objective and fact-based, while normative economics is subjective and value-based. Positive economic statements must be testable and capable of being proved or disproved. Normative economic statements are opinion-based, so they cannot be proved or disproved. Many widely accepted statements that people hold as fact are value-based.

    For example, the statement, “government should provide basic healthcare to all citizens” is a normative economic statement. There is no way to prove whether the government “should” provide healthcare; this statement is based on opinions about the role of government in individuals’ lives, the importance of healthcare, and who should pay for it.

    The statement, “government-provided healthcare increases public expenditures,” is a positive economic statement, as it can be proved or disproved by examining healthcare spending data in countries like Canada and Britain, where the government provides healthcare.

    Disagreements over public policies typically revolve around normative economic statements, and the disagreements persist because neither side can prove that it is correct or that its opponent is incorrect. A clear understanding of the difference between positive and normative economics should lead to better policy-making if policies are made based on facts (positive economics), not opinions (normative economics). Nonetheless, numerous policies on issues ranging from international trade to welfare are at least partially based on normative economics.

    Positive Science or Normative Science!

    Positive science is the science that establishes the relationship between cause and effect. In other words, it scientifically analyses a problem and examines its causes. For example, if prices have gone up, why have they gone up?

    In short, problems are examined on the basis of facts. Normative science, on the other hand, relates to the normative aspects of a problem, i.e., what ought to be. Under normative science, conclusions and results are not based on facts; rather, they are based on different considerations such as social, cultural, political, and religious ones, and so on. It is subjective, an expression of opinions.

    In short, positive science is concerned with “how and why” and normative science with “what ought to be.” The distinction between the two can be explained with the help of an example: an increase in the rate of interest. Positive science would look into why the interest rate has gone up and how it can be reduced, whereas normative science would consider whether this increase is good or bad. Some statements about positive and normative science are given below:

    Positive Science:

    The following points characterize positive science:

    • The main cause of price rise in India is the increase in the money supply.
    • It is based on a set of collected facts.
    • It studies prices and inequalities of income levels in an economy.
    • Production of food grains in India has increased mainly because of an increase in irrigation facilities and the consumption of chemical fertilizers.
    • The rate of population growth has been very high, partly because of the high birth rate and partly because of the decline in the death rate.
    • It studies what is, or how an economic problem is actually solved.
    • It can be verified with actual data.
    • It aims to provide a factual description of economic activity.

    Normative Science:

    The following points characterize normative science:

    • Inflation is better than deflation.
    • It is based on the opinions of individuals.
    • The government should generate more employment opportunities.
    • More production of luxury goods is not good for a poor country like India.
    • Inequalities in the distribution of wealth and income should be reduced.
    • It studies what ought to be, or how an economic problem should be solved.
    • It cannot be verified with actual data.
    • It aims to determine principles.


    Positive and Normative Economic Statements:

    The following statements illustrate each type:

    Positive statements: Positive statements are objective statements that can be tested, amended, or rejected by referring to the available evidence. Positive economics deals with objective explanation and the testing and rejection of theories. For example: a fall in incomes will lead to a rise in demand for own-label supermarket foods; and, if the government raises the tax on beer, this will lead to a fall in the profits of the brewers.

    Normative statements: A value judgment is a subjective statement of opinion rather than a fact that can be tested by looking at the available evidence. Normative statements are subjective statements, i.e. they carry value judgments. For example: pollution is the most serious economic problem; unemployment is more harmful than inflation; and the government is right to introduce a ban on smoking in public places.

  • Treatment of 10 Yoga Poses Better Help Your Back Pain

    Treatment of 10 Yoga Poses Better Help Your Back Pain


    What is Yoga? Derived from the Sanskrit word yuj, Yoga means union of the individual consciousness or soul with the Universal Consciousness or Spirit. Yoga is a 5000-year-old Indian body of knowledge. Though many think of yoga only as a physical exercise where people twist, turn, stretch, and breathe in the most complex ways, these are actually only the most superficial aspects of this profound science of unfolding the infinite potentials of the human mind and soul. The science of Yoga imbibes the complete essence of the Way of Life. Yoga (/ˈjoʊɡə/; Sanskrit, योग) is a group of physical, mental, and spiritual practices or disciplines which originated in ancient India. There is a broad variety of yoga schools, practices, and goals in Hinduism, Buddhism, and Jainism. Among the most well-known types of yoga are Hatha yoga and Rāja yoga.

    Yoga class

    For most of our lives, we take our backs for granted. But at some point in just about everyone’s life, our backs revolt and remind us that they need love and attention too. Thankfully, for many of us, the pain is only temporary. For others, though, it can be much more debilitating and much more frustrating.

    In severe cases, medical attention may be necessary, but if your pain is less severe, yoga may be able to help by strengthening the back, stretching it, and improving circulation to the spine and nerves. Here are some yoga postures for back pain.

    1. Yoga Poses Cat/Cow Back


    Starting in tabletop position on your hands and knees, alternate between arching your back and rounding it as you push down on the floor with your hands and tops of your feet. These postures help to massage the spine, while also stretching the back and the torso. These postures are a great way to keep the back limber and happy.

    2. Yoga Poses Spinal Twist


    You have many options when it comes to twisting postures. One of the basic and effective ones is Marichyasana C. Keep your left leg straight and bend your right leg so your foot is flat. Place your right hand on the floor behind you for support, like a tripod, and twist so you can hook your left elbow over the right thigh.

    If this is too much, you can also grab hold of your right knee and twist to look over your right shoulder. Other options are to bend the left leg under you or bend both legs and let them fall to the side then twist in whichever way your knees are facing.

    3. Yoga Poses Downward Dog


    There’s a reason Down Dog is one of the most iconic postures in yoga. It can rejuvenate your entire body. Start in tabletop and raise your hips so your body is in an upside down V position. Relax your head and neck and draw your inner thighs toward the back of the room. Spreading your shoulder blades apart will stretch your upper back even more, and reaching your hips up and back will help to open your lower back. Breathe for five to seven breaths.

    4. Yoga Poses Plow Pose


    From Shoulder Stand, bend at your hips to bring your toes or the tops of your feet to the floor. Your hands can remain against your back for support, or you can clasp them together, keeping your forearms on the floor. Hold this as long as is comfortable to get a powerful stretch in your shoulders and spine. If this is too much, you can place a chair behind you and rest your feet on it.

    5. Yoga Poses Seated Forward Fold


    It’s easy to do a Seated Forward Fold in a way that won’t benefit you, but doing it right can open the lower back and offer relief from stiffness and pain. From a seated position with your legs extended forward, reach for your shins, ankles, or feet, bending at the hips.

    Instead of rounding your back, continue to reach your sternum forward, lengthening the torso. If this hurts your back, bend your knees as needed.

    6. Yoga Poses Child’s Pose


    Not only is Child’s Pose an amazing way to relax, it can also stretch your entire back and your hips. Start on all fours, keep your arms forward and sit back so your butt is resting just above your heels. Hold and breathe deeply, feeling the breath reach all the way into your hips. The more you extend in either direction, the more you’ll feel relief.

    7. Yoga Poses Eagle Pose


    This more advanced posture requires balance and strength, but it can help to stretch and open your entire back. From Mountain Pose, with your knees slightly bent, lift your right leg and reach your right thigh over your left. Point your foot toward the floor, and either stop here and balance with your toes on the floor, or hook your right foot behind your left calf.

    For the arms, bring the right arm under the left and, with elbows bent, bring your palms together. You’ll get a powerful stretch by drawing your elbows up and hands away from your face.

    8. Yoga Poses Locust Pose


    Locust is a great way to strengthen your back and buttocks. Lie on your stomach with your arms beside you, palms up, and your forehead flat on the floor. Slowly lift your head, torso, arms, and legs away from the floor. As you do this, your thighs should be rotated in slightly, and you want to feel your body elongate from head to toe. Hold this for 30 seconds to a minute. If you’re up for it, relax and repeat two to three times.

    9. Yoga Poses Bow Pose


    Lying face down, reach your hands toward your ankles and grab hold one at a time. Slowly lift your chest and thighs away from the floor by drawing your chest forward and the back of your thighs toward the sky. This posture is a wonderful way to strengthen the back muscles, but if you have a back injury, take this easy as it can be intense.

    10. Yoga Poses Triangle Pose


    Back pain can be helped, and in some cases prevented, with stretching and strengthening and Triangle Pose can do both.

    Stand with your feet about three feet apart and parallel to each other. Rotate your right foot so the right heel is in line with the arch of the left foot. With your arms extended to the side, tilt at the hip to reach your right hand toward the ground, on either side of your foot. Rotate your body to the side and reach the fingers of your left hand toward the sky.

    Gaze at your left hand (as long as it doesn’t hurt your neck!) and hold for five to seven breaths before switching sides.

    When it comes to back pain, prevention is key to a long and pain-free life, but listening to your body is also extremely important. Don’t force any posture that could cause injury. If your pain is extreme, you may want to seek medical attention.


  • What is Pollution and Types of Environmental Pollution?

    What is Pollution and Types of Environmental Pollution?

    Learn about the different types of environmental pollution and their impact on the natural environment. Find out how pollution can cause adverse changes and harm ecosystems.

    Pollution is the introduction of contaminants into the natural environment that causes adverse change. Pollution can take the form of chemical substances or energy, such as noise, heat, or light. Pollutants, the components of pollution, can be either foreign substances/energies or naturally occurring contaminants. Pollution is often classed as point-source or nonpoint-source pollution.

    The meaning of Pollution: “The presence in or introduction into the environment of a substance which has harmful or poisonous effects.”

    History of Pollution:

    Air pollution has always accompanied civilizations. Pollution started in prehistoric times, when humans created the first fires. According to a 1983 article in the journal Science, “soot found on ceilings of prehistoric caves provides ample evidence of the high levels of pollution that was associated with inadequate ventilation of open fires.” Metal forging appears to be a key turning point in creating significant air pollution levels outside the home. Core samples of glaciers in Greenland indicate increases in pollution associated with Greek, Roman, and Chinese metal production. Still, at that time the pollution was comparatively small and could be handled by nature.

    What is Environmental Pollution?

    Pollution, also called environmental pollution, is the addition of any substance (solid, liquid, or gas) or any form of energy (such as heat, sound, or radioactivity) to the environment at a rate faster than it can be dispersed, diluted, decomposed, recycled, or stored in some harmless form. The major kinds of pollution, classified by environment, are air pollution, water pollution, and land pollution. Modern society is also concerned about specific types of pollutants, such as noise pollution, light pollution, and even plastic pollution.

    Although environmental pollution can be caused by natural events such as forest fires and active volcanoes, the use of the word pollution generally implies that the contaminants have an anthropogenic source, that is, a source created by human activities. Pollution has accompanied humankind ever since groups of people first congregated and remained for a long time in any one place. Indeed, ancient human settlements are frequently recognized by their pollutants, such as shell mounds and rubble heaps. Pollution was not a serious problem as long as there was enough space available for each individual or group. However, with the establishment of permanent settlements by great numbers of people, pollution became a problem, and it has remained one ever since.

    Cities of ancient times were often noxious places, fouled by human wastes and debris. Beginning about 1000 CE, the use of coal for fuel caused considerable air pollution, and the conversion of coal to coke for iron smelting beginning in the 17th century exacerbated the problem. In Europe, from the Middle Ages well into the early modern era, unsanitary urban conditions favored the outbreak of population-decimating epidemics of disease, from plague to cholera and typhoid fever. Through the 19th century, water and air pollution and the accumulation of solid wastes were largely problems of congested urban areas. But with the rapid spread of industrialization and the growth of the human population to unprecedented levels, pollution became a universal problem.

    By the middle of the 20th century, an awareness of the need to protect air, water, and land environments from pollution had developed among the general public. In particular, the publication in 1962 of Rachel Carson’s book Silent Spring focused attention on the environmental damage caused by the improper use of pesticides such as DDT and other persistent chemicals that accumulate in the food chain and disrupt the natural balance of ecosystems on a wide scale.

    The presence of environmental pollution raises the issue of pollution control. Great efforts are made to limit the release of harmful substances into the environment through air pollution control, wastewater treatment, solid-waste management, hazardous-waste management, and recycling.

    Types of Environmental Pollution

    The major types of environmental pollution are listed below, along with the particular contaminants relevant to each of them:

    Air pollution: the release of chemicals and particulates into the atmosphere. Common gaseous pollutants include carbon monoxide, sulfur dioxide, chlorofluorocarbons (CFCs), and nitrogen oxides produced by industry and motor vehicles. Photochemical ozone and smog are created as nitrogen oxides and hydrocarbons react with sunlight. Particulate matter, or fine dust, is characterized by its micrometer size, from PM10 to PM2.5.

    Light pollution: includes light trespass, over-illumination, and astronomical interference.

    Littering: the criminal throwing of inappropriate synthetic objects, unremoved, onto public and private properties.

    Noise pollution: encompasses roadway noise, aircraft noise, and industrial noise, as well as high-intensity sonar.

    Soil contamination: occurs when chemicals are released by spills or underground leakage. Among the most significant soil contaminants are hydrocarbons, heavy metals, MTBE, herbicides, pesticides, and chlorinated hydrocarbons.

    Radioactive contamination: results from 20th-century activities in atomic physics, such as nuclear power generation and nuclear weapons research, manufacture, and deployment. (See alpha emitters and actinides in the environment.)

    Thermal pollution: a temperature change in natural water bodies caused by human influence, such as the use of water as a coolant in a power plant.

    Visual pollution: can refer to the presence of overhead power lines, motorway billboards, scarred landforms (as from strip mining), open storage of trash, municipal solid waste, or space debris.

    Water pollution: the discharge of wastewater from commercial and industrial waste (intentionally or through spills) into surface waters; discharges of untreated domestic sewage and chemical contaminants, such as chlorine, from treated sewage; releases of waste and contaminants into surface runoff flowing to surface waters (including urban runoff and agricultural runoff, which may contain chemical fertilizers and pesticides); waste disposal and leaching into groundwater; eutrophication; and littering.

    Plastic pollution: involves the accumulation of plastic products in the environment that adversely affects wildlife, wildlife habitat, or humans.

  • What are the Characteristics of the Troposphere?

    What are the Characteristics of the Troposphere?

    The characteristics of the Troposphere: The atmosphere has a multi-layered structure consisting of the following basic layers: troposphere, stratosphere, mesosphere, ionosphere, and exosphere. The word troposphere derives from the Greek tropos, meaning turning or mixing. The troposphere is the lowermost layer of the atmosphere and the most important one for life, because almost all weather events (e.g. fog, cloud, dew, frost, hailstorms, storms, thunder, and lightning) occur in this lowest layer. Thus the troposphere is of the utmost significance for all life forms, including humans, because these are concentrated in the lowermost portion of the atmosphere.

    Here we explain the characteristics of the troposphere. Read and learn.

    Temperature decreases with increasing height at an average rate of 6.5°C per 1000 m (1 kilometer), which is called the normal lapse rate. The height of the troposphere changes from the equator towards the poles (it decreases) and from one season of the year to another (it increases during summer and decreases during winter). The average height of the troposphere is about 16 km over the equator and 6 km over the poles. The upper limit of the troposphere is called the tropopause.
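
    As a rough worked example of the normal lapse rate just described, the short Python sketch below estimates the temperature at a few altitudes; the 6.5°C per 1000 m figure comes from the text, while the surface temperature and altitudes are arbitrary illustrative values.

    ```python
    # A rough illustration of the normal lapse rate described above:
    # temperature falls by about 6.5 degC for every kilometre of height gained.
    # The surface temperature and altitudes below are arbitrary example values.

    NORMAL_LAPSE_RATE_C_PER_KM = 6.5  # 6.5 degC per 1000 m, from the text

    def temperature_at_altitude(surface_temp_c: float, altitude_km: float) -> float:
        """Approximate air temperature (degC) at a given altitude in the troposphere."""
        return surface_temp_c - NORMAL_LAPSE_RATE_C_PER_KM * altitude_km

    if __name__ == "__main__":
        surface_temp_c = 30.0  # e.g. a warm day near the equator
        for altitude_km in (0, 5, 10, 16):
            temp = temperature_at_altitude(surface_temp_c, altitude_km)
            print(f"{altitude_km:2d} km: about {temp:6.1f} degC")
    ```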

    What is the Importance of the Troposphere?

    The troposphere provides several important benefits: it holds nearly all of the water vapor in the Earth’s atmosphere, regulates temperature, and produces weather. The troposphere forms the lowest level of the Earth’s atmosphere, extending down to the surface of the Earth. This lowest layer is also the heaviest of all the atmospheric layers, comprising approximately 75 percent of the total atmospheric weight.

    The troposphere varies in thickness and height around the world. At its highest point, the troposphere extends 12 miles into the air. At its lowest point, this layer reaches 4 miles above sea level. Regardless of height, the troposphere facilitates temperature regulation and cloud formation. Temperatures are highest close to its base; these warm temperatures help the troposphere retain water vapor, which is released in the form of precipitation.

    The troposphere also serves as the starting point for the Earth’s water cycle. This process begins when the sun pulls water into the atmosphere through evaporation. Water then cools and condenses, forming clouds. Clouds store water particles, which are released in the form of rain, sleet, or snow depending on the time of year and region. The troposphere also traps gases, such as carbon dioxide and nitrogen. Excess accumulation of these substances creates environmental problems, such as smog and air pollution.

    Characteristics of the Troposphere:

    The main characteristics are as follows:

    • Most weather phenomena take place in this lowest layer. The troposphere contains almost all the water vapor and most of the dust.
    • This layer is subject to intense mixing, both horizontal and vertical.
    • Temperature decreases with height at an average rate of 1°C per 167 m of height above sea level. This is called the normal lapse rate.
    • The troposphere extends up to a height of about 18 km at the equator and declines gradually to a height of 8 km at the poles.
    • The upper limit of the troposphere is called the tropopause. The temperature stops decreasing there; it may be as low as -58°C.

    All weather changes occur in the troposphere. Since it contains most of the water vapor, clouds form in this layer.

    Frequently Asked Questions (FAQs)

    What is the troposphere?

    The troposphere is the lowest layer of Earth’s atmosphere, where almost all weather events occur, including clouds, rain, and storms. It extends from the Earth’s surface to an average height of about 16 km over the equator and about 6 km over the poles.

    Why is the troposphere important?

    The troposphere is vital for life on Earth, as it holds nearly all the water vapor in the atmosphere, regulates temperature, and is the primary site for weather formation. It also plays a crucial role in the water cycle.

    How does temperature change in the troposphere?

    In the troposphere, temperature decreases with increasing height at an average rate of 6.5°C for every 1000 meters (1 kilometer) of elevation. This phenomenon is known as the normal lapse rate.

    What is the upper limit of the troposphere called?

    The upper boundary of the troposphere is known as the tropopause. At this boundary, the temperature stops decreasing, and it may reach temperatures as low as -58°C.

    What gases are found in the troposphere?

    The troposphere contains essential gases such as nitrogen and carbon dioxide. However, an excess accumulation of these gases can lead to environmental issues, including smog and air pollution.

    How thick is the troposphere?

    The troposphere varies in thickness globally; it can reach up to 18 km at the equator and as low as 8 km at the poles.

  • What is the Troposphere?

    What is the Troposphere?

    What is the Troposphere? It is the lowest portion of Earth’s atmosphere and is also where nearly all weather takes place. It contains approximately 75% of the atmosphere’s mass and 99% of the total mass of water vapor and aerosols. The average depth of the troposphere is 20 km (12 mi) in the tropics, 17 km (11 mi) in the mid-latitudes, and 7 km (4.3 mi) in the polar regions in winter. The lowest part of the troposphere, where friction with the Earth’s surface influences airflow, is the planetary boundary layer. This layer is typically a few hundred meters to 2 km (1.2 mi) deep, depending on the landform and time of day.

    Here, read and learn what the troposphere is: its meaning and definition.

    Atop the troposphere is the tropopause, which is the border between the troposphere and stratosphere. The tropopause is an inversion layer, where the air temperature ceases to decrease with height and remains constant through its thickness.

    The word troposphere derives from the Greek tropos, meaning “turn, turn toward,” and “-sphere” (as in, the Earth), reflecting the fact that rotational turbulent mixing plays an important role in the troposphere’s structure and behavior. Most of the phenomena associated with day-to-day weather occur in this layer.

    The Troposphere:

    It is the lowest major atmospheric layer, extending from the Earth’s surface up to the bottom of the stratosphere. It is where all of Earth’s weather occurs, and it contains approximately 80% of the total mass of the atmosphere.

    It is characterized by decreasing temperature with height (at an average rate of 3.5 degrees F per thousand feet, or 6.5 degrees C per kilometer). In contrast, the stratosphere has either constant or slowly increasing temperatures with height.
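
    The two lapse-rate figures quoted above (about 3.5°F per thousand feet and 6.5°C per kilometer) describe the same gradient in different units; the small Python check below, using standard unit-conversion factors, shows that they agree to within rounding.

    ```python
    # Quick check that the two quoted lapse rates describe the same gradient:
    # 6.5 degC per kilometre versus roughly 3.5 degF per 1000 feet.

    C_PER_KM = 6.5          # lapse rate in metric units (from the text)
    DEGF_PER_DEGC = 1.8     # a 1 degC change equals a 1.8 degF change
    FEET_PER_KM = 3280.84   # 1 km expressed in feet

    degf_per_1000ft = C_PER_KM * DEGF_PER_DEGC / (FEET_PER_KM / 1000.0)
    print(f"{degf_per_1000ft:.2f} degF per 1000 ft")  # ~3.57, i.e. roughly 3.5
    ```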

    The boundary between the troposphere and the stratosphere is called the “tropopause”, located at an altitude of around 5 miles in the winter, to around 8 miles high in the summer, and as high as 11 or 12 miles in the deep tropics.

    When you see the top of a thunderstorm flatten out into an anvil cloud, it is usually because the updrafts in the storm have reached the tropopause, where the environmental air is warmer than the cloudy air in the storm, and so the cloudy air stops rising.

    Definition of The Troposphere:

    The lowest densest part of the earth’s atmosphere in which most weather changes occur and temperature generally decreases rapidly with altitude and which extends from the earth’s surface to the bottom of the stratosphere at about 7 miles (11 kilometers) high.

    Overview of The Troposphere:

    It is the lowest layer of Earth’s atmosphere. Most of the mass (about 75-80%) of the atmosphere is in the troposphere. Most types of clouds are found there, and almost all weather occurs within this layer.

    The bottom of the troposphere is at Earth’s surface, and it extends upward to about 10 km (6.2 miles, or about 33,000 feet) above sea level. The height of the top of the troposphere varies with latitude (it is lowest over the poles and highest at the equator) and by season (it is lower in winter and higher in summer). It can be as high as 20 km (12 miles or 65,000 feet) near the equator, and as low as 7 km (4 miles or 23,000 feet) over the poles in winter.

    Air is warmest at the bottom of the troposphere, near ground level, and gets colder as one rises through the layer. That’s why the peaks of tall mountains can be snow-covered even in the summertime.

    Air pressure and the density of the air also decrease with altitude. That’s why the cabins of high-flying jet aircraft are pressurized.

    The layer immediately above the troposphere is called the stratosphere, and the boundary between the troposphere and the stratosphere is called the “tropopause”.

    Frequently Asked Questions

    What is the Troposphere?

    The troposphere is the lowest portion of Earth’s atmosphere, extending from the Earth’s surface up to the bottom of the stratosphere. It is where nearly all weather occurs and contains about 75-80% of the atmosphere’s mass.

    How deep is the Troposphere?

    The average depth of the troposphere varies by location: it is about 20 km (12 mi) in the tropics, 17 km (11 mi) in the mid-latitudes, and 7 km (4.3 mi) in the polar regions during winter.

    What is the Tropopause?

    The tropopause is the boundary layer between the troposphere and the stratosphere. It is characterized by a temperature inversion, meaning the air temperature remains constant or increases with altitude.

    Why does temperature decrease in the Troposphere?

    Temperature in the troposphere generally decreases with altitude at an average rate of 3.5 degrees F per thousand feet (6.5 degrees C per kilometer) due to the thermal structure and the influence of the Earth’s surface heating.

    What types of weather phenomena occur in the Troposphere?

    Most types of weather phenomena, including clouds, rain, thunderstorms, and wind patterns, occur within the troposphere.

    How does altitude affect air pressure in the Troposphere?

    Air pressure and density decrease with altitude in the troposphere. This change is significant, which is why high-flying jets must pressurize their cabins to ensure passenger comfort and safety.

    Where is the warmest air located in the Troposphere?

    The air is warmest at the bottom of the troposphere, near ground level. As altitude increases, the temperature decreases, leading to colder air at higher elevations.

    How does the height of the Troposphere change?

    The height of the troposphere varies depending on latitude and season. It is generally higher in tropical regions (up to 20 km) and lower over polar regions (as low as 7 km in winter).

    What happens to updrafts in a thunderstorm as they reach the Tropopause?

    When the updrafts in a thunderstorm reach the tropopause, they encounter warmer environmental air, which stops the cloudy air from rising further, often causing the characteristic anvil shape of thunderstorm clouds.

  • Validity

    Validity

    What is Validity?


    The most crucial issue in test construction is validity. Whereas reliability addresses issues of consistency, validity addresses what the test is supposed to measure and be accurate about. A test that is valid for clinical assessment should measure what it is intended to measure and should also produce information useful to clinicians. A psychological test cannot be said to be valid in any abstract or absolute sense; more practically, it must be valid in a particular context and for a specific group of people (Messick, 1995). Although a test can be reliable without being valid, the opposite is not true; a necessary prerequisite for validity is that the test must have achieved an adequate level of reliability. Thus, a valid test is one that accurately measures the variable it is intended to measure. For example, a test comprising questions about a person’s musical preference might erroneously state that it is a test of creativity. The test might be reliable in the sense that if it is given to the same person on different occasions, it produces similar results each time. However, it would not be valid, in that an investigation might indicate it does not correlate with other, more valid measurements of creativity.

    Establishing the validity of a test can be extremely difficult, primarily because psychological variables are usually abstract concepts such as intelligence, anxiety, and personality. These concepts have no tangible reality, so their existence must be inferred through indirect means. In addition, conceptualization and research on constructs undergo change over time requiring that test validation go through continual refinement (G. Smith & McCarthy, 1995). In constructing a test, a test designer must follow two necessary, initial steps. First, the construct must be theoretically evaluated and described; second, specific operations (test questions) must be developed to measure it (S. Haynes et al., 1995). Even when the designer has followed these steps closely and conscientiously, it is sometimes difficult to determine what the test really measures. For example, IQ tests are good predictors of academic success, but many researchers question whether they adequately measure the concept of intelligence as it is theoretically described. Another hypothetical test that, based on its item content, might seem to measure what is described as musical aptitude may in reality be highly correlated with verbal abilities. Thus, it may be more a measure of verbal abilities than of musical aptitude.

    Any estimate of validity is concerned with relationships between the test and some external, independently observed event. The Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999; G. Morgan, Gliner, & Harmon, 2001) list the three main methods of establishing validity as content-related, criterion-related, and construct-related.

    Content Validity


    During the initial construction phase of any test, the developers must first be concerned with its content validity. This refers to the representativeness and relevance of the assessment instrument to the construct being measured. During the initial item selection, the constructors must carefully consider the skills or knowledge area of the variable they would like to measure. The items are then generated based on this conceptualization of the variable. At some point, it might be decided that the item content over-represents, under-represents, or excludes specific areas, and alterations in the items might be made accordingly. If experts on subject matter are used to determine the items, the number of these experts and their qualifications should be included in the test manual. The instructions they received and the extent of agreement between judges should also be provided. A good test covers not only the subject matter being measured, but also additional variables. For example, factual knowledge may be one criterion, but the application of that knowledge and the ability to analyze data are also important. Thus, a test with high content validity must cover all major aspects of the content area and must do so in the correct proportion.

    A concept somewhat related to content validity is face validity. These terms are not synonymous, however, because content validity pertains to judgments made by experts, whereas face validity concerns judgments made by the test users. The central issue in face validity is test rapport. Thus, a group of potential mechanics who are being tested for basic skills in arithmetic should have word problems that relate to machines rather than to business transactions. Face validity, then, is present if the test looks good to the persons taking it, to policymakers who decide to include it in their programs, and to other untrained personnel. Despite the potential importance of face validity in regard to test-taking attitudes, disappointingly few formal studies on face validity are performed and/or reported in test manuals.

    In the past, content validity has been conceptualized and operationalized as being based on the subjective judgment of the test developers. As a result, it has been regarded as the least preferred form of test validation, albeit necessary in the initial stages of test development. In addition, its usefulness has been focused primarily on achievement tests (how well has this student learned the content of the course?) and personnel selection (does this applicant know the information relevant to the potential job?). More recently, it has been used more extensively in personality and clinical assessment (Butcher, Graham, Williams, & Ben-Porath, 1990; Millon, 1994). This has paralleled more rigorous and empirically based approaches to content validity, along with a closer integration with criterion and construct validation.

    Criterion Validity


    A second major approach to determining validity is criterion validity, which has also been called empirical or predictive validity. Criterion validity is determined by comparing test scores with some sort of performance on an outside measure. The outside measure should have a theoretical relation to the variable that the test is supposed to measure. For example, an intelligence test might be correlated with grade point average; an aptitude test, with independent job ratings; or general maladjustment scores, with other tests measuring similar dimensions. The relation between the two measurements is usually expressed as a correlation coefficient.

    Criterion-related validity is most frequently divided into either concurrent or predictive validity. Concurrent validity refers to measurements taken at the same, or approximately the same, time as the test. For example, an intelligence test might be administered at the same time as assessments of a group’s level of academic achievement. Predictive validity refers to outside measurements that were taken some time after the test scores were derived. Thus, predictive validity might be evaluated by correlating the intelligence test scores with measures of academic achievement a year after the initial testing. Concurrent validation is often used as a substitute for predictive validation because it is simpler, less expensive, and not as time consuming. However, the main consideration in deciding whether concurrent or predictive validation is preferable depends on the test’s purpose. Predictive validity is most appropriate for tests used for selection and classification of personnel. This may include hiring job applicants, placing military personnel in specific occupational training programs, screening out individuals who are likely to develop emotional disorders, or identifying which category of psychiatric populations would be most likely to benefit from specific treatment approaches. These situations all require that the measurement device provide a prediction of some future outcome. In contrast, concurrent validation is preferable if an assessment of the client’s current status is required, rather than a prediction of what might occur to the client at some future time. The distinction can be summarized by asking “Is Mr. Jones maladjusted?” (concurrent validity) rather than “Is Mr. Jones likely to become maladjusted at some future time?” (predictive validity).

    An important consideration is the degree to which a specific test can be applied to a unique work-related environment (see Hogan, Hogan, & Roberts, 1996). This relates more to the social value and consequences of the assessment than to the formal validity as reported in the test manual (Messick, 1995). In other words, can the test under consideration provide accurate assessments and predictions for the environment in which the examinee is working? To answer this question adequately, the examiner must refer to the manual and assess the similarity between the criteria used to establish the test’s validity and the situation to which he or she would like to apply the test. For example, can an aptitude test that has adequate criterion validity in the prediction of high school grade point average also be used to predict academic achievement for a population of college students? If the examiner has questions regarding the relative applicability of the test, he or she may need to undertake a series of specific tasks. The first is to identify the required skills for adequate performance in the situation involved. For example, the criteria for a successful teacher may include such attributes as verbal fluency, flexibility, and good public speaking skills. The examiner then must determine the degree to which each skill contributes to the quality of a teacher’s performance. Next, the examiner has to assess the extent to which the test under consideration measures each of these skills. The final step is to evaluate the extent to which the attributes that the test measures are relevant to the skills the examiner needs to predict. Based on these evaluations, the examiner can estimate the confidence that he or she places in the predictions developed from the test. This approach is sometimes referred to as synthetic validity because examiners must integrate or synthesize the criteria reported in the test manual with the variables they encounter in their clinical or organizational settings.

    The strength of criterion validity depends in part on the type of variable being measured. Usually, intellectual or aptitude tests give relatively higher validity coefficients than personality tests because there are generally a greater number of variables influencing personality than intelligence. As the number of variables that influence the trait being measured increases, it becomes progressively more difficult to account for them. When a large number of variables are not accounted for, the trait can be affected in unpredictable ways. This can create a much wider degree of fluctuation in the test scores, thereby lowering the validity coefficient. Thus, when evaluating a personality test, the examiner should not expect as high a validity coefficient as for intellectual or aptitude tests. A helpful guide is to look at the validities found in similar tests and compare them with the test being considered. For example, if an examiner wants to estimate the range of validity to be expected for the extraversion scale on the Myers-Briggs Type Indicator, he or she might compare it with the validities for similar scales found in the California Psychological Inventory and the Eysenck Personality Questionnaire. The relative level of validity, then, depends both on the quality of the construction of the test and on the variable being studied.

    An important consideration is the extent to which the test accounts for the trait being measured or the behavior being predicted. For example, the typical correlation between intelligence tests and academic performance is about .50 (Neisser et al., 1996). Because no one would say that grade point average is entirely the result of intelligence, the relative extent to which intelligence determines grade point average has to be estimated. This can be calculated by squaring the correlation coefficient and changing it into a percentage. Thus, if the correlation of .50 is squared, it comes out to 25%, indicating that 25% of academic achievement can be accounted for by IQ as measured by the intelligence test. The remaining 75% may include factors such as motivation, quality of instruction, and past educational experience. The problem facing the examiner is to determine whether 25% of the variance is sufficiently useful for the intended purposes of the test. This ultimately depends on the personal judgment of the examiner.
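
    A minimal Python sketch of the calculation just described, using the .50 coefficient from the text (the other coefficients shown are included only for comparison):

    ```python
    # Squaring a validity (correlation) coefficient gives the proportion of the
    # criterion's variance accounted for by the test, expressed here as a percentage.

    def variance_accounted_for(correlation: float) -> float:
        """Return the percentage of criterion variance explained by the test."""
        return (correlation ** 2) * 100.0

    for r in (0.30, 0.50, 0.70):
        print(f"r = {r:.2f} -> {variance_accounted_for(r):4.0f}% of the variance")
    # r = 0.50 gives 25%, matching the intelligence / grade-point-average example above.
    ```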

    The main problem confronting criterion validity is finding an agreed-on, definable, acceptable, and feasible outside criterion. Whereas for an intelligence test the grade point average might be an acceptable criterion, it is far more difficult to identify adequate criteria for most personality tests. Even with so-called intelligence tests, many researchers argue that it is more appropriate to consider them tests of scholastic aptitude rather than of intelligence. Yet another difficulty with criterion validity is the possibility that the criterion measure will be inadvertently biased. This is referred to as criterion contamination and occurs when knowledge of the test results influences an individual’s later performance. For example, a supervisor in an organization who receives such information about subordinates may act differently toward a worker placed in a certain category after being tested. This situation may set up negative or positive expectations for the worker, which could influence his or her level of performance. The result is likely to artificially alter the level of the validity coefficients. To work around these difficulties, especially in regard to personality tests, a third major method must be used to determine validity. 

    Construct Validity


    The method of construct validity was developed in part to correct the inadequacies and difficulties encountered with content and criterion approaches. Early forms of content validity relied too much on subjective judgment, while criterion validity was too restrictive in working with the domains or structure of the constructs being measured. Criterion validity had the further difficulty in that there was often a lack of agreement in deciding on adequate outside criteria. The basic approach of construct validity is to assess the extent to which the test measures a theoretical construct or trait. This assessment involves three general steps. Initially, the test constructor must make a careful analysis of the trait. This is followed by a consideration of the ways in which the trait should relate to other variables. Finally, the test designer needs to test whether these hypothesized relationships actually exist (Foster & Cone, 1995). For example, a test measuring dominance should have a high correlation with the individual accepting leadership roles and a low or negative correlation with measures of submissiveness. Likewise, a test measuring anxiety should have a high positive correlation with individuals who are measured during an anxiety-provoking situation, such as an experiment involving some sort of physical pain. As these hypothesized relationships are verified by research studies, the degree of confidence that can be placed in a test increases.

    There is no single, best approach for determining construct validity; rather, a variety of different possibilities exist. For example, if some abilities are expected to increase with age, correlations can be made between a population’s test scores and age. This may be appropriate for variables such as intelligence or motor coordination, but it would not be applicable for most personality measurements. Even in the measurement of intelligence or motor coordination, this approach may not be appropriate beyond the age of maturity. Another method for determining construct validity is to measure the effects of experimental or treatment interventions. Thus, a posttest measurement may be taken following a period of instruction to see if the intervention affected the test scores in relation to a previous pretest measure. For example, after an examinee completes a course in arithmetic, it would be predicted that scores on a test of arithmetical ability would increase. Often, correlations can be made with other tests that supposedly measure a similar variable. However, a new test that correlates too highly with existing tests may represent needless duplication unless it incorporates some additional advantage such as a shortened format, ease of administration, or superior predictive validity. Factor analysis is of particular relevance to construct validation because it can be used to identify and assess the relative strength of different psychological traits. Factor analysis can also be used in the design of a test to identify the primary factor or factors measured by a series of different tests. Thus, it can be used to simplify one or more tests by reducing the number of categories to a few common factors or traits. The factorial validity of a test is the relative weight or loading that a factor has on the test. For example, if a factor analysis of a measure of psychopathology determined that the test was composed of two clear factors that seemed to be measuring anxiety and depression, the test could be considered to have factorial validity. This would be especially true if the two factors seemed to be accounting for a clear and large portion of what the test was measuring.

    Another method used in construct validity is to estimate the degree of internal consistency by correlating specific subtests with the test’s total score. For example, if a subtest on an intelligence test does not correlate adequately with the overall or Full Scale IQ, it should be either eliminated or altered in a way that increases the correlation. A final method for obtaining construct validity is for a test to converge or correlate highly with variables that are theoretically similar to it. The test should not only show this convergent validity but also have discriminant validity, in which it would demonstrate low or negative correlations with variables that are dissimilar to it. Thus, scores on reading comprehension should show high positive correlations with performance in a literature class and low correlations with performance in a class involving mathematical computation.
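
    To make the convergent/discriminant idea concrete, the Python sketch below simulates three sets of scores (all values are synthetic, not taken from any real test) and checks that the reading-comprehension scores correlate strongly with literature performance and only weakly with mathematical computation:

    ```python
    # Convergent vs. discriminant validity with synthetic scores: reading comprehension
    # should correlate highly with literature grades (convergent) and only weakly with
    # mathematical computation grades (discriminant). All numbers are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    reading = rng.normal(size=n)
    literature = 0.8 * reading + 0.2 * rng.normal(size=n)  # theoretically similar measure
    computation = rng.normal(size=n)                       # theoretically dissimilar measure

    r_convergent = np.corrcoef(reading, literature)[0, 1]
    r_discriminant = np.corrcoef(reading, computation)[0, 1]
    print(f"reading vs. literature:  r = {r_convergent:.2f}  (high -> convergent)")
    print(f"reading vs. computation: r = {r_discriminant:.2f}  (near zero -> discriminant)")
    ```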

    Related to discriminant and convergent validity is the degree of sensitivity and specificity an assessment device demonstrates in identifying different categories. Sensitivity refers to the percentage of true positives that the instrument has identified, whereas specificity is the relative percentage of true negatives. A structured clinical interview might be quite sensitive in that it would accurately identify 90% of schizophrenics in an admitting ward of a hospital. However, it may not be sufficiently specific in that 30% of patients who are not schizophrenic would be incorrectly classified as schizophrenic. The difficulty in determining sensitivity and specificity lies in developing agreed-on, objectively accurate outside criteria for categories such as psychiatric diagnosis, intelligence, or personality traits.
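
    The sensitivity and specificity figures in the example above can be reproduced from a simple confusion table; the counts below are hypothetical and chosen only so the arithmetic matches those percentages.

    ```python
    # Hypothetical screening results for 100 admitted patients
    true_pos  = 45   # schizophrenic, correctly identified
    false_neg = 5    # schizophrenic, missed by the interview
    true_neg  = 35   # not schizophrenic, correctly ruled out
    false_pos = 15   # not schizophrenic, incorrectly classified as schizophrenic

    sensitivity = true_pos / (true_pos + false_neg)   # proportion of true positives detected
    specificity = true_neg / (true_neg + false_pos)   # proportion of true negatives detected

    print(f"sensitivity = {sensitivity:.0%}")  # 90%
    print(f"specificity = {specificity:.0%}")  # 70%
    ```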

    As indicated by the variety of approaches discussed, no single, quick, efficient method exists for determining construct validity. It is similar to testing a series of hypotheses in which the results of the studies determine the meanings that can be attached to later test scores (Foster & Cone, 1995; Messick, 1995). Almost any data can be used, including material from the content and criterion approaches. The greater the amount of supporting data, the greater is the level of confidence with which the test can be used. In many ways, construct validity represents the strongest and most sophisticated approach to test construction. Indeed, all types of validity can be considered subcategories of construct validity. It involves theoretical knowledge of the trait or ability being measured, knowledge of other related variables, hypothesis testing, and statements regarding the relationship of the test variable to a network of other variables that have been investigated. Thus, construct validation is a never-ending process in which new relationships always can be verified and investigated.


  • Reliability: Definition, Methods, and Example

    Reliability: Definition, Methods, and Example

    Uncover the true definition of reliability. Understand why reliability is crucial for machines, systems, and test results to perform consistently and accurately. What is Reliability? The quality of being trustworthy or performing consistently well. The degree to which the result of a measurement, calculation, or specification can be depended on to be accurate.

    Here is an explanation of reliability, covering its definition, methods, and examples.

    Definition of Reliability: The ability of an apparatus, machine, or system to consistently perform its intended or required function or mission, on demand, and without degradation or failure.

    Manufacturing: The probability of failure-free performance over an item’s useful life, or a specified time-frame, under specified environmental and duty-cycle conditions. Often expressed as mean time between failures (MTBF) or reliability coefficient. Also called quality over time.

    Consistency and validity of test results determined through statistical methods after repeated trials.

    The reliability of a test refers to its degree of stability, consistency, predictability, and accuracy. It addresses the extent to which scores obtained by a person are the same if the person is reexamined by the same test on different occasions. Underlying the concept of reliability is the possible range of error, or error of measurement, of a single score.

    This is an estimate of the range of possible random fluctuation that can be expected in an individual’s score. It should be stressed, however, that a certain degree of error or noise is always present in the system, arising from factors such as misreading of the items, poor administration procedures, or the changing mood of the client. If there is a large degree of random fluctuation, the examiner cannot place a great deal of confidence in an individual’s scores.

    Testing in Trials:

    The goal of a test constructor is to reduce, as much as possible, the degree of measurement error, or random fluctuation. If this is achieved, the difference between one score and another for a measured characteristic is more likely to result from some true difference than from some chance fluctuation. Two main issues relate to the degree of error in a test. The first is the inevitable, natural variation in human performance.

    Usually, the variability is less for measurements of ability than for those of personality. Whereas ability variables (intelligence, mechanical aptitude, etc.) show gradual changes resulting from growth and development, many personality traits are much more highly dependent on factors such as mood. This is particularly true in the case of a characteristic such as anxiety.

    The practical significance of this in evaluating a test is that certain factors outside the test itself can reduce the reliability that the test can realistically be expected to achieve. Thus, an examiner should generally expect higher reliabilities for an intelligence test than for a test measuring a personality variable such as anxiety. It is the examiner’s responsibility to know what is being measured, especially the degree of variability to expect in the measured trait.

    The second important issue relating to reliability is that psychological testing methods are necessarily imprecise. For the hard sciences, researchers can make direct measurements, such as the concentration of a chemical solution, the relative weight of one organism compared with another, or the strength of radiation. In contrast, many constructs in psychology are often measured indirectly.

    For example:

    Intelligence cannot be perceived directly; it must be inferred by measuring behavior that has been defined as intelligent. Variability relating to these inferences is likely to produce a certain degree of error resulting from the lack of precision in defining and observing inner psychological constructs. Variability in measurement also occurs simply because people have true fluctuations in performance (not because of test error) between one testing session and the next.

    Whereas it is impossible to control for the natural variability in human performance, adequate test construction can attempt to reduce the imprecision that is a function of the test itself. Natural human variability and test imprecision make the task of measurement extremely difficult. Although some error in testing is inevitable, the goal of test construction is to keep testing errors within reasonably accepted limits.

    A high correlation is generally .80 or more, but the variable being measured also changes the expected strength of the correlation. Likewise, the method of determining reliability alters the relative strength of the correlation. Ideally, clinicians should hope for correlations of .90 or higher in tests that are used to make decisions about individuals, whereas a correlation of .70 or more is generally adequate for research purposes.

    Methods of Reliability:

    The purpose of reliability is to estimate the degree of test variance caused by error. The four primary methods of obtaining reliability involve determining:

    • The extent to which the test produces consistent results on retesting (test-retest).
    • The relative accuracy of a test at a given time (alternate forms).
    • The internal consistency of the items (split half).
    • The degree of agreement between two examiners (inter-scorer).

    Another way to summarize this is that reliability can be measured from time to time (test-retest), from form to form (alternate forms), from item to item (split half), or from scorer to scorer (inter-scorer). Although these are the main types of reliability, there is a fifth type, the Kuder-Richardson; like the split-half, it is a measurement of the internal consistency of the test items. However, because this method is considered appropriate only for tests that are relatively pure measures of a single variable, it is not covered here.

    Test-Retest Reliability:

    Test-retest reliability is determined by administering the test and then repeating it on a second occasion. The reliability coefficient is calculated by correlating the scores obtained by the same person on the two different administrations. The degree of correlation between the two scores indicates the extent to which the test scores can generalize from one situation to the next.

    If the correlations are high, the results are less likely to be caused by random fluctuations in the condition of the examinee or the testing environment. Thus, when the test is being used in actual practice, the examiner can be relatively confident that differences in scores are the result of an actual change in the trait being measured rather than random fluctuation.
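
    Computationally, the test-retest coefficient is simply the correlation between the two administrations, as in the sketch below; the scores and retest interval are hypothetical.

    ```python
    import numpy as np

    # Hypothetical scores for the same 8 examinees tested two weeks apart
    first_administration  = np.array([98, 110, 105, 122, 87, 131, 101, 115])
    second_administration = np.array([101, 108, 109, 119, 90, 128, 99, 117])

    # The test-retest reliability coefficient is the Pearson correlation
    # between the two sets of scores.
    r_test_retest = np.corrcoef(first_administration, second_administration)[0, 1]
    print(f"test-retest reliability = {r_test_retest:.2f}")
    ```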

    Several factors must be considered in assessing the appropriateness of test-retest reliability. One is that the interval between administrations can affect reliability. Thus, a test manual should specify the interval as well as any significant life changes that the examinees may have experienced, such as counseling, career changes, or psychotherapy.

    For example:

    Tests of preschool intelligence often give reasonably high correlations if the second administration is within several months of the first one. However, correlations with later childhood or adult IQ are generally low because of innumerable intervening life changes. One of the major difficulties with test-retest reliability is the effect that practice and memory may have on performance, which can produce improvement between one administration and the next.

    This is a particular problem for speeded and memory tests such as those found on the Digit Symbol and Arithmetic sub-tests of the WAIS-III. Additional sources of variation may be the result of random, short-term fluctuations in the examinee, or variations in the testing conditions. In general, test-retest reliability is the preferred method only if the variable being measured is relatively stable. If the variable is highly changeable (e.g., anxiety), this method is usually not adequate. 

    Alternate Forms:

    The alternate forms method avoids many of the problems encountered with test-retest reliability. The logic behind alternate forms is that, if the trait is measured several times on the same individual using parallel forms of the test, the different measurements should produce similar results. The degree of similarity between the scores represents the reliability coefficient of the test.

    As in the test-retest method, the interval between administrations should always be included in the manual, as well as a description of any significant intervening life experiences. If the second administration is given immediately after the first, the resulting reliability is more a measure of the correlation between forms than of stability across occasions.

    More things:

    Correlations determined from tests given with a wide interval, such as two months or more, provide a measure of both the relation between forms and the degree of temporal stability. The alternate forms method eliminates many carryover effects, such as the recall of previous responses the examinee has made to specific items.

    However, there is still likely to be some carryover effect in that the examinee can learn to adapt to the overall style of the test even when the specific item content between one test and another is unfamiliar. This is most likely when the test involves some sort of problem-solving strategy in which the same principle used to solve one problem can be applied to solve the next one.

    An examinee, for example, may learn to use mnemonic aids to increase his or her performance on an alternate form of the WAIS-III Digit Symbol subtest. Perhaps the primary difficulty with alternate forms lies in determining whether the two forms are equivalent.

    For example:

    If one test is more difficult than its alternate form, the difference in scores may represent actual differences in the two tests rather than differences resulting from the unreliability of the measure. Because the test constructor is attempting to measure the reliability of the test itself and not the differences between the tests, this could confound and lower the reliability coefficient.

    Alternate forms should be independently constructed tests that use the same specifications, including the same number of items, type of content, format, and manner of administration. A final difficulty is encountered primarily when there is a delay between one administration and the next. With such a delay, the examinee may perform differently because of short-term fluctuations such as mood, stress level, or the relative quality of the previous night’s sleep.

    Thus, an examinee’s abilities may vary somewhat from one examination to another, thereby affecting test results. Despite these problems, alternate forms reliability has the advantage of at least reducing, if not eliminating, any carryover effects of the test-retest method. A further advantage is that the alternate test forms can be useful for other purposes, such as assessing the effects of a treatment program or monitoring a patient’s changes over time by administering the different forms on separate occasions. 

    Split Half Reliability:

    The split-half method is the best technique for determining reliability for a trait with a high degree of fluctuation. Because the test is given only once, the items are split in half and the two halves are correlated. As there is only one administration, the effects of time cannot intervene as they might with the test-retest method.

    Thus, the split-half method gives a measure of the internal consistency of the test items rather than the temporal stability of different administrations of the same test. To determine split-half reliability, the test is often split based on odd and even items. This method is usually adequate for most tests. Dividing the test into a first half and a second half can be effective in some cases but is often inappropriate because of the cumulative effects of warming up, fatigue, and boredom, all of which can result in different levels of performance on the first half of the test compared with the second.

    As is true with the other methods of obtaining reliability, the split-half method has limitations. When a test is split in half, there are fewer items on each half, which results in wider variability because the individual responses cannot stabilize as easily around a mean. As a general principle, the longer a test is, the more reliable it is, because with a larger number of items it is easier for the majority of items to compensate for minor alterations in responding to a few of the other items. As with the alternate forms method, differences in content may exist between one half and another.
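
    A sketch of the odd-even split described above, using hypothetical item responses. The Spearman-Brown correction applied at the end is a standard adjustment for the halved test length; it is not discussed in the text and is included here only for completeness.

    ```python
    import numpy as np

    # Hypothetical item scores: rows = 6 examinees, columns = 10 test items (1 = correct)
    items = np.array([
        [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1, 1, 1, 0, 1, 1],
        [0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
        [1, 0, 1, 1, 1, 0, 1, 1, 1, 1],
        [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],
    ])

    # Split into odd- and even-numbered items and total each half per examinee.
    odd_half  = items[:, 0::2].sum(axis=1)
    even_half = items[:, 1::2].sum(axis=1)

    # Correlate the halves, then adjust for the halved test length with the
    # Spearman-Brown formula: r_full = 2r / (1 + r).
    r_half = np.corrcoef(odd_half, even_half)[0, 1]
    r_full = (2 * r_half) / (1 + r_half)
    print(f"half-test r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
    ```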

    Inter-scorer Reliability:

    In some tests, scoring is based partially on the judgment of the examiner. Because judgment may vary between one scorer and the next, it may be important to assess the extent to which reliability might be affected. This is especially true for projective techniques and even for some ability tests where hard scorers may produce results somewhat different from easy scorers.

    This variance in interscorer reliability may apply to global judgments based on test scores, such as brain injury versus normal, or to small details of scoring, such as whether a person has given a shading versus a texture response on the Rorschach. The basic strategy for determining interscorer reliability is to obtain a series of responses from a single client and to have these responses scored by two different individuals.

    A variation is to have two different examiners test the same client using the same test and then to determine how close their scores or ratings of the person are. The two sets of scores can then be correlated to determine a reliability coefficient. Any test that requires even partial subjectivity in scoring should provide information on interscorer reliability.

    The best form of reliability depends on both the nature of the variable being measured and the purposes for which the test is used. If the trait or ability being measured is highly stable, the test-retest method is preferable, whereas split half is more appropriate for characteristics that are highly subject to fluctuations. When using a test to make predictions, the test-retest method is preferable because it gives an estimate of the dependability of the test from one administration to the next.

    More things:

    This is particularly true if, when determining reliability, an increased time interval existed between the two administrations. If, on the other hand, the examiner is concerned with the internal consistency and accuracy of a test for a single, one-time measure, either the split-half or the alternate forms method would be best.

    Another consideration in evaluating the acceptable range of reliability is the format of the test. Longer tests usually have higher reliabilities than shorter ones. Also, the format of the responses affects reliability. For example, a true-false format is likely to have lower reliability than multiple choice because each true-false item has a 50% possibility of the answer being correct by chance.

    In contrast, each question in a multiple-choice format having five possible choices has only a 20% possibility of being correct by chance. A final consideration is that tests with various subtests or subscales should report the reliability for the overall test as well as for each of the subtests. In general, the overall test score has significantly higher reliability than its subtests. In estimating the confidence with which test scores can be interpreted, the examiner should take into account the lower reliabilities of the subtests.

    1] For example:

    A Full-Scale IQ on the WAIS-III can be interpreted with more confidence than the specific subscale scores. Most test manuals include a statistical index of the amount of error that can be expected in test scores, which is referred to as the standard error of measurement (SEM). The logic behind the SEM is that test scores consist of both truth and error.

    Thus, there is always noise or error in the system, and the SEM provides a range to indicate how extensive that error is likely to be. The range depends on the test’s reliability so that the higher the reliability, the narrower the range of error. The SEM is a standard deviation score so that, for example, an SEM of 3 on an intelligence test would indicate that an individual’s score has a 68% chance of being ± 3 IQ points from the estimated true score.

    Result of Score:

    This is because the SEM of 3 represents a band extending one standard error above and one standard error below the estimated true score. Likewise, there would be a 95% chance that the individual’s score would fall within a range of approximately ±6 points (about two standard errors) from the estimated true score. From a theoretical perspective, the SEM is a statistical index of how a person’s repeated scores on a specific test would fall around a normal distribution.

    Thus, it is a statement of the relationship among a person’s obtained score, his or her theoretically true score, and the test reliability. Because it is an empirical statement of the probable range of scores, the SEM has more practical usefulness than a knowledge of the test reliability alone. This band of error is also referred to as a confidence interval.
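
    Numerically, the SEM is usually computed as the test’s standard deviation multiplied by the square root of one minus its reliability. The sketch below uses that standard formula with hypothetical values (the familiar IQ standard deviation of 15 and an assumed reliability of .96) that reproduce the SEM of 3 discussed above.

    ```python
    import math

    # Standard formula: SEM = SD * sqrt(1 - reliability)
    sd_test = 15          # IQ-scale standard deviation
    reliability = 0.96    # hypothetical reliability coefficient
    sem = sd_test * math.sqrt(1 - reliability)

    obtained_score = 100
    band_68 = (obtained_score - sem, obtained_score + sem)                  # about 68% confidence
    band_95 = (obtained_score - 1.96 * sem, obtained_score + 1.96 * sem)    # about 95% confidence

    print(f"SEM = {sem:.1f}")
    print(f"68% band: {band_68[0]:.1f} to {band_68[1]:.1f}")
    print(f"95% band: {band_95[0]:.1f} to {band_95[1]:.1f}")
    ```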

    The acceptable range of reliability is difficult to identify and depends partially on the variable being measured. In general, unstable aspects (states) of the person produce lower reliabilities than stable ones (traits). Thus, in evaluating a test, the examiner should expect higher reliabilities on stable traits or abilities than on changeable states.

    2] For example:

    A person’s general fund of vocabulary words is highly stable and therefore produces high reliabilities. In contrast, a person’s level of anxiety is often highly changeable. This means examiners should not expect nearly as high reliabilities for anxiety as for an ability measure such as vocabulary. A further consideration, also related to the stability of the trait or ability, is the method of reliability that is used.

    Alternate forms are considered to give the lowest estimate of the actual reliability of a test, while split-half provides the highest estimate. Another important way to estimate the adequacy of reliability is by comparing it with the reliabilities of other, similar tests. The examiner can then develop a sense of the expected levels of reliability, which provides a baseline for comparisons.

    Result of example:

    In the example of anxiety, a clinician may not know what an acceptable level of reliability is. A general estimate can be made by comparing the reliability of the test under consideration with that of other tests measuring the same or a similar variable. The most important thing to keep in mind is that lower levels of reliability usually suggest that less confidence can be placed in the interpretations and predictions based on the test data.

    However, clinical practitioners are less likely to be concerned with low statistical reliability if they have some basis for believing the test is a valid measure of the client’s state at the time of testing. The main consideration is that the sign or test score does not mean one thing at one time and something different at another.

  • What are Effects of Goal Orientation on Student Achievement?

    What are Effects of Goal Orientation on Student Achievement?


    The extent to which students have a learning or performance goal orientation is associated with a variety of student behaviors and beliefs. These have been divided into cognitive strategies and engagement and motivational beliefs and actions.

    Cognitive Strategies and Engagement

    Learning goals foster cognitive engagement and effort (Meece, Blumenfeld, & Hoyle, 1988). Fifth- and sixth-grade science students who placed greater emphasis on learning goals also reported more active cognitive engagement. Students with performance goals (pleasing the teacher or seeking social recognition) had a lower level of cognitive engagement. Wolters, Yu, and Pintrich (1996) found that task value and interest were related to learning goals. The use of cognitive strategies and information processing is related to goal orientations of students at different levels of schooling. Learning that is potentially more meaningful or complex, requiring deep-level processing, appears to be the most vulnerable to the negative effects of performance goals (Graham & Golan, 1991). When the emphasis was on ability, as in the performance goal situation, there was interference with memory for tasks that required a great deal of cognitive effort. Performance goals also undermined the problem-solving strategies of children (Elliott & Dweck, 1988). In contrast, learning goals were the strongest predictor of seventh- and eighth-grade students’ cognitive strategy use (Wolters et al., 1996). These goals were also predictive of deep processing, persistence, effort, and exam performance of college students (Elliot, McGregor, & Gable, 1999).

    Motivational Beliefs and Actions

    The particular goal orientation affects motivational beliefs such as the role of effort in learning, self-efficacy beliefs, the tendency to use self-handicapping strategies, help seeking, and helpless patterns.

    Self-Efficacy: A learning goal orientation was generally found to be associated with higher self-efficacy. Wolters et al. (1996) reported that seventh- and eighth-grade students who reported greater endorsement of a learning goal also tended to report higher levels of self-efficacy. Learning goals were also positively related to self-efficacy in the subjects of writing and science (Pajares, Britner, & Valiante, 2000). In contrast, performance goals were related to low self-efficacy (Pintrich, Zusho, Schiefele, & Pekrun, 2001).

    Self-Handicapping: Self-handicapping strategies, such as low effort, are associated with performance goals (Midgley & Urdan, 2001). Elliott and Dweck (1988) found that children with performance goals were more likely to avoid challenge and exhibit low persistence. These strategies undermine student achievement. Another type of self-handicapping strategy associated with performance goals is cheating (Anderman, Griesinger, & Westerfield, 1998). The authors explained that, by cheating, not only do students protect themselves against perceptions of low ability, they improve their grades.

    Help Seeking: The particular goal orientation was also found to influence help-seeking behaviors (Butler & Neuman, 1995). Second- and sixth-grade students were more likely to seek help when the task was presented to them as an opportunity to develop competence. When tasks were presented to students as a measure of their ability, they were less likely to seek help. Students were more likely to seek help in classrooms with a learning goal focus and to avoid help seeking in a performance goal structure (Butler & Neuman, 1995; Ryan, Gheen, & Midgley, 1998).

    Helpless Patterns: Finally, one of the most debilitating effects of performance goals is the vulnerability to helpless patterns (Dweck, 1986). Goals that focus students on using performances to judge their ability can make them vulnerable to a helpless pattern in the face of failure (Dweck & Sorich, 1999; Heyman & Dweck, 1992; Midgley et al., 2001).

    In conclusion, performance goal beliefs are generally seen as the most maladaptive pattern as students are more extrinsically motivated, focused on outcome and not on learning (C. Ames, 1992), and focused on being superior to others (Nicholls, 1990). At the same time, there is continued agreement that the learning goal pattern is the more adaptive one, fostering long-term achievement that reflects intrinsic motivation (C. Ames, 1992; Heyman & Dweck, 1992; Kaplan & Middleton, 2002; Meece, 1991; Midgley et al., 2001). As Kaplan and Middleton asked, “Should childhood be a journey or a race?”


  • DNA Structure

    What is DNA Structure?


    What do a human, a rose, and a bacterium have in common? Each of these things, along with every other organism on Earth, contains the molecular instructions for life, called deoxyribonucleic acid, or DNA. Encoded within this DNA are the directions for traits as diverse as the color of a person’s eyes, the scent of a rose, and the way in which bacteria infect a lung cell.

    DNA is found in nearly all living cells. However, its exact location within a cell depends on whether that cell possesses a special membrane-bound organelle called a nucleus. Organisms composed of cells that contain nuclei are classified as eukaryotes, whereas organisms composed of cells that lack nuclei are classified as prokaryotes. In eukaryotes, DNA is housed within the nucleus, but in prokaryotes, DNA is located directly within the cellular cytoplasm, as there is no nucleus available.

    But what, exactly, is DNA? In short, DNA is a complex molecule that consists of many components, a portion of which are passed from parent organisms to their offspring during the process of reproduction. Although each organism’s DNA is unique, all DNA is composed of the same nitrogen-based molecules. So how does DNA differ from organism to organism? It is simply the order in which these smaller molecules are arranged that differs among individuals. In turn, this pattern of arrangement ultimately determines each organism’s unique characteristics, thanks to another set of molecules that “read” the pattern and stimulate the chemical and physical processes it calls for.

    DNA is made up of molecules called nucleotides. Each nucleotide contains a phosphate group, a sugar group, and a nitrogen base. The four types of nitrogen bases are adenine (A), thymine (T), guanine (G), and cytosine (C). The order of these bases is what determines DNA’s instructions, or genetic code. Similar to the way the order of letters in the alphabet can be used to form a word, the order of nitrogen bases in a DNA sequence forms genes, which, in the language of the cell, tell cells how to make proteins. Another type of nucleic acid, ribonucleic acid, or RNA, translates genetic information from DNA into proteins.

    The entire human genome contains about 3 billion bases and about 20,000 genes. Nucleotides are attached together to form two long strands that spiral to create a structure called a double helix. If you think of the double helix structure as a ladder, the phosphate and sugar molecules would be the sides, while the bases would be the rungs. The bases on one strand pair with the bases on another strand: adenine pairs with thymine, and guanine pairs with cytosine.
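
    The pairing rule just described can be expressed in a few lines of code; the example sequence below is arbitrary, and the function name is used only for illustration.

    ```python
    # Watson-Crick pairing: adenine with thymine, guanine with cytosine.
    PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complementary_strand(sequence: str) -> str:
        """Return the bases that pair with the given strand, in the same order."""
        return "".join(PAIRING[base] for base in sequence.upper())

    print(complementary_strand("ATGCTTAC"))  # prints TACGAATG
    ```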

    DNA molecules are long; so long, in fact, that they can’t fit into cells without the right packaging. To fit inside cells, DNA is coiled tightly to form structures we call chromosomes. Each chromosome contains a single DNA molecule. Humans have 23 pairs of chromosomes, which are found inside the cell’s nucleus.

    Why does a DNA molecule consist of two strands? The primary function of DNA is to store and transmit genetic information. To accomplish this function DNA must have two properties. It must be chemically stable so as to reduce the possibility of damage. DNA must also be capable of copying the information it contains. The two-stranded structure of DNA gives it both of these properties. The nucleotide sequence contains the information found in DNA. The nucleotides connect the two strands through hydrogen bonds. Because each nucleotide has a unique complementary nucleotide, each strand contains all the information required to synthesize a new DNA molecule. The double-stranded structure also makes the molecule more stable.

    DNA is the information molecule. It stores instructions for making other large molecules, called proteins. These instructions are stored inside each of your cells, distributed among 46 long structures called chromosomes. These chromosomes are made up of thousands of shorter segments of DNA, called genes. Each gene stores the directions for making protein fragments, whole proteins, or multiple specific proteins.

    DNA is well-suited to perform this biological function because of its molecular structure, and because of the development of a series of high-performance enzymes that are fine-tuned to interact with this molecular structure in specific ways. The match between DNA structure and the activities of these enzymes is so effective and well-refined that DNA has become, over evolutionary time, the universal information-storage molecule for all forms of life. Nature has yet to find a better solution than DNA for storing, expressing, and passing along instructions for making proteins.

    Alternative DNA structures


    DNA structure: A-DNA, B-DNA, and Z-DNA forms

    DNA (Deoxyribonucleic Acid) exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although, only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution.

    The first published reports of A-DNA X-ray diffraction patterns—and also B-DNA—used analyses based on Patterson transforms that provided only a limited amount of structural information for oriented fibers of DNA. An alternative analysis was then proposed by Wilkins et al., in 1953, for the in vivo B-DNA X-ray diffraction scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double-helix.

    Although the B-DNA form is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in living cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.

    Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.