M.Tech U-I Atomic and molecular basics


Unit-I Atomic and Molecular Basics: The scope, The nanoscale systems, Defining nano dimensional materials, Size effects in nano materials, Application and technology development, General methods available for the synthesis of nano dimensional materials.

Particles and Bonds, Chemical bonds in Nanotechnology, The shapes of molecules, Additional aspects of bonding, Molecular geometry: VSEPR Model, Hybridization, Van der Waals interactions, Dipole–Dipole Interactions, Ionic Interactions, Metal bonds, Covalent bonds, Coordinative bonds, Hydrogen bridge bonds and polyvalent bonds.

Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers.

Physicist Richard Feynman, the father of nanotechnology.

Nanoscience and nanotechnology are the study and application of extremely small things and can be used across all the other science fields, such as chemistry, biology, physics, materials science, and engineering.

The ideas and concepts behind nanoscience and nanotechnology started with a talk entitled “There’s Plenty of Room at the Bottom” by physicist Richard Feynman at an American Physical Society meeting at the California Institute of Technology (CalTech) on December 29, 1959, long before the term nanotechnology was used. In his talk, Feynman described a process in which scientists would be able to manipulate and control individual atoms and molecules. Over a decade later, in his explorations of ultraprecision machining, Professor Norio Taniguchi coined the term nanotechnology. It wasn’t until 1981, with the development of the scanning tunneling microscope that could “see” individual atoms, that modern nanotechnology began.

Medieval stained glass windows are an example of how nanotechnology was used in the pre-modern era. (Courtesy: NanoBioNet)

It’s hard to imagine just how small nanotechnology is. One nanometer is a billionth of a meter, or 10⁻⁹ m. Here are a few illustrative examples:

  • There are 25,400,000 nanometers in an inch
  • A sheet of newspaper is about 100,000 nanometers thick
  • On a comparative scale, if a marble were a nanometer, then one meter would be the size of the Earth
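These conversions are easy to verify numerically. The short Python sketch below assumes a roughly 1 cm marble for the last analogy (the original text does not specify the marble's size):

```python
NM_PER_M = 1e9  # nanometers in one meter

# 1 inch = 2.54 cm = 0.0254 m
nm_per_inch = 0.0254 * NM_PER_M
print(f"{nm_per_inch:,.0f} nm per inch")      # 25,400,000

# A sheet of newspaper is roughly 0.1 mm thick
newspaper_nm = 0.1e-3 * NM_PER_M
print(f"{newspaper_nm:,.0f} nm thick")        # 100,000

# Marble-to-Earth analogy: the jump from 1 nm up to 1 m is a factor of 1e9.
# Applying the same factor to a ~1 cm marble (assumed size) gives ~10,000 km,
# which is roughly the scale of the Earth.
marble_m = 0.01
print(f"{(1e9 * marble_m) / 1000:,.0f} km")
```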

Nanoscience and nanotechnology involve the ability to see and to control individual atoms and molecules. Everything on Earth is made up of atoms—the food we eat, the clothes we wear, the buildings and houses we live in, and our own bodies.

But something as small as an atom is impossible to see with the naked eye. In fact, it’s impossible to see with the microscopes typically used in high school science classes. The microscopes needed to see things at the nanoscale were invented relatively recently—about 30 years ago.

Once scientists had the right tools, such as the scanning tunneling microscope (STM) and the atomic force microscope (AFM), the age of nanotechnology was born.

Although modern nanoscience and nanotechnology are quite new, nanoscale materials were used for centuries. Alternate-sized gold and silver particles created colors in the stained glass windows of medieval churches hundreds of years ago. The artists back then just didn’t know that the process they used to create these beautiful works of art actually led to changes in the composition of the materials they were working with.

Today’s scientists and engineers are finding a wide variety of ways to deliberately make materials at the nanoscale to take advantage of their enhanced properties such as higher strength, lighter weight, increased control of light spectrum, and greater chemical reactivity than their larger-scale counterparts.

What is Nanotechnology?

Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current work and concepts that are more advanced. In its original sense, ‘nanotechnology’ refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high-performance products.

With 15,342 atoms, this parallel-shaft speed reducer gear is one of the largest nanomechanical devices ever modeled in atomic detail.

The Meaning of Nanotechnology

When K. Eric Drexler popularized the word ‘nanotechnology’ in the 1980s, he was talking about building machines on the scale of molecules, a few nanometers wide—motors, robot arms, and even whole computers, far smaller than a cell. Drexler spent the next ten years describing and analyzing these incredible devices, and responding to accusations of science fiction. Meanwhile, mundane technology was developing the ability to build simple structures on a molecular scale. As nanotechnology became an accepted concept, the meaning of the word shifted to encompass the simpler kinds of nanometer-scale technology. The U.S. National Nanotechnology Initiative was created to fund this kind of nanotech: their definition includes anything smaller than 100 nanometers with novel properties.

Much of the work being done today that carries the name ‘nanotechnology’ is not nanotechnology in the original meaning of the word. Nanotechnology, in its traditional sense, means building things from the bottom up, with atomic precision. This theoretical capability was envisioned as early as 1959 by the renowned physicist Richard Feynman.

I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously. . . The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big. — Richard Feynman, Nobel Prize winner in physics

Based on Feynman’s vision of miniature factories using nanomachines to build complex products, advanced nanotechnology (sometimes referred to as molecular manufacturing) will make use of positionally controlled mechanochemistry guided by molecular machine systems. Formulating a roadmap for development of this kind of nanotechnology is now an objective of a broadly based technology roadmap project led by Battelle (the manager of several U.S. National Laboratories) and the Foresight Nanotech Institute.

Shortly after this envisioned molecular machinery is created, it will result in a manufacturing revolution, probably causing severe disruption. It also has serious economic, social, environmental, and military implications.

Four Generations

Mihail (Mike) Roco of the U.S. National Nanotechnology Initiative has described four generations of nanotechnology development (see chart below). The current era, as Roco depicts it, is that of passive nanostructures, materials designed to perform one task. The second phase, which we are just entering, introduces active nanostructures for multitasking; for example, actuators, drug delivery devices, and sensors. The third generation is expected to begin emerging around 2010 and will feature nanosystems with thousands of interacting components. A few years after that, the first integrated nanosystems, functioning (according to Roco) much like a mammalian cell with hierarchical systems within systems, are expected to be developed.

Some experts may still insist that nanotechnology can refer to measurement or visualization at the scale of 1-100 nanometers, but a consensus seems to be forming around the idea (put forward by the NNI’s Mike Roco) that control and restructuring of matter at the nanoscale is a necessary element. CRN’s definition is a bit more precise than that, but as work progresses through the four generations of nanotechnology leading up to molecular nanosystems, which will include molecular manufacturing, we think it will become increasingly obvious that “engineering of functional systems at the molecular scale” is what nanotech is really all about.

Conflicting Definitions

Unfortunately, conflicting definitions of nanotechnology and blurry distinctions between significantly different fields have complicated the effort to understand the differences and develop sensible, effective policy.

The risks of today’s nanoscale technologies (nanoparticle toxicity, etc.) cannot be treated the same as the risks of longer-term molecular manufacturing (economic disruption, unstable arms race, etc.). It is a mistake to put them together in one basket for policy consideration—each is important to address, but they offer different problems and will require different solutions. As used today, the term nanotechnology usually refers to a broad collection of mostly disconnected fields. Essentially, anything sufficiently small and interesting can be called nanotechnology. Much of it is harmless. For the rest, much of the harm is of familiar and limited quality. But as we will see, molecular manufacturing will bring unfamiliar risks and new classes of problems.

General-Purpose Technology

Nanotechnology is sometimes referred to as a general-purpose technology. That’s because in its advanced form it will have significant impact on almost all industries and all areas of society. It will offer better built, longer lasting, cleaner, safer, and smarter products for the home, for communications, for medicine, for transportation, for agriculture, and for industry in general.

Imagine a medical device that travels through the human body to seek out and destroy small clusters of cancerous cells before they can spread. Or a box no larger than a sugar cube that contains the entire contents of the Library of Congress. Or materials much lighter than steel that possess ten times as much strength. — U.S. National Science Foundation

Dual-Use Technology

Like electricity or computers before it, nanotech will offer greatly improved efficiency in almost every facet of life. But as a general-purpose technology, it will be dual-use, meaning it will have many commercial uses and it also will have many military uses—making far more powerful weapons and tools of surveillance. Thus it represents not only wonderful benefits for humanity, but also grave risks.

A key understanding of nanotechnology is that it offers not just better products, but a vastly improved manufacturing process. A computer can make copies of data files—essentially as many copies as you want at little or no cost. It may be only a matter of time until the building of products becomes as cheap as the copying of files. That’s the real meaning of nanotechnology, and why it is sometimes seen as “the next industrial revolution.”

My own judgment is that the nanotechnology revolution has the potential to change America on a scale equal to, if not greater than, the computer revolution. — U.S. Senator Ron Wyden (D-Ore.)

The power of nanotechnology can be encapsulated in an apparently simple device called a personal nanofactory that may sit on your countertop or desktop. Packed with miniature chemical processors, computing, and robotics, it will produce a wide range of items quickly, cleanly, and inexpensively, building products directly from blueprints.

Exponential Proliferation

Nanotechnology not only will allow making many high-quality products at very low cost, but it will allow making new nanofactories at the same low cost and at the same rapid speed. This unique (outside of biology, that is) ability to reproduce its own means of production is why nanotech is said to be an exponential technology. It represents a manufacturing system that will be able to make more manufacturing systems—factories that can build factories—rapidly, cheaply, and cleanly. The means of production will be able to reproduce exponentially, so in just a few weeks a few nanofactories conceivably could become billions. It is a revolutionary, transformative, powerful, and potentially very dangerous—or beneficial—technology.
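The claim that “a few nanofactories conceivably could become billions” in a few weeks is simple doubling arithmetic. The sketch below assumes, purely for illustration, that each nanofactory can build one copy of itself per day:

```python
def days_to_reach(start, target):
    """Days until a population doubling once per day reaches `target`."""
    count, days = start, 0
    while count < target:
        count *= 2   # each existing factory builds one copy per day (assumed)
        days += 1
    return days

# Starting from 4 factories and growing to 1 billion:
print(days_to_reach(4, 1_000_000_000))  # 28 days -- about four weeks
```

Since 2^30 is about 10^9, any plausible replication time measured in days puts the “few weeks” figure in the right order of magnitude.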

How soon will all this come about? Conservative estimates usually say 20 to 30 years from now, or even much later than that. However, CRN is concerned that it may occur sooner, quite possibly within the next decade. This is because of the rapid progress being made in enabling technologies, such as optics, nanolithography, mechanochemistry and 3D prototyping. If it does arrive that soon, we may not be adequately prepared, and the consequences could be severe.

We believe it’s not too early to begin asking some tough questions and facing the issues:

Who will own the technology?
Will it be heavily restricted, or widely available?
What will it do to the gap between rich and poor?
How can dangerous weapons be controlled, and perilous arms races be prevented?

Many of these questions were first raised over a decade ago, and have not yet been answered. If the questions are not answered with deliberation, answers will evolve independently and will take us by surprise; the surprise is likely to be unpleasant.

It is difficult to say for sure how soon this technology will mature, partly because it’s possible (especially in countries that do not have open societies) that clandestine military or industrial development programs have been going on for years without our knowledge.

We cannot say with certainty that full-scale nanotechnology will not be developed within the next ten years, or even five years. It may take longer than that, but prudence—and possibly our survival—demands that we prepare now for the earliest plausible development scenario.

Nanoscale

Nanoscale particles are not new in either nature or science. However, the recent leaps in areas such as microscopy have given scientists new tools to understand and take advantage of phenomena that occur naturally when matter is organized at the nanoscale. In essence, these phenomena are based on “quantum effects” and other simple physical effects such as expanded surface area (more on these below). In addition, the fact that a majority of biological processes occur at the nanoscale gives scientists models and templates to imagine and construct new processes that can enhance their work in medicine, imaging, computing, printing, chemical catalysis, materials synthesis, and many other fields. Nanotechnology is not simply working at ever smaller dimensions; rather, working at the nanoscale enables scientists to utilize the unique physical, chemical, mechanical, and optical properties of materials that naturally occur at that scale.

Nanowires
Computer simulation of electron motions within a nanowire that has a diameter in the nanoscale range.

When particle sizes of solid matter in the visible scale are compared to what can be seen in a regular optical microscope, there is little difference in the properties of the particles. But when particles are created with dimensions of about 1–100 nanometers (where the particles can be “seen” only with powerful specialized microscopes), the materials’ properties change significantly from those at larger scales. This is the size scale where so-called quantum effects rule the behavior and properties of particles. Properties of materials are size-dependent in this scale range. Thus, when particle size is made to be nanoscale, properties such as melting point, fluorescence, electrical conductivity, magnetic permeability, and chemical reactivity change as a function of the size of the particle.

Nanoscale gold illustrates the unique properties that occur at the nanoscale. Nanoscale gold particles are not the yellow color with which we are familiar; nanoscale gold can appear red or purple. At the nanoscale, the motion of the gold’s electrons is confined. Because this movement is restricted, gold nanoparticles react differently with light compared to larger-scale gold particles. Their size and optical properties can be put to practical use: nanoscale gold particles selectively accumulate in tumors, where they can enable both precise imaging and targeted laser destruction of the tumor by means that avoid harming healthy cells.

A fascinating and powerful result of the quantum effects of the nanoscale is the concept of “tunability” of properties. That is, by changing the size of the particle, a scientist can literally fine-tune a material property of interest (e.g., changing fluorescence color; in turn, the fluorescence color of a particle can be used to identify the particle, and various materials can be “labeled” with fluorescent markers for various purposes). Another potent quantum effect of the nanoscale is known as “tunneling,” which is a phenomenon that enables the scanning tunneling microscope and flash memory for computing.

Over millennia, nature has perfected the art of biology at the nanoscale. Many of the inner workings of cells naturally occur at the nanoscale. For example, hemoglobin, the protein that carries oxygen through the body, is 5.5 nanometers in diameter. A strand of DNA, one of the building blocks of human life, is only about 2 nanometers in diameter.

Drawing on the natural nanoscale of biology, many medical researchers are working on designing tools, treatments, and therapies that are more precise and personalized than conventional ones—and that can be applied earlier in the course of a disease and lead to fewer adverse side-effects. One medical example of nanotechnology is the bio-barcode assay, a relatively low-cost method of detecting disease-specific biomarkers in the blood, even when there are very few of them in a sample. The basic process, which attaches “recognition” particles and DNA “amplifiers” to gold nanoparticles, was originally demonstrated at Northwestern University for a prostate cancer biomarker following prostatectomy. The bio-barcode assay has proven to be considerably more sensitive than conventional assays for the same target biomarkers, and it can be adapted to detect almost any molecular target.i

Growing understanding of nanoscale biomolecular structures is impacting fields other than medicine. Some scientists are looking at ways to use nanoscale biological principles of molecular self-assembly, self-organization, and quantum mechanics to create novel computing platforms. Other researchers have discovered that in photosynthesis, the energy that plants harvest from sunlight is nearly instantly transferred to plant “reaction centers” by quantum mechanical processes with nearly 100% efficiency (little energy wasted as heat). They are investigating photosynthesis as a model for “green energy” nanosystems for inexpensive production and storage of nonpolluting solar power.ii

Nanoscale materials have far larger surface areas than similar masses of larger-scale materials. As surface area per mass of a material increases, a greater amount of the material can come into contact with surrounding materials, thus affecting reactivity.

A simple thought experiment shows why nanoparticles have phenomenally high surface areas. A solid cube of a material 1 cm on a side has 6 square centimeters of surface area, about equal to one side of half a stick of gum. But if that volume of 1 cubic centimeter were filled with cubes 1 mm on a side, that would be 1,000 millimeter-sized cubes (10 x 10 x 10), each one of which has a surface area of 6 square millimeters, for a total surface area of 60 square centimeters—about the same as one side of two-thirds of a 3” x 5” note card. When the 1 cubic centimeter is filled with micrometer-sized cubes—a trillion (10¹²) of them, each with a surface area of 6 square micrometers—the total surface area amounts to 6 square meters, or about the area of the main bathroom in an average house. And when that single cubic centimeter of volume is filled with 1-nanometer-sized cubes—10²¹ of them, each with an area of 6 square nanometers—their total surface area comes to 6,000 square meters. In other words, a single cubic centimeter of cubic nanoparticles has a total surface area one-third larger than a football field!
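The cube-subdivision arithmetic above reduces to one formula: total area = 6 × volume / side length. A short Python sketch (the function name is ours) reproduces each step of the thought experiment:

```python
def total_surface_area_m2(side_m, volume_m3=1e-6):
    """Total surface area when `volume_m3` (default 1 cm^3) is filled
    with solid cubes of edge length `side_m`."""
    n_cubes = volume_m3 / side_m**3      # how many cubes fit in the volume
    return n_cubes * 6 * side_m**2       # simplifies to 6 * volume / side

for side, label in [(1e-2, "1 cm"), (1e-3, "1 mm"), (1e-6, "1 um"), (1e-9, "1 nm")]:
    print(f"{label} cubes: {total_surface_area_m2(side):g} m^2")
# 1 cm cubes: 0.0006 m^2  (6 cm^2)
# 1 mm cubes: 0.006 m^2   (60 cm^2)
# 1 um cubes: 6 m^2
# 1 nm cubes: 6000 m^2
```

Halving the cube size doubles the total area, which is why the nanometer case ends up a million times larger than the centimeter case.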

Nanocubes
Illustration demonstrating the effect of the increased surface area provided by nanostructured materials

One benefit of greater surface area—and improved reactivity—in nanostructured materials is that they have helped create better catalysts. As a result, catalysis by engineered nanostructured materials already impacts about one-third of the huge U.S.—and global—catalyst markets, affecting billions of dollars of revenue in the oil and chemical industries.iii An everyday example of catalysis is the catalytic converter in a car, which reduces the toxicity of the engine’s fumes. Nanoengineered batteries, fuel cells, and catalysts can potentially use enhanced reactivity at the nanoscale to produce cleaner, safer, and more affordable modes of producing and storing energy.

Large surface area also makes nanostructured membranes and materials ideal candidates for water treatment and desalination (e.g., see “Self-Assembled, Nanostructured Carbon for Energy Storage and Water Treatment” in our database, NNI Accomplishments Archive), among other uses. It also helps support “functionalization” of nanoscale material surfaces (adding particles for specific purposes), for applications ranging from drug delivery to clothing insulation.

Synthesis of Nanomaterials

Synthesis methods are classified as bottom-up manufacturing, which involves building up structures from their atomic or molecular constituents, as against the top-down method, which involves making smaller and smaller structures by etching from the bulk material, as exemplified by the semiconductor industry.

Gas Condensation

Gas condensation was the first technique used to synthesize nanocrystalline metals and alloys. In this technique, a metallic or inorganic material is vaporized using thermal evaporation sources, such as Joule-heated refractory crucibles or electron beam evaporation devices, in an atmosphere of 1–50 mbar. In gas evaporation, a high residual gas pressure causes the formation of ultrafine particles (below 100 nm) by gas-phase collisions: the particles form when evaporated atoms collide with residual gas molecules. Gas pressures of roughly 10 Torr or more are required. Vaporization sources may be resistive heating, high-energy electron beams, low-energy electron beams, or induction heating. Clusters form in the vicinity of the source by homogeneous nucleation in the gas phase and grow by incorporation of atoms from the gas phase. A typical apparatus comprises an ultra-high-vacuum (UHV) system fitted with an evaporation source, a cluster collection device (a liquid-nitrogen-filled cold finger with a scraper assembly), and a compaction device. During heating, atoms condense in the supersaturation zone close to the Joule heating device, and the nanoparticles are removed by the scraper in the form of a metallic plate. Evaporation is carried out from W, Ta, or Mo refractory metal crucibles; if the metal reacts with the crucible, the electron beam evaporation technique is used instead. The method is extremely slow and suffers from limitations such as source–precursor incompatibility, restricted temperature ranges, and dissimilar evaporation rates in an alloy. Alternative sources have therefore been developed over the years. For instance, Fe can be evaporated into an inert gas atmosphere (He); through collisions with the gas atoms, the evaporated Fe atoms lose kinetic energy and condense as small crystallites, which accumulate as a loose powder. Sputtering or laser evaporation may be used instead of thermal evaporation.
Sputtering is a non-thermal process in which surface atoms are physically ejected from the surface by momentum transfer from an energetic bombarding species of atomic or molecular size. Typical sputtering uses a glow discharge or an ion beam. In the interaction events that occur at and near the target surface, magnetron sputtering has an advantage over diode and triode sputtering: in magnetron sputtering, most of the plasma is confined to the near-target region. Other energy sources that have been used successfully to produce clusters or ultrafine particles are electron beam heating and plasma methods. Sputtering has been used in low-pressure environments to produce a variety of clusters, including Ag, Fe, and Si.

Vacuum Deposition and Vaporization

Before proceeding to the other methods, it is important to understand the terms vacuum deposition and vaporization (or vacuum evaporation). In the vacuum deposition process, elements, alloys, or compounds are vaporized and deposited in a vacuum. The vaporization source is one that vaporizes materials by thermal processes. The process is carried out at pressures of less than 0.1 Pa (1 mTorr), typically at vacuum levels of 10 to 0.1 mPa. The substrate temperature ranges from ambient to 500 °C. The saturation or equilibrium vapor pressure of a material is defined as the vapor pressure of the material in equilibrium with its solid or liquid surface. For vacuum deposition, a reasonable deposition rate can be obtained only if the vaporization rate is fairly high; a useful deposition rate is obtained at a vapor pressure of about 1.3 Pa (0.01 Torr).

Vapor-phase nucleation can occur in a dense vapor cloud by multibody collisions. The atoms are passed through a gas to provide the collisions and cooling necessary for nucleation. The resulting particles are in the range of 1 to 100 nm and are called ultrafine particles or clusters. The advantages of the vacuum deposition process are high deposition rates and economy; however, the deposition of many compounds is difficult. Nanoparticles produced from a supersaturated vapor are usually larger than clusters.

Chemical Vapor Deposition (CVD) and Chemical Vapor Condensation (CVC)

CVD is a well-known process in which a solid is deposited on a heated surface via a chemical reaction from the vapor or gas phase. The CVD reaction requires activation energy to proceed, and this energy can be provided by several methods. In thermal CVD, the reaction is activated by high temperature, above 900 °C; a typical apparatus comprises a gas supply system, a deposition chamber, and an exhaust system. In plasma CVD, the reaction is activated by a plasma at temperatures between 300 and 700 °C. In laser CVD, pyrolysis occurs when laser thermal energy heats an absorbing substrate. In photo-laser CVD, the chemical reaction is induced by ultraviolet radiation with sufficient photon energy to break the chemical bonds in the reactant molecules; here the reaction is photon-activated and deposition occurs at room temperature. Nanocomposite powders have been prepared by CVD; for example, SiC/Si3N4 composite powder was prepared using SiH4, CH4, WF6, and H2 as source gases at 1400 °C.

Another process, chemical vapor condensation (CVC), was developed in Germany in 1994. It involves pyrolysis of vapors of metal-organic precursors in a reduced-pressure atmosphere; particles of ZrO2, Y2O3, and nanowhiskers have been produced by the CVC method. A metal-organic precursor is introduced into the hot zone of the reactor using a mass flow controller; for instance, hexamethyldisilazane, (CH3)3SiNHSi(CH3)3, was used to produce SiCxNyOz powder by the CVC technique. The reactor also allows the synthesis of mixtures of nanoparticles of two phases, or doped nanoparticles, by supplying two precursors at the front end of the reactor, and of coated nanoparticles (n-ZrO2 coated with n-Al2O3) by supplying a second precursor in a second stage of the reactor. The process yields quantities in excess of 20 g/hr; the yield can be further improved by enlarging the diameter of the hot-wall reactor and the mass flow through the reactor. Typical nanocrystalline materials that have been synthesized are shown in Table 1.

Table 1. Typical nanocrystalline materials synthesized by the CVC method

Precursor             Product     Phase (as prepared)   Avg. particle size (nm)   Surface area (m2/g)
(CH3)3SiNHSi(CH3)3    SiCxNyOz    Amorphous             4                         377
Si(CH3)4              SiC         β-phase               9                         201
Al[2-OC4H9]3          Al2O3       Amorphous             3.5                       449
Ti[i-OC3H7]4          TiO2        Anatase               8                         193
Si[OC2H5]4            SiO2        Amorphous             6                         432
Zr[3-OC4H9]4          ZrO2        Monoclinic            7                         134

Mechanical Attrition

Unlike many of the methods mentioned above, mechanical attrition produces its nanostructures not by cluster assembly but by the structural decomposition of coarser-grained structures as a result of plastic deformation. Elemental powders of Al and β-SiC have been prepared in a high-energy ball mill, and more recently the ceramic/ceramic nanocomposite WC-14% MgO has been fabricated. The ball milling and rod milling techniques belong to the mechanical alloying process, which has received much attention as a powerful tool for the fabrication of several advanced materials. Mechanical alloying is a unique process that can be carried out at room temperature. It can be performed in both high-energy mills (centrifugal and vibratory types) and low-energy tumbling mills.

Examples of Mills Used in Mechanical Attrition

These mills include:

  • Attrition Ball Mill
  • Planetary Ball Mill
  • Vibrating Ball Mill
  • Low Energy Tumbling Mill
  • High Energy Ball Mill

Attrition Ball Mill

The milling procedure takes place by the stirring action of an agitator that has a vertical rotating central shaft with horizontal arms (impellers). Rotation speeds of up to about 500 rpm are used, and the milling temperature can be kept under closer control than in many other mill types.

Planetary Ball Mill

Centrifugal forces are generated by the rotation of the supporting disc and the autonomous turning of the vial. The milling media and charge powder alternately roll on the inner wall of the vial and are thrown off across the bowl at high speed (up to 360 rpm).
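To get a feel for the forces involved, the centrifugal acceleration at the quoted 360 rpm can be estimated. The disc radius is not given in the text, so the 0.15 m used below is purely an illustrative assumption:

```python
import math

rpm = 360                         # rotation speed quoted in the text
radius_m = 0.15                   # hypothetical supporting-disc radius (assumed)

omega = 2 * math.pi * rpm / 60    # angular velocity in rad/s
a = omega**2 * radius_m           # centrifugal acceleration in m/s^2

print(f"{a:.0f} m/s^2, about {a / 9.81:.0f} g")
```

At these assumed dimensions the milling media experience accelerations on the order of tens of g, which is what drives the repeated rolling and impact of the balls against the vial wall.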

Vibrating Ball Mill

The vibrating ball mill is used mainly for the production of amorphous alloys. The charges of powder and milling tools are agitated in the perpendicular direction at very high speed (1200 rpm).

Low Energy Tumbling Mill

Low-energy tumbling mills have been used for the successful preparation of mechanically alloyed powders. They are simple to operate, with low operating costs. A laboratory-scale rod mill was used to prepare homogeneous amorphous Al30Ta70 powder using stainless steel cylindrical rods. Single-phase amorphous powder of AlxTm100-x with low iron concentration can be formed by this technique.

High Energy Ball Mill

High-energy ball milling is an established technology; however, it has been considered dirty because of contamination problems with iron from the milling media. The use of tungsten carbide components and an inert atmosphere and/or high-vacuum processing has, however, reduced impurity levels to within acceptable limits. Common drawbacks include low surface area, a highly polydisperse size distribution, and a partially amorphous state of the powder. Such powders are highly reactive with oxygen, hydrogen, and nitrogen. Mechanical alloying nevertheless enables the fabrication of alloys that cannot be produced by conventional techniques: an Al-Ta alloy cannot be made by any conventional process because of the large difference in the melting points of Al (933 K) and Ta (3293 K), but it can be fabricated by mechanical alloying using the ball milling process.

Other Processes

Several other processes, such as hydrodynamic cavitation, microemulsion, and sonochemical processing, have also been used. In the cavitation process, nanoparticles are generated through the creation and release of gas bubbles inside a sol-gel solution. The sol-gel is mixed by pressurizing it in a supercritical drying chamber and exposing it to cavitational disturbance and high-temperature heating. The erupting hydrodynamic bubbles cause the nucleation, growth, and quenching of the nanoparticles. Particle size can be controlled by adjusting the pressure and the solution retention time.

Sol-Gel Techniques

In addition to the techniques mentioned above, sol-gel processing has also been used extensively. Colloidal particles are much larger than normal molecules or nanoparticles; upon mixing with a liquid, a colloid appears bulky, whereas a nanosized sol always looks clear. The sol-gel process involves the evolution of networks through the formation of a colloidal suspension (sol) and gelation to form a network in a continuous liquid phase (gel). The precursors for synthesizing these colloids consist of metal alkoxides and alkoxysilanes; the most widely used are tetramethoxysilane (TMOS) and tetraethoxysilane (TEOS), which form silica gels. Alkoxides are metal-organic precursors for silica, alumina, titania, zirconia, and many other oxides, and since they are immiscible with water, a mutual solvent, usually an alcohol, is used. The process starts with a homogeneous solution of one or more selected alkoxides, and a catalyst is used to start the reaction and control the pH. Sol-gel formation occurs in four stages:

  • Hydrolysis
  • Condensation
  • Growth of particles
  • Agglomeration of particles

Hydrolysis

During hydrolysis, addition of water results in the replacement of [OR] groups with [OH] groups. Hydrolysis occurs by attack of the oxygen contained in the water on the silicon atom, and it can be accelerated by adding a catalyst such as HCl or NH3. Hydrolysis continues until all alkoxy groups are replaced by hydroxyl groups. Subsequent condensation involving the silanol groups (Si-OH) produces siloxane bonds (Si-O-Si) plus alcohol and water.

Condensation

Polymerization to form siloxane bonds occurs by either a water-producing or an alcohol-producing condensation reaction. The condensation products include monomers, dimers, cyclic tetramers and higher-order rings. The rate of hydrolysis is affected by pH, reagent concentration and the H2O/Si molar ratio (in the case of silica gels); ageing and drying conditions are also important. By controlling these factors, it is possible to vary the structure and properties of the sol-gel-derived inorganic networks.

Growth and Agglomeration

As the number of siloxane bonds increases, the molecules aggregate in solution and form a network; a gel is formed upon drying, as the water and alcohol are driven off and the network shrinks. At pH values greater than 7 and H2O/Si ratios ranging from 7 to 5, spherical nanoparticles are formed. Condensation to form siloxane bonds proceeds by either the water-producing or the alcohol-producing reaction:

2 (OR)3Si-OH → (OR)3Si-O-Si(OR)3 + H2O

or

(OR)3Si-OH + RO-Si(OR)3 → (OR)3Si-O-Si(OR)3 + ROH

Above pH 7, silica is more soluble and the silica particles grow in size. Growth stops when the difference in solubility between the smallest and largest particles becomes indistinguishable. Larger particles are formed at higher temperatures. Zirconia and yttria gels can be produced in the same way.

Despite improvements in both chemical and physical methods of synthesis, some problems and limitations remain. The laser vaporization technique offers several advantages over other heating techniques. A high-energy pulsed laser with an intensity flux of 10^6-10^7 W/cm2 is focused on the target material. The resulting plasma causes intense vaporization at high temperature (~10,000 °C). Typical yields are 10^14-10^15 atoms from a surface area of 0.01 cm2 in a single 10^-8 s pulse. Thus a high density of vapor is produced in a very short time (10^-8 s), which is useful for direct deposition of particles.
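As a quick sanity check on the yield figures quoted above, the arithmetic can be written out explicitly (illustrative only; the variable names are our own):

```python
# Back-of-envelope check of the laser-vaporization yield quoted above.
atoms_per_pulse = 1e14      # lower bound of the quoted yield per pulse
area_cm2 = 0.01             # irradiated surface area
pulse_s = 1e-8              # pulse duration

# Instantaneous atom flux leaving the target surface:
flux = atoms_per_pulse / (area_cm2 * pulse_s)   # atoms per cm^2 per second
# ~1e24 atoms cm^-2 s^-1: an extremely dense vapor produced in a very
# short time, which is what makes direct particle deposition possible.
```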

Electrodeposition

Nanostructured materials can also be produced by electrodeposition. The resulting films are mechanically strong, uniform and dense. Substantial progress has been made in nanostructured coatings applied by either PVD or CVD. Many other non-conventional processes, such as hypersonic plasma particle deposition (HPPD), have been used to synthesize and deposit nanoparticles. The potential of nanomaterial synthesis and its applications remains largely unexplored, and numerous challenges remain to be overcome; a better understanding of synthesis would help in designing better materials. It has been shown that certain properties of nanostructured deposits, such as hardness, wear resistance and electrical resistivity, are strongly affected by grain size. The combination of increased hardness and wear resistance results in superior coating performance.

CHEMICAL BONDS IN NANOTECHNOLOGY

The properties of nanoparticles can be customized for use in a particular nanotechnology application by bonding molecules to the nanoparticles in a process called functionalization. In addition, the capability to build nanocomposites, materials formed by integrating nanoparticles into the structure of a bulk material, makes it possible to create new materials that offer a range of new possibilities.

Fundamentals of nanotech functionalization

When an atom is attached to another atom, the attachment is called a chemical bond. Functionalization is a process that involves attaching atoms or molecules to the surface of a nanoparticle with a chemical bond to change the properties of that nanoparticle.

The bond used in functionalization can be either a covalent bond or a van der Waals bond. In covalent bonding, electrons are shared between the atoms: an atom on the nanoparticle shares electrons with an atom on the molecule, creating a very strong bond.

In a van der Waals bond, electrostatic attraction occurs (negative and positive charges on the molecules and nanoparticles attract each other). A positively charged region of the molecule or nanoparticle and a negatively charged region of the molecule or nanoparticle form a bond. The van der Waals bond is not as strong as a covalent bond, but it also does not weaken the structures being bonded, as covalent bonds do.

Functionalizing a carbon nanotube by covalently bonding molecules to it.


For example, if you are bonding molecules to carbon nanotubes, a covalent bond might weaken the nanotube while a van der Waals bond would not. Therefore, although covalent bonds are used more often for functionalization, van der Waals bonding is sometimes useful. One such use is functionalizing a carbon nanotube by bonding a molecule to the nanotube using van der Waals force.

Functionalizing a carbon nanotube by attaching a molecule to it using van der Waals bonding.


Functionalization is used to prepare nanoparticles for many uses, for example:

  • Making sensor elements that can be used to detect very low levels of chemical or biological molecules or for the diagnosis of a blood sample.

  • Bonding nanoparticles to fibers or polymers to form lightweight, high-strength composites.

  • Making nanoparticles that can bond to biological molecules present on the surface of diseased cells to produce targeted drug delivery agents.

  • Making nanoparticles that are attracted to prepared attachment sites, such as surfaces containing certain types of atoms (sulfur is attracted to gold, for example) for self-aligned assembly.

Make nanocomposites from functionalized nanoparticles

When you include functionalized nanoparticles in a composite material, those nanoparticles can form covalent bonds with the primary material used in the composite. For example, functionalized nanotubes can bond with polymers to produce a stronger plastic. In a carbon fiber composite, functionalized nanotubes bond with the carbon fibers to create a stronger structure.

Functionalized nanotubes forming a strong bond with carbon fibers.


Nanocomposites are being used in several applications:

  • A variety of nanoparticles such as buckyballs, nanotubes, and silica nanoparticles are being used with various fibers to form nanocomposites used in sports equipment such as tennis racquets to improve their strength or stiffness while keeping them lightweight.

  • Nanocomposites using carbon nanotubes and polymers are being developed to make lighter-weight spacecraft.

  • Nanocomposites using carbon nanotubes in an epoxy are being used to make windmill blades longer, enabling the windmill to generate more electricity.

  • Nanoparticles of clay are used in plastic composites to reduce the leakage of carbon dioxide from plastic bottles, improving the shelf life of carbonated beverages.

  • Composites of nanoparticles and polymers are being developed to produce lightweight, strong plastics to replace metals in cars.

    VSEPR MODEL

  • Predicting the Shapes of Molecules

    There is no direct relationship between the formula of a compound and the shape of its molecules. The shapes of these molecules can be predicted from their Lewis structures, however, using a model known as the valence-shell electron-pair repulsion (VSEPR) theory.

    The VSEPR theory assumes that each atom in a molecule will achieve a geometry that minimizes the repulsion between electrons in the valence shell of that atom. The five compounds shown in the figure below can be used to demonstrate how the VSEPR theory can be applied to simple molecules.

    Table of Geometries

    There are only two places in the valence shell of the central atom in BeF2 where electrons can be found. Repulsion between these pairs of electrons can be minimized by arranging them so that they point in opposite directions. Thus, the VSEPR theory predicts that BeF2 should be a linear molecule, with a 180° angle between the two Be-F bonds.

    Structure

    There are three places on the central atom in boron trifluoride (BF3) where valence electrons can be found. Repulsion between these electrons can be minimized by arranging them toward the corners of an equilateral triangle. The VSEPR theory therefore predicts a trigonal planar geometry for the BF3 molecule, with an F-B-F bond angle of 120°.

    Structure

    BeF2 and BF3 are both planar molecules, in which all the atoms lie in the same plane. If we placed the same restriction on methane (CH4), we would get a square-planar geometry in which the H-C-H bond angle is 90°. If we let this system expand into three dimensions, however, we end up with a tetrahedral molecule in which the H-C-H bond angle is 109°28′.

    Structure

    Repulsion between the five pairs of valence electrons on the phosphorus atom in PF5 can be minimized by distributing these electrons toward the corners of a trigonal bipyramid. Three of the positions in a trigonal bipyramid are labeled equatorial because they lie along the equator of the molecule. The other two are axial because they lie along an axis perpendicular to the equatorial plane. The angle between any two equatorial positions is 120°, while the angle between an axial and an equatorial position is 90°.

    Structure

    There are six places on the central atom in SF6 where valence electrons can be found. The repulsion between these electrons can be minimized by distributing them toward the corners of an octahedron. The term octahedron literally means “eight sides,” but it is the six corners, or vertices, that interest us. To imagine the geometry of an SF6 molecule, locate fluorine atoms on opposite sides of the sulfur atom along the X, Y, and Z axes of an XYZ coordinate system.

    Structure
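The five geometries above can be summarized in a small lookup table keyed by the number of electron domains on the central atom. This is a minimal illustrative sketch; the table and function names are our own, not from any chemistry library:

```python
# Minimal sketch of the VSEPR electron-domain geometries discussed above.
# Keys: number of places (electron domains) around the central atom.
VSEPR_GEOMETRY = {
    2: ("linear",               [180.0]),        # BeF2
    3: ("trigonal planar",      [120.0]),        # BF3
    4: ("tetrahedral",          [109.47]),       # CH4 (109 deg 28')
    5: ("trigonal bipyramidal", [90.0, 120.0]),  # PF5
    6: ("octahedral",           [90.0]),         # SF6
}

def predict_geometry(domains: int) -> str:
    """Name of the geometry that minimizes electron-pair repulsion."""
    name, _angles = VSEPR_GEOMETRY[domains]
    return name
```

For example, `predict_geometry(5)` returns "trigonal bipyramidal", matching PF5 above.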


    Incorporating Double and Triple Bonds Into the VSEPR Theory

    Compounds that contain double and triple bonds raise an important point: The geometry around an atom is determined by the number of places in the valence shell of an atom where electrons can be found, not the number of pairs of valence electrons. Consider the Lewis structures of carbon dioxide (CO2) and the carbonate (CO32-) ion, for example.

    Structures

    There are four pairs of bonding electrons on the carbon atom in CO2, but only two places where these electrons can be found. (There are electrons in the C=O double bond on the left and electrons in the C=O double bond on the right.) The force of repulsion between these electrons is minimized when the two C=O double bonds are placed on opposite sides of the carbon atom. The VSEPR theory therefore predicts that CO2 will be a linear molecule, just like BeF2, with a bond angle of 180°.

    The Lewis structure of the carbonate ion also suggests a total of four pairs of valence electrons on the central atom. But these electrons are concentrated in three places: The two C-O single bonds and the C=O double bond. Repulsions between these electrons are minimized when the three oxygen atoms are arranged toward the corners of an equilateral triangle. The CO32- ion should therefore have a trigonal-planar geometry, just like BF3, with a 120° bond angle.


    The Role of Nonbonding Electrons in the VSEPR Theory

    The valence electrons on the central atom in both NH3 and H2O should be distributed toward the corners of a tetrahedron, as shown in the figure below. Our goal, however, isn’t predicting the distribution of valence electrons. It is to use this distribution of electrons to predict the shape of the molecule. Until now, the two have been the same. Once we include nonbonding electrons, that is no longer true.

    Diagram

    The VSEPR theory predicts that the valence electrons on the central atoms in ammonia and water will point toward the corners of a tetrahedron. Because we can’t locate the nonbonding electrons with any precision, this prediction can’t be tested directly. But the results of the VSEPR theory can be used to predict the positions of the nuclei in these molecules, which can be tested experimentally. If we focus on the positions of the nuclei in ammonia, we predict that the NH3 molecule should have a shape best described as trigonal pyramidal, with the nitrogen at the top of the pyramid. Water, on the other hand, should have a shape that can be described as bent, or angular. Both of these predictions have been shown to be correct, which reinforces our faith in the VSEPR theory.

    When we extend the VSEPR theory to molecules in which the electrons are distributed toward the corners of a trigonal bipyramid, we run into the question of whether nonbonding electrons should be placed in equatorial or axial positions. Experimentally we find that nonbonding electrons usually occupy equatorial positions in a trigonal bipyramid.

    To understand why, we have to recognize that nonbonding electrons take up more space than bonding electrons. Nonbonding electrons need to be close to only one nucleus, and there is a considerable amount of space in which nonbonding electrons can reside and still be near the nucleus of the atom. Bonding electrons, however, must be simultaneously close to two nuclei, and only a small region of space between the nuclei satisfies this restriction.

    Because they occupy more space, the force of repulsion between pairs of nonbonding electrons is relatively large. The force of repulsion between a pair of nonbonding electrons and a pair of bonding electrons is somewhat smaller, and the repulsion between pairs of bonding electrons is even smaller.

    The figure below can help us understand why nonbonding electrons are placed in equatorial positions in a trigonal bipyramid.

    Diagram

    If the nonbonding electrons in SF4 are placed in an axial position, they will be relatively close (90°) to three pairs of bonding electrons. But if the nonbonding electrons are placed in an equatorial position, they will be 90° away from only two pairs of bonding electrons. As a result, the repulsion between nonbonding and bonding electrons is minimized if the nonbonding electrons are placed in an equatorial position in SF4.

    The results of applying the VSEPR theory to SF4, ClF3, and the I3− ion are shown in the figure below.

    Diagram

    When the nonbonding pair of electrons on the sulfur atom in SF4 is placed in an equatorial position, the molecule can be best described as having a see-saw or teeter-totter shape. Repulsion between valence electrons on the chlorine atom in ClF3 can be minimized by placing both pairs of nonbonding electrons in equatorial positions in a trigonal bipyramid. When this is done, we get a geometry that can be described as T-shaped. The Lewis structure of the triiodide (I3−) ion suggests a trigonal bipyramidal distribution of valence electrons on the central atom. When the three pairs of nonbonding electrons on this atom are placed in equatorial positions, we get a linear ion.
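The pattern above, in which lone pairs occupy the equatorial positions of a trigonal bipyramid, can be captured in a small lookup table. This is an illustrative sketch (the table and function names are our own):

```python
# Molecular shapes in the trigonal-bipyramidal family, indexed by
# (bonding pairs, nonbonding pairs), with lone pairs placed equatorially
# as argued above.
TBP_SHAPES = {
    (5, 0): "trigonal bipyramidal",  # PF5
    (4, 1): "see-saw",               # SF4
    (3, 2): "T-shaped",              # ClF3
    (2, 3): "linear",                # I3-
}

def tbp_shape(bonding: int, lone: int) -> str:
    """Molecular shape for five electron domains on the central atom."""
    if bonding + lone != 5:
        raise ValueError("this table covers only five electron domains")
    return TBP_SHAPES[(bonding, lone)]
```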

    Molecular geometries based on an octahedral distribution of valence electrons are easier to predict because the corners of an octahedron are all identical.

 

Intermolecular Interactions in the Gas Phase

 
 

Interactions between two or more molecules are called intermolecular interactions, while the interactions between the atoms within a molecule are called intramolecular interactions.  Intermolecular interactions occur between all types of molecules or ions in all states of matter.  They range from the strong, long-distance electrical attractions and repulsions between ions to the relatively weak dispersion forces which have not yet been completely explained.  The various types of interactions are classified as (in order of decreasing strength of the interactions):

ion – ion
ion – dipole
dipole – dipole
ion – induced dipole
dipole – induced dipole
dispersion forces

Without these interactions, the condensed forms of matter (liquids and solids) would not exist except at extremely low temperatures.  We will explore these various forces and interactions in the gas phase to understand why some materials vaporize at very low temperatures, and others persist as solids or liquids to extremely high temperatures.

Ion – Ion Interactions

The interactions between ions (ion – ion interactions) are the easiest to understand: like charges repel each other and opposite charges attract. These Coulombic forces operate over relatively long distances in the gas phase. The force depends on the product of the charges (Z1, Z2) divided by the square of the distance of separation (d^2):

 F = -Z1Z2/d^2

Two oppositely-charged particles flying about in a vacuum will be attracted toward each other, and the force becomes stronger and stronger as they approach, until eventually they stick together and a considerable amount of energy is required to separate them. They form an ion pair: a new particle which has a positively-charged area and a negatively-charged area. There are fairly strong interactions between these ion pairs and free ions, so that the clusters tend to grow, and they will eventually fall out of the gas phase as a liquid or solid (depending on the temperature).
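The force law quoted above can be sketched numerically in arbitrary units. Note the sign convention: with the leading minus sign, opposite charges give a positive (attractive) value and like charges a negative (repulsive) one. The function name is our own:

```python
def ion_ion_force(z1: float, z2: float, d: float) -> float:
    """F = -Z1*Z2/d**2 in arbitrary units, following the sign convention
    in the text: positive = attraction (opposite charges),
    negative = repulsion (like charges)."""
    return -z1 * z2 / d**2

# A +1/-1 pair attracts; doubling the separation cuts the force fourfold.
near = ion_ion_force(+1, -1, 1.0)   # attraction at unit distance
far = ion_ion_force(+1, -1, 2.0)    # same pair, twice as far apart
```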

Ion – Ion Interactions in the Gas Phase


 

Dipole Moment

Let’s go back to that first ion pair which was formed when the positive ion and the negative ion came together. If the electronegativities of the elements are sufficiently different (as with an alkali metal and a halogen), the charges on the paired ions will not change appreciably: there will be a full negative charge on one ion and a full positive charge on the other. The bond formed by the attraction of these opposite charges is called an ionic bond. If the difference in electronegativity is not so great, however, there will be some degree of sharing of the electrons between the two atoms. The result is the same whether two ions come together or two atoms come together:

Polar Molecule

The combination of atoms or ions is no longer a pair of ions, but rather a polar molecule which has a measurable dipole moment. The dipole moment (D) is defined as if there were a positive (+q) and a negative (-q) charge separated by a distance (r):
                        D = qr
If there is no difference in electronegativity between the atoms (as in a diatomic molecule such as O2 or F2) there is no difference in charge and no dipole moment. The bond is called a covalent bond, the molecule has no dipole moment, and the molecule is said to be non-polar. Bonds between different atoms have different degrees of ionicity depending on the difference in the electronegativities of the atoms. The degree of ionicity may range from zero (for a covalent bond between two atoms with the same electronegativity) to one (for an ionic bond in which one atom has the full charge of an electron and the other atom has the opposite charge). In some cases, two or more partially ionic bonds arranged symmetrically around a central atom may mutually cancel each other’s polarity, resulting in a non-polar molecule. An example of this is seen in the carbon tetrachloride (CCl4) molecule. There is a substantial difference between the electronegativities of carbon (2.55) and chlorine (3.16), but the four chlorine atoms are arranged symmetrically about the carbon atom in a tetrahedral configuration, and the molecule has zero dipole moment. Saturated hydrocarbons (CnH2n+2) are non-polar molecules because of the small difference in the electronegativities of carbon and hydrogen plus the near symmetry about each carbon atom.
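To get a feel for the magnitude of D = qr, the sketch below converts an idealized point-charge pair into debye units (the physical constants are standard CODATA values; the function name is our own):

```python
# Physical constants (CODATA values).
E_CHARGE = 1.602176634e-19   # elementary charge, in coulombs
DEBYE = 3.33564e-30          # one debye, in coulomb-metres

def dipole_moment_debye(q_electrons: float, r_angstrom: float) -> float:
    """D = q*r for point charges +q/-q separated by r, reported in debye."""
    q = q_electrons * E_CHARGE   # charge in C
    r = r_angstrom * 1e-10       # separation in m
    return q * r / DEBYE
```

A full +1/-1 electron-charge pair one angstrom apart gives about 4.80 D; the partial charges in real polar bonds give much smaller moments (HCl is roughly 1.1 D).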

Non-polar Molecule


Polar molecules can interact with ions:

Ion – Dipole Interactions

or with other polar molecules:

Dipole – Dipole Interactions


 

The charges on ions and the charge separation in polar molecules explain the fairly strong interactions between them, with very strong ion – ion interactions, weaker ion – dipole interactions, and considerably weaker dipole – dipole interactions.  Even in a non-polar molecule, however, the valence electrons are moving around and there will occasionally be instances when more are on one side of the molecule than on the other.  This gives rise to fluctuating or instantaneous dipoles:

Fluctuating Dipole in a Non-polar Molecule

These instantaneous dipoles may be induced and stabilized as an ion or a polar molecule approaches the non-polar molecule.

Ion – Induced Dipole Interaction

Dipole – Induced Dipole Interaction


 

Dispersion Forces

Interactions between ions, dipoles, and induced dipoles account for many properties of molecules – deviations from ideal gas behavior in the vapor state, and the condensation of gases to the liquid or solid states.  In general, stronger interactions allow the solid and liquid states to persist to higher temperatures.  However, non-polar molecules show similar behavior, indicating that there are some types of intermolecular interactions that cannot be attributed to simple electrical attractions.  These interactions are generally called dispersion forces.  Electrical forces operate when the molecules are several molecular diameters apart, and become stronger as the molecules or ions approach each other.  Dispersion forces are very weak until the molecules or ions are almost touching each other, as in the liquid state.  These forces appear to increase with the number of “contact points” with other molecules, so that long non-polar molecules such as n-octane (C8H18) may have stronger intermolecular interactions than very polar molecules such as water (H2O), and the boiling point of n-octane is actually higher than that of water.

Dispersion Forces

 

It is possible that these forces arise from the fluctuating dipole of one molecule inducing an opposing dipole in the other molecule, giving an electrical attraction.  It is also possible that these interactions are due to some sharing of electrons between the molecules in “intermolecular orbitals“, similar to the “molecular orbitals” in which electrons from two atoms are shared to form a chemical bond.  These dispersion forces are assumed to exist between all molecules and/or ions when they are sufficiently close to each other.  The stronger farther-reaching electrical forces from ions and dipoles are considered to operate in addition to these forces.

Chemical Bond Types

Overview

Ionic Bonds

An ionic bond is formed by the attraction of oppositely charged atoms or groups of atoms. When an atom (or group of atoms) gains or loses one or more electrons, it forms an ion. Ions have either a net positive or net negative charge. Positively charged ions are attracted to the negatively charged ‘cathode’ in an electric field and are called cations. Anions are negatively charged ions named as a result of their attraction to the positive ‘anode’ in an electric field.

Every ionic chemical bond is made up of at least one cation and one anion.

Ionic bonding is typically described to students as being the outcome of the transfer of electron(s) between two dissimilar atoms. The Lewis structure below illustrates this concept.

ionic NaCl

For binary atomic systems, ionic bonding typically occurs between one metallic atom and one nonmetallic atom. The electronegativity difference between the highly electronegative nonmetal atom and the metal atom indicates the potential for electron transfer.

Sodium chloride (NaCl) is the classic example of ionic bonding. Ionic bonding is not isolated to simple binary systems, however. An ionic bond can occur at the center of a large covalently bonded organic molecule such as an enzyme. In this case, a metal atom, like iron, is both covalently bonded to large carbon groups and ionically bonded to other simpler inorganic compounds (like oxygen). Organic functional groups, like the carboxylic acid group depicted below, contain covalent bonding in the carboxylate portion of the group (HCOO−), which itself serves as the anion to the acidic hydrogen ion (the cation).

HCOOH

Covalent

A covalent chemical bond results from the sharing of electrons between two atoms with similar electronegativities. A single covalent bond represents the sharing of two valence electrons (usually one from each of two different atoms). The Lewis structure below represents the covalent bond between the two hydrogen atoms in an H2 molecule.

H2
h2b
Dot Structure
Line Structure

Multiple covalent bonds are common for certain atoms, depending upon their valence configuration. For example, a double covalent bond, which occurs in ethylene (C2H4), results from the sharing of two sets of valence electrons. Molecular nitrogen (N2) is an example of a triple covalent bond.

Double Covalent Bond

Double Bond

 

Triple Covalent Bond

N2
N2b

The polarity of a covalent bond is determined by the difference in electronegativity between the two participating atoms. Bond polarity describes the distribution of electron density around two bonded atoms. For two bonded atoms with similar electronegativities, the electron density of the bond is equally distributed between the two atoms; this is a nonpolar covalent bond. Otherwise, the electron density of the bond is shifted towards the atom with the larger electronegativity, resulting in a partial negative charge on the more electronegative atom and a partial positive charge on the less electronegative atom; this is a polar covalent bond.

Polar Bond

Coordinate Covalent

A coordinate covalent bond (also called a dative bond) is formed when one atom donates both of the electrons to form a single covalent bond. These electrons originate from the donor atom as an unshared pair.

Coordinate Formula

Both the ammonium ion and hydronium ion contain one coordinate covalent bond each. A lone pair on the oxygen atom in water contributes two electrons to form a coordinate covalent bond with a hydrogen ion to form the hydronium ion. Similarly, a lone pair on nitrogen contributes 2 electrons to form the ammonium ion. All of the bonds in these ions are indistinguishable once formed, however.

Ammonium
Hydronium
Ammonium (NH4+)
Hydronium (H3O+)

Network Covalent

Some elements form very large molecules by forming covalent bonds. When these molecules repeat the same structure over and over in the entire piece of material, the bonding of the substance is called network covalent. Diamond is an example of carbon bonded to itself. Each carbon forms 4 covalent bonds to 4 other carbon atoms forming one large molecule the size of each crystal of diamond.

Diamond
 

Silicates, [SiO2]x also form these network covalent bonds. Silicates are found in sand, quartz, and many minerals.

Quartz

Metallic

The valence electrons of pure metals are not strongly associated with particular atoms. This is a function of their low ionization energy. Electrons in metals are said to be delocalized (not found in one specific region, such as between two particular atoms).

Since they are not confined to a specific area, electrons act like a flowing “sea”, moving about the positively charged cores of the metal atoms.

  • Delocalization can be used to explain conductivity, malleability, and ductility.
  • Because no single atom in a metal sample holds its valence electrons tightly, the electrons are shared with neighboring atoms, and we say that the atoms are metallically bonded.
  • In general, the greater the number of electrons per atom that participate in metallic bonding, the stronger the metallic bond.

Bonds

So far, we’ve studied atoms and compounds and how they react with each other. Now let’s take a look at how atoms and molecules hold together. Bonds hold the atoms and molecules of substances together. There are several different kinds of bonds; the type of bond seen in elements and compounds depends on their chemical properties as well as the attractive forces governing the atoms and molecules. The three types of chemical bonds are ionic bonds, covalent bonds, and polar covalent bonds. Chemists also recognize hydrogen bonds as a fourth form of interaction, though hydrogen bonding is a strong intermolecular attraction rather than a true chemical bond.

In order to understand bonds, you must first be familiar with electron properties, including valence-shell electrons. The valence shell of an atom is its outermost electron shell. Although it was long thought that each electron circled the nucleus of an atom in a separate layer (shell), scientists now describe electrons as occupying regions of probability around the nucleus rather than fixed orbits; however, the valence shell is still used to describe electron availability.

One can determine how many valence electrons an atom has by looking at its position in the periodic table. First, locate a periodic table and look at the Roman numerals above each column: you should see IA above hydrogen, IIA above beryllium, IIIA above boron, and so on, up to VIIA above fluorine. Also note that the transition metals are all in the B groups; their Roman numerals are followed by the letter B instead of the letter A. For now, we are going to ignore the columns with a B and focus on the columns with an A (the main-group elements, generally speaking). Once you have located the group-A elements, we are going to count across, giving each column a number, like this:

The first A column is I (1); then, counting across, 2-8 (skipping the B groups, which consist of the transition metals). In the periodic table the 8th column is often labeled 0; when counting electrons, however, we count it as 8. Now we can determine how many valence electrons each element has in its outermost shell. The elements in the IA column have 1 valence electron, the elements in the IIA column have 2 valence electrons, and so on. By the time we get to the noble gases (the column labeled 0), we are up to 8 valence electrons. These gases are stable on their own and have no tendency to gain further electrons, because the electrons they already have satisfy the octet rule.
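The counting rule above can be written as a tiny lookup table. This is a minimal sketch covering only a few main-group elements; the table and function names are our own:

```python
# Valence-electron count for a few main-group (A-column) elements,
# following the group-number counting rule described above.
GROUP_A = {"Na": 1, "Mg": 2, "B": 3, "C": 4, "N": 5, "O": 6, "F": 7, "Ne": 8}

def valence_electrons(symbol: str) -> int:
    """Group IA -> 1 valence electron, IIA -> 2, ..., noble gases -> 8."""
    return GROUP_A[symbol]
```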

The Octet and Duet Rules

When it comes to bonding, everything is based on how many electrons an element has or shares with its bonding partner or partners. The octet rule is followed by most elements; it says that to be stable, an atom needs eight electrons in its outermost shell. Elements that do not follow the octet rule include H, He, Li, Be (sometimes) and B. The smallest of these instead follow the duet rule, which says that the atom needs only two valence electrons to be stable; lithium, for example, gives up its single valence electron, while hydrogen gains one. When bonding, stability is always considered and preferred; atoms bond in order to become more stable than they already are.

Not all atoms bond the same way, so we need to learn the different types of bonds that atoms can form. There are three (sometimes four) recognized chemical bonds; they are ionic, covalent, polar covalent, and (sometimes) hydrogen bonds.

Ionic Bonds

Ionic bonds form when two atoms have a large difference in electronegativity. (Electronegativity is the quantitative representation of an atom’s ability to attract electrons to itself.) Although there is no exact cutoff value, a difference of about 1.7 or more is generally accepted as qualifying a bond as ionic. Ionic bonds typically occur between metals and nonmetals; chloride is a common anion. Compounds displaying ionic bonds form ionic crystals in which positive and negative ions sit near each other, but there is not always a direct 1:1 pairing between particular positive and negative ions. Ionic bonds can typically be broken by dissolution in water, which hydrates and separates the ions.

Covalent Bonds

Covalent bonds form when two atoms have a very small (nearly insignificant) difference in electronegativity; the difference between two atoms in a covalent bond is less than 1.7. Covalent bonds usually form between similar atoms, most often nonmetal to nonmetal. Covalent bonding signals a genuine sharing of electrons: because the atoms share the bonding electrons roughly equally, neither atom takes on a significant charge and the molecule stays balanced. Covalent bonds are usually strong because of this direct sharing.

Polar Covalent Bonds

Polar covalent bonds fall between ionic and covalent bonds. They result when two elements with a moderate difference in electronegativity bond, a difference large enough to matter but not surpassing 1.7. Although polar covalent bonds are classified as covalent, they have significant ionic character. They also give rise to dipole-dipole interactions, because one atom becomes slightly negative and the other slightly positive. This partial charge is not large enough to classify either atom as an ion; they are simply considered slightly positive or slightly negative. Polar covalent bonds usually indicate polar molecules, which readily interact with other polar molecules but are unlikely to bond with non-polar molecules.
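The three electronegativity-based categories can be summarized as a small classifier. The 1.7 ionic cutoff is the one quoted in the text; the 0.5 boundary between covalent and polar covalent is an assumption, chosen so that the N-O difference of 0.5 worked through later in this unit comes out covalent:

```python
def classify_bond(en_a: float, en_b: float) -> str:
    """Classify a bond from the electronegativity difference of its atoms.

    1.7 ionic cutoff per the text; the 0.5 covalent/polar-covalent
    boundary is an assumed convention, not stated in the text."""
    diff = abs(en_a - en_b)
    if diff >= 1.7:
        return "ionic"
    if diff <= 0.5:
        return "covalent"
    return "polar covalent"
```

For example, classify_bond(3.5, 3.0) for O and N returns "covalent", while Na (0.9) against Cl (3.0) comes out "ionic".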

Hydrogen Bonds

Hydrogen bonds form only when hydrogen is bonded to oxygen (O), nitrogen (N), or fluorine (F). Strictly speaking, the hydrogen bond is not the covalent bond within the molecule but the attraction between a hydrogen atom in one molecule and an O, N, or F atom in a neighboring molecule. Hydrogen bonds are very specific and give certain molecules special properties. The electronegative partner (oxygen, for example) carries lone pairs of electrons, non-bonding electrons that sit in twos (pairs) on the atom, making the molecule polar. Water, for example, exhibits hydrogen bonding and polarity as a result. This is shown in the diagram below.

Because of this polarity, the slightly negative oxygen end of the molecule repels other negative atoms and attracts positive ones, such as hydrogen. The slightly positive hydrogen end repels other positive atoms (like other hydrogen atoms) and attracts negative ones (such as oxygen atoms). This system of attraction helps water molecules stick together, which is what makes the boiling point of water high: it takes extra energy to break the hydrogen bonds between water molecules.

In addition to the four types of chemical bonds, bonds fall into three categories by multiplicity: single, double, and triple. Single bonds involve one pair of shared electrons between two atoms, double bonds involve two pairs, and triple bonds involve three pairs. These bonds take on different characters because of the differing numbers of electrons each atom needs and is able to share.

Now, let’s look at determining what types of bonds we see in different compounds. We’ve already looked at the bonds in H2O: the O—H bonds within a water molecule are polar covalent, and hydrogen bonding occurs between the molecules. Now let’s look at a few other types of bonds as examples.

Compound: HNO3 (also known as nitric acid)

There are two different determinations we can make as to what these bonds look like; first we can decide whether the bonds are covalent, polar covalent, ionic, or hydrogen. Then, we can determine if the bonds are single, double, or triple.

In order to decide whether the bonds are covalent, polar covalent, ionic, or hydrogen, we need to look at the types of elements present and their electronegativity values. We look at the elements and see hydrogen, nitrogen, and oxygen, and no metals; this rules out ionic bonding in the compound. Then we look up the electronegativity values for nitrogen and oxygen. Oftentimes this information can be found on a periodic table, in a book index, or in an educational online resource. The electronegativity value for oxygen is 3.5 and the value for nitrogen is 3.0. The way to determine the bond type is to take the difference between the two numbers: 3.5 – 3.0 = 0.5, so the bond between nitrogen and oxygen is covalent. For the bond between oxygen and hydrogen, hydrogen’s electronegativity is 2.1, giving a difference of 3.5 – 2.1 = 1.4, so the O—H bond is polar covalent, just as it is in water; the hydrogen bonding discussed earlier is the attraction between molecules.

Now, we need to count the electrons and draw the diagram for HNO3. For more help counting electrons, please see the page on Electron Configuration. For more help drawing the Lewis structures, please see the page on Lewis Structures. This process combines both of these in order to determine the structure and shape of a molecule of the compound.

First, we determine that N follows the octet rule, so it needs eight surrounding electrons. This is important to keep in mind as we move forward. Next we count up how many valence electrons the compound has as a whole. H gives us 1, N gives us 5, and each O gives us 6. We can discern this from looking at the tops of the columns in the periodic table (see above). We then add these numbers together (3 x 6 = 18, + 1 = 19, + 5 = 24), and we get 24 electrons that we need to distribute throughout the molecule. First, we need to draw the molecule to see how many initial bonds we’ll be putting in. Our preliminary structure looks like this:

Now, we can count how many electrons we have used by counting 2 electrons for each bond placed. We see that we have placed 4 bonds, so we have used 8 electrons. 24 – 8 = 16 electrons that we need to distribute. In order to correctly place the rest of the electrons, we need to determine how many electrons each atom needs to be stable.
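A minimal sketch of this bookkeeping, assuming the valence counts read off the A-columns of the periodic table:

```python
# Valence-electron counts for the elements in HNO3, per the A-columns.
VALENCE = {"H": 1, "N": 5, "O": 6}

def electrons_to_distribute(formula: dict, skeleton_bonds: int) -> int:
    """Total valence electrons in the formula, minus 2 for every bond
    already drawn in the preliminary (skeleton) structure."""
    total = sum(VALENCE[el] * n for el, n in formula.items())
    return total - 2 * skeleton_bonds

hno3 = {"H": 1, "N": 1, "O": 3}
# 24 total valence electrons, 4 skeleton bonds -> 16 left to place.
```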

The central atom, N, has three bonds attached (the equivalent of 6 electrons), so it needs 2 more electrons to be stable. The O to the right has one bond (two electrons), so it needs 6 more to be stable. The O above the N also has one bond (two electrons), so it too needs 6 more electrons. The O to the left of the N is bonded both to N and to H, so it has two bonds (4 electrons) and needs 4 more electrons to be stable. Adding up the totals, 2 + 6 + 6 + 4 = 18, we see that we need 18 electrons to stabilize the compound. This is not possible, since we only have 16 available electrons. When this happens, we insert a double bond to resolve the shortage. This works because both atoms in a bond count the shared pair toward their own stability: each additional bond satisfies two electrons’ worth of need on each atom while consuming only one pair, so every extra bond reduces the shortfall by two electrons. Now we need to decide where to put the double bond in this compound. It cannot go between O and H, because H has no room to accept more electrons, so it must go between N and O. How do we decide which N—O bond to double? In this particular example we can place the bond either between the top O and N or between the right O and N, because HNO3 displays resonance.
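The shortfall argument can be captured in one helper. This is a sketch of the reasoning for simple cases like this one, not a general Lewis-structure solver:

```python
def extra_bonds_needed(total_valence: int, skeleton_bonds: int,
                       still_needed: int) -> int:
    """How many bonds must be upgraded (single -> double, etc.) to cover
    the electron shortfall.  still_needed is the electron count required
    to stabilize every atom after the single-bond skeleton is drawn;
    each extra shared pair shrinks the shortfall by two electrons."""
    available = total_valence - 2 * skeleton_bonds
    shortfall = still_needed - available
    return max(0, shortfall // 2)

# HNO3: 24 valence electrons, 4 skeleton bonds, atoms still needing
# 2 + 6 + 6 + 4 = 18 electrons -> exactly one double bond.
```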

Here are the ways you can place the double bond:

or

We are going to keep the bond between N and the right O in our example. After we add in the bond, we subtract two more electrons from our available electrons (16) and are left with 14 electrons to distribute. Now we need to make sure we have the correct number of electrons. After placing in the double bond, N is now stable because it has 4 bonds (8 electrons) surrounding it. It does not need any additional electrons. The top O (above N) needs 6 electrons, the right O now only needs 4 electrons (because it has a double bond now, which is 4 electrons), and the left O still needs 4 electrons to become stable. We add these numbers together, 6 + 4 + 4 = 14, and we see that 14 is the number of electrons we have, so we can go ahead and distribute them, like this:

Now, our compound is stable with appropriately distributed valence electrons. We can see that there are three single bonds (H—O, N—O, and N—O) and one double bond (N==O).

Electron Configuration

Electrons play a crucial role in chemical reactions and how compounds interact with each other. Remember, electrons are the negative particles in an atom that “orbit” the nucleus. Although we say they orbit the nucleus, we now know that they are actually in a random state of motion surrounding the nucleus rather than making circles around it, which is what an orbit implies. The best analogy to describe electron motion within an atom is how bees buzz around a beehive. They don’t fly in complete circles around it, but they do hover and move around it in a seemingly random motion.

Electrons increase in elements as protons do, which is from left to right and from top to bottom on the periodic table. Therefore, the element with the fewest electrons would be in the top left-hand corner of the table and the element with the most electrons would be in the bottom right hand corner. The elements are arranged so that the increase from element to element is one electron. Therefore, in the first row, we see hydrogen and helium. This is because hydrogen has one electron and helium has two electrons, so we place them in ascending order.

Electron Orbitals

We categorize electrons according to the orbital in which they reside. The four orbital types are s, p, d, and f. They correspond to divisions of the periodic table, as follows:

The first orbital is the s orbital. It has room to hold two electrons. The electrons have opposite spins, so it makes sense that they are paired together. The s orbital is a sphere, with the x, y, and z axes passing through it, like this:

This means that the two electrons can occupy any of the space seen in this sphere, and they sort of “hover” around in the given space.

The next orbital is the p orbital. It can hold up to six electrons, so it has three sub-orbitals (each holding two electrons). The spins on the electrons are still paired opposite: since this orbital can hold six electrons, three spin one way and three spin the opposite way. The p orbital is not sphere-shaped; instead it has six lobes shaped like balloons. Two lobes lie on the x axis, two on the y axis, and two on the z axis. These three separations are the sub-orbitals, and together they make up the entire p orbital. The nucleus of the atom is located where the three axes meet. The p orbital looks like this:

The next orbital is the d orbital. It can hold up to 10 electrons, therefore it has five sub-orbitals (each can hold two electrons). The spins of the electrons are opposite, so five are spinning one way and the other five are spinning the opposite way. The d orbital is not sphere shaped; it looks more like the p orbital, except there are more lobes that cannot be shown all at once. We showed the entire p orbital (all three of the sub-orbitals) in one diagram, because there were two lobes on each axis. However, we need to show the five different sub-orbitals of the d orbital in order to fully explain where the lobes are located, and how they are shaped. We will show you four views, with labels on all of the axes.

The first view is of the lobes that lie on the XY plane, shown in aqua here. The second view is a three dimensional view of lobes on the Z axis that rotate 360 degrees around the axis. There are two lobes, one in the top hemisphere and one in the bottom, and a tube-shaped area that circles the Z axis and intersects the X and Y axes. It’s shown here in orange. The third view is of the lobes on the ZY plane, with the X axis running perpendicular to it. It’s shown here in green. The last view is of the lobes lying on the ZX plane, and is shown here in pink. If all of these layers were put together, we would see a sort of star-burst image, with a tube encircling the middle.

The final orbital is the f orbital, and scientists are not completely sure of the shape of its orbital. However, they do have seemingly accurate predictions of where electrons will fall. We will show you the following probabilities of where electrons lie:

We showed you two probabilities of where the f orbitals lie; however, the first image (in blue) is shown on the Z axis. It is actually repeated on the X axis and again on the Y axis. The second image (in orange) is shown in the XYZ dimensions; however, it is repeated three more times for a total of four positions using this shape and lobe configuration. We say that these are probable locations because scientists cannot actually track and determine the exact location of electrons. However, through research and abilities to track electrons in other orbitals, scientists can say that the likely location of f-level electrons is in one of these locations.

Diagonal Rule, or Madelung’s Rule

In chemistry, the Diagonal Rule (also known as Madelung’s Rule) is a guideline giving the order in which electrons fill the orbital levels. The 1s orbital fills first and can contain 2 electrons (written 1s2). The 2s level fills next, also holding 2 electrons (2s2). After that, electrons begin to fill the 2p orbital (up to 6 electrons, 2p6), and so on. The diagonal rule states the exact order in which these orbitals fill, and looks like this:

As you can see, the red arrows indicate the filling of orbital levels. Starting at the top, the first red arrow crosses the 1s2 orbital. If you follow these arrows down the list, you can easily determine the order that electrons fill the orbital levels.

There is an exception to this rule when filling the orbitals of heavier elements. For example, the rule says that the 5s orbital fills and then the 4d orbital fills. However, for certain metals only one electron fills the 5s orbital, and the next electron jumps into the 4d orbital. This can be anticipated, but cannot be known exactly until it is observed. The same is true of the 6s orbital: for certain heavy metals the 6s orbital contains only one electron, and the other electrons jump to the 5d orbital.
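The diagonal rule is equivalent to sorting subshells by n + l, breaking ties with the smaller n (Madelung's n + l rule). A sketch, ignoring the heavy-metal exceptions just noted:

```python
def madelung_order(max_n: int = 7):
    """Subshells in diagonal-rule order: ascending n + l, ties broken by
    smaller n.  The exceptions for certain heavy metals are not modeled."""
    letters = "spdf"
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    shells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{letters[l]}" for n, l in shells]
```

Calling madelung_order() yields 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, ..., matching the red arrows in the diagram.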

Electron Notation

Following the diagonal rule, there is an easy way to write electron configuration. We are simply going to use the orbital names we learned from the diagonal rule (1s2, 2s2 and so on). However, we are only going to write the number of electrons that the atom actually contains. For example, hydrogen has one electron, which would fall in the 1s orbital. Thus, the electron configuration for hydrogen is 1s1. We write the superscript as 1 because there is one electron. Helium, the next element, contains two electrons. They both fill the 1s orbital, so the electron configuration for helium is 1s2. Here again, we write the superscript as 2 because there are two electrons.

Electron configuration moves across and down the periodic table. You might have noticed that we first put one electron in the 1s orbital (with hydrogen), and then we put two electrons in the 1s orbital (with helium). Continuing this trend, we would next have 3 electrons with lithium. We would place two of them in the 1s orbital, and one of them in the 2s orbital, so the electron configuration would be 1s2 2s1. However, we can also write this using the configuration of helium, because it is a noble gas. Noble gases are stable elements, so we can use their configurations in determining other configurations. So, instead of writing 1s2 2s1, we would write [He] 2s1. This means that lithium contains the same configuration as helium, and then has one more electron in the 2s orbital. Notice that we use brackets to encase the previous noble gas, and then we continue writing the configuration as we normally would. This might not seem like a big deal, or a shortcut right now, but once you get pretty far down the periodic table, this will save you a lot of time and energy.

We’ll show you one short example of this. Let’s say we need to determine the electron configuration for Ba, Barium. Counting across the table, we would come up with the following configuration:

1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2

Instead of writing all of this out, we could simply find the previous noble gas, which is Xe. Therefore, we can write [Xe] and then figure out the rest of the configuration. We look and see that Ba is in the 6th row, so we know that we’re going to start with 6s2. We can look and see that Ba is the second element in that row, so it has two electrons to go in the 6s orbital.

Thus, we can conclude that the final electron configuration for Ba is [Xe] 6s2.
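The whole procedure, including the noble-gas shorthand, can be sketched as a filling loop over the diagonal-rule order. The heavy-metal exceptions discussed earlier are not modeled:

```python
# Subshells in diagonal-rule order, with per-type capacities and the
# electron counts of the noble gases used for shorthand notation.
ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d",
         "5p", "6s", "4f", "5d", "6p", "7s"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}
NOBLE = {2: "He", 10: "Ne", 18: "Ar", 36: "Kr", 54: "Xe", 86: "Rn"}

def configuration(electrons: int) -> str:
    """Ground-state configuration, abbreviated with the previous noble gas."""
    parts, core, filled = [], "", 0
    for sub in ORDER:
        if filled == electrons:
            break
        take = min(CAPACITY[sub[-1]], electrons - filled)
        filled += take
        parts.append(f"{sub}{take}")
        if filled in NOBLE and filled < electrons:
            # A noble-gas core is complete: abbreviate everything so far.
            core, parts = f"[{NOBLE[filled]}] ", []
    return core + " ".join(parts)
```

For Ba (56 electrons) this reproduces [Xe] 6s2, and for Li (3 electrons) it gives [He] 2s1, as worked out above.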

Electron Spin

Every electron placed in an orbital has a feature we refer to as “spin.” We’ve already talked about electrons having been thought to have a specific “orbit,” and then later discovered to hover in the places listed above. Well, electrons don’t literally spin, but their movement sort of looks like someone somersaulting, in a very fast, random state. This is called spin. Each electron can either have a +1/2 spin or a -1/2 spin, which indicates its direction of motion. There is never a 0 spin. When filling orbitals, electrons spin in pairs, one + and one -. The electrons with an up (+) spin fill first, and the electrons with a down (-) spin fill second. It would look like this:

and so on. Because the s orbital can hold two electrons, we draw one box in which two electrons (represented here with arrows) can fit. Since the p orbital can hold 6 electrons, we draw 3 boxes that will each hold 2 electrons. This will continue with the d orbital (10 electrons fit in 5 boxes) and the f orbital (14 electrons fit in 7 boxes).

These boxes fill in a certain order: all of the boxes in one column will fill with up spin electrons first, and then down spin. So, for example, if the element has a configuration of 1s2 2s2 2p3, it would look like this:

As you can see, we filled the boxes with up arrows (electrons) first. If we had more electrons, we would go back and add them in to the second column, as down spin electrons.
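The up-first, then-down filling can be sketched as below, writing the spins as +1/2 and -1/2 as in the text:

```python
def fill_boxes(n_electrons: int, n_boxes: int):
    """Fill a subshell's boxes: one up-spin (+1/2) electron per box first,
    then pair with down-spin (-1/2) electrons in the same order."""
    boxes = [[] for _ in range(n_boxes)]
    for i in range(n_electrons):
        boxes[i % n_boxes].append("+1/2" if i < n_boxes else "-1/2")
    return boxes

# 2p3: three boxes, each holding a single up-spin electron.
```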

EE-Unit-V Bioreactor

Bioreactor – A bioreactor can be described as a vessel that provides for cell cultivation under sterile conditions and control of environmental conditions, e.g., pH, temperature, and dissolved oxygen. It can be used for the cultivation of microbial, plant, or animal cells. A typical bioreactor consists of the following parts.

Agitator – This facilitates mixing of the reactor contents, keeping the cells in a homogeneous suspension for better transport of nutrients and oxygen and adequate cell metabolism toward the desired product(s). The agitator can be top driven or bottom driven, and in either case magnetically or mechanically driven. Bottom-driven magnetic or mechanical agitators are preferred over top-driven agitators because they free up space on the top of the vessel for insertion of essential probes (temperature, pH, dissolved oxygen, foam, CO2, etc.) and for ports handling acid, alkali, antifoam, and fresh media inlets and exit gases. However, mechanically driven bottom impellers need high-quality mechanical seals to prevent leakage of the broth.

Baffle – The purpose of the baffle in the reactor is to break the vortex formation in the vessel, which is usually highly undesirable as it changes the centre of gravity of the system and consumes additional power.

Sparger – In an aerobic cultivation process, the purpose of the sparger is to supply oxygen to the growing cells. Bubbling air through the sparger not only provides adequate oxygen to the growing cells but also helps mix the reactor contents, thereby reducing the power needed to achieve a given level of mixing homogeneity in the culture.

Jacket – The jacket provides an annular area for circulation of constant-temperature water, which holds the bioreactor at a constant temperature. The circulating water is kept at the desired temperature by a separate chilled-water circulator, which has provision for maintaining a low or high temperature in a reservoir. The contact area of the jacket provides adequate heat-transfer surface for maintaining the target temperature in the bioreactor.

 

Temperature measurement and control – The temperature of the bioreactor is measured by a thermocouple or Pt-100 sensor, which sends its signal to the temperature controller. A set point is entered in the controller, which compares it with the measured value; depending on the error, either the heating or the cooling finger of the bioreactor is activated to reduce the error and bring the measured temperature close to the set point.
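The error-driven heat/cool decision amounts to simple on/off control. A sketch with an illustrative deadband (the 0.2 degC value is an assumption, not from the text):

```python
def temperature_action(measured: float, setpoint: float,
                       deadband: float = 0.2) -> str:
    """On/off decision for the heating or cooling finger.  The controller
    acts only once the error leaves a small deadband, so the two outputs
    do not chatter; the 0.2 degC deadband is an illustrative value."""
    error = setpoint - measured
    if error > deadband:
        return "heat"
    if error < -deadband:
        return "cool"
    return "idle"
```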

pH measurement and control – The pH in the bioreactor is measured by an autoclavable pH probe. The measured signal is compared with the set point in the controller unit, which then activates acid or alkali addition to bring the measured value close to the set point. Before the pH probe is used, however, it must be calibrated with two buffers, usually spanning the pH range to be used in the cultivation experiment. The probe is first placed in (say) pH 4 buffer and the measured value corrected with the zero knob of the controller. The probe is then placed in pH 7 buffer and, if needed, the measured value corrected with the asymmetry knob of the controller. The pH probe is now ready for use in the 0-7 pH range.

Identification of pH controller settings for a Bio-Engineering AG (Switzerland) bioreactor – For this specific pH controller, one has to identify the right control-action settings for the addition of a given concentration of acid or alkali to the fermentation broth, settings that give quick control action without oscillation or offset of the measured value around the set point. The controller panel and its knobs are described in the following figure –

Before autoclaving the broth for any cultivation experiment, it is essential to calibrate the pH probe (as described above). Thereafter the set point (say 5.0) and the p-band (say 1.0) are entered on the controller. This means the controller will control the pH over the range 4.0 to 6.0: if the measured pH is 4.5, the controller triggers alkali addition to bring the value up to 5.0, and if the pH is 5.5, it activates the acid pump to bring the value down to the set point.

The p-band value should not be kept too small, or the measured value may oscillate around the set point; if the p-band is too large, there may be an offset between the measured value and the set point.

Note also that the acid/alkali pumps are activated in a phased manner. If the controller is adding alkali, for example, the alkali is not added in one shot; the pump is kept on for some time and then switched off for some time. This ensures that each installment of acid or alkali mixes adequately into the broth before the next is added, and it avoids over-addition of acid or alkali. The on/off times have to be adjusted by separate experiments and depend on the buffering capacity of the broth, the concentration of the acid/alkali, and so on. It is essential to identify and maintain these settings before the start of the experiment in order to obtain stable, quick control action without oscillation or offset around the set point.

There is another knob, td, on the control panel, which sets a safety time; if control action is not achieved within this time, an alarm is raised for the operator.
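One way to read the p-band scheme is as proportional control realized through pulsed dosing: inside the band, the fraction of each dosing cycle a pump stays on grows with the error. The duty-cycle formulation below is an assumed implementation for illustration, not the manufacturer's documented scheme:

```python
def ph_pump_duty(measured: float, setpoint: float = 5.0,
                 p_band: float = 1.0):
    """Return (pump, duty): which dosing pump to run and the fraction of
    each on/off cycle it stays on.  Inside the proportional band the duty
    scales with the error; at or beyond the band edge the pump runs
    full-on.  Assumed duty-cycle realization of the phased addition."""
    error = setpoint - measured
    duty = min(1.0, abs(error) / p_band)
    if error > 0:
        return ("alkali", duty)   # pH below set point -> raise it
    if error < 0:
        return ("acid", duty)     # pH above set point -> lower it
    return (None, 0.0)
```

With the set point 5.0 and p-band 1.0 from the example, a measured pH of 4.5 runs the alkali pump at half duty, and 5.5 runs the acid pump at half duty.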

Dissolved oxygen controller – The dissolved oxygen in the bioreactor broth is measured by a dissolved-oxygen probe, which generates a potential corresponding to the oxygen diffusing into the probe. Before measurements can be made, the probe must be calibrated at its zero and hundred-percent values. The zero is set (with the zero knob) from the measured value when the broth is saturated by nitrogen purging; the hundred percent is calibrated from the measured value when the broth is saturated by purging air through it. After calibration, the instrument is ready to measure dissolved oxygen in the broth. If oxygen runs low during fermentation, more oxygen can be purged into the bioreactor and/or the stirrer speed can be increased to enhance the breakup of bubbles, which increases the oxygen transfer area and the net availability of oxygen in the fermentation broth.
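The zero/hundred-percent procedure is a two-point linear calibration; a sketch:

```python
def calibrate_do(v_zero: float, v_hundred: float):
    """Two-point dissolved-oxygen calibration: v_zero is the raw probe
    signal with the broth nitrogen-saturated (0 %), v_hundred with it
    air-saturated (100 %).  Returns a function mapping a raw signal to
    percent saturation by linear interpolation."""
    span = v_hundred - v_zero
    return lambda v: 100.0 * (v - v_zero) / span
```

For instance, with raw signals of 0.1 at nitrogen saturation and 2.1 at air saturation, a reading of 1.1 maps to 50 % saturation.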

Foam control – The fermentation broth contains a number of organic compounds and is vigorously agitated to keep the cells in suspension and ensure efficient transfer of dissolved nutrients and oxygen. This invariably gives rise to a lot of foam, so it is essential that the foam be controlled as soon as possible.

EE-Unit-V Biochips

A biochip is a collection of miniaturized test sites (microarrays) arranged on a solid substrate that permits many tests to be performed at the same time in order to achieve higher throughput and speed. Typically, a biochip’s surface area is no larger than a fingernail. Like a computer chip that can perform millions of mathematical operations in one second, a biochip can perform thousands of biological reactions, such as decoding genes, in a few seconds.

A genetic biochip is designed to “freeze” into place the structures of many short strands of DNA (deoxyribonucleic acid), the basic chemical instruction that determines the characteristics of an organism. Effectively, it is used as a kind of “test tube” for real chemical samples. A specially designed microscope can determine where the sample hybridized with DNA strands in the biochip. Biochips helped to dramatically accelerate the identification of the estimated 80,000 genes in human DNA, an ongoing world-wide research collaboration known as the Human Genome Project. The microchip is described as a sort of “word search” function that can quickly sequence DNA.

In addition to genetic applications, the biochip is being used in toxicological, protein, and biochemical research. Biochips can also be used to rapidly detect chemical agents used in biological warfare so that defensive measures can be taken.

The notion of a cheap and reliable computer chip look-alike that performs thousands of biological reactions is very attractive to drug developers. Because these chips automate highly repetitive laboratory tasks by replacing cumbersome equipment with miniaturized, microfluidic assay chemistries, they are able to provide ultra-sensitive detection methodologies at significantly lower costs per assay than traditional methods—and in a significantly smaller amount of space.

At present, applications are primarily focused on the analysis of genetic material for defects or sequence variations. Corporate interest centers around the potential of biochips to be used either as point-of-care diagnostics or as high-throughput screening platforms for drug lead identification. The key challenge to making this industry as universally applicable as processor chips in the computer industry is the development of a standardized chip platform that can be used with a variety of “motherboard” systems to stimulate widespread application.

Historical perspective

It is important to realize that a biochip is not a single product, but rather a family of products that form a technology platform. Many developments over the past two decades have contributed to its evolution.

In a sense, the very concept of a biochip was made possible by the work of Fred Sanger and Walter Gilbert, who were awarded a Nobel Prize in 1980 for their pioneering DNA sequencing approach that is widely used today. DNA sequencing chemistry in combination with electric current, as well as micropore agarose gels, laid the foundation for considering miniaturizing molecular assays. Another Nobel-prize winning discovery, Kary Mullis’s polymerase chain reaction (PCR), first described in 1983, continued down this road by allowing researchers to amplify minute amounts of DNA to quantities where it could be detected by standard laboratory methods. A further refinement was provided by Leroy Hood’s 1986 method for fluorescence-based DNA sequencing, which facilitated the automation of reading DNA sequence.

Further developments, such as sequencing by hybridization, gene marker identification, and expressed sequence tags, provided the critical technological mass to prompt corporate efforts to develop miniaturized and automated versions of DNA sequencing and analysis to increase throughput and decrease costs. In the early and mid-1990s, companies such as Hyseq and Affymetrix were formed to develop DNA array technologies.

Current state

The availability of genetic sequence information in both public and corporate databases has gradually shifted genome-based R&D away from pure sequencing for sequencing’s sake and toward gene function–oriented studies. It soon became apparent to everyone involved in genomics that gene sequence data alone was of relatively little clinical use unless it was directly linked to disease relevance. This, in turn, has driven the development of the field of pharmacogenomics—an approach that seeks to develop drugs tailored to individual genetic variation (see Pharmacogenomics, pp. 40–42).

In this regard, DNA-based biochips are at present used primarily for two types of analysis. First, they have been used successfully for the detection of mutations in specific genes as diagnostic “markers” of the onset of a particular disease. The patient donates test tissue that is processed on the array to detect disease-related mutations. The primary example of this approach is the Affymetrix GeneChip. The p53 GeneChip is designed to detect single nucleotide polymorphisms of the p53 tumor-suppressor gene; the HIV GeneChip is designed to detect mutations in the HIV-1 protease and also the virus’s reverse transcriptase genes; and the P450 GeneChip focuses on mutations of key liver enzymes that metabolize drugs. Affymetrix has additional GeneChips in development, including biochips for detecting the breast cancer gene, BRCA1, as well as identifying bacterial pathogens. Other examples of biochips used to detect gene mutations include the HyGnostics modules made by Hyseq.

A second application for DNA-based biochips is to detect the differences in gene expression levels in cells that are diseased versus those that are healthy. Understanding these differences in gene expression not only serves as a diagnostic tool, but also provides drug makers with unique targets that are present only in diseased cells. For example, during the process of cancer transformation oncogenes and proto-oncogenes are activated, which never occurs in healthy cells. Targeting these genes may lead to new therapeutic approaches. Examples of biochips designed for gene expression profile analysis include Affymetrix’s standardized GeneChips for a variety of human, murine, and yeast genes, as well as several custom designs for particular strategic collaborators; and Hyseq’s HyX Gene Discovery Modules for genes from tissues of the cardiovascular and central nervous systems, or from tissues exposed to infectious diseases.

Besides these two immediate array-based applications for this technology, a number of companies are focusing on creating the equivalent of a wet laboratory on a chip. One example is Caliper’s LabChip, which uses microfluidics technology to manipulate minute volumes of liquids on chips. Applications include chip-based PCR as well as high-throughput screening assays based on the binding of drug leads with known drug targets.

Finally, in addition to DNA and RNA-based chips, protein chips are being developed with increasing frequency. For example, a recent report describes the development of a quantitative immunoassay for prostate-specific membrane antigen (PSMA) based on a protein chip and surface-enhanced laser desorption/ionization mass spectrometry technology1.

Industry challenges

A key challenge to the biochip industry is standardization. Both the assays and the ancillary instrumentation need to be interfaced so that the data can be easily integrated into existing equipment. This is particularly important when genetic diagnostic applications are at stake, because important clinical decisions are to be based on the interpretation of gene chip readouts, and these results need to be independent of the manufacturer of the biochip.

An example of an effort to address this issue is the formation of the Genetic Analysis Technology Consortium (GATC) by Affymetrix and Molecular Dynamics2. The aim of this group is to establish an industry standard for the reading and analysis of many types of chips. In debating whether or not to join this consortium, companies are forced to decide whether their market niche will be broad use across the industry or highly customized applications in niche areas. When the decision is for the latter, it is unlikely that they will spend the time or money to standardize their product.

There are also important technical challenges for this industry that are fueling a highly competitive R&D race to establish market dominance. This is especially true of the “reader” technology used to detect and decipher biochip readouts. Despite efforts to standardize this technology, novel platforms are being developed that promise higher throughput. One technology that appears to have particular promise is the “optical mapping” of DNA. This method involves elongating and fixing DNA molecules onto derivatized glass slides in order to preserve their biochemical accessibility. It has the added feature of being able to maintain sequence order after enzymatic digestion. This system has shown promise for high-throughput and accurate sequence analysis when integrated with appropriate detection and interpretation software3. Whether it will emerge as the system of choice, however, remains to be determined.

Finally, it is sometimes asked whether mass spectrometry can be part of next-wave biochip technology. As currently conceived, biochips are essentially immobilized arrays of biomolecules, whereas mass spectrometry determines molecular structure from ionized samples of material. It is therefore difficult to envisage a direct connection between the two, but perhaps in the future certain aspects of biochip analysis might be performed by mass spectrometry approaches.

Future directions

Biochip development will benefit increasingly from applications developed for other industries. For example, flame hydrolysis deposition (FHD) of glasses has many applications in the telecommunications industry, and is now also being applied toward the development of new biochips. A recent report describes how FHD was used to deposit silica with different refractive indices, resulting in microstructures that can be readily incorporated onto a chip and that integrate both optical and fluidic circuitry on the same device4.

Biochips are also continuing to evolve as a collection of assays that provide a technology platform. One interesting development in this regard is the recent effort to couple so-called representational difference analysis (RDA) with high-throughput DNA array analysis. The RDA technology allows the comparison of cDNA from two separate tissue samples simultaneously. One application is to compare tissue samples obtained from a metastatic form of cancer versus a non-metastatic one in successive rounds. A “subtracted cDNA library” is produced from this comparison which consists of the cDNA from one tissue minus that from the other. If, for example, one wants to see which genes are unique to the metastatic cancer cells, a high density DNA array can be built from this subtractive library to which fluorescently labeled probes are added to automate the detection process of the differentially expressed genes. One study using this method compared a localized versus a metastatic form of Ewing’s sarcoma and demonstrated that 90% of the genes examined had expression levels that differed between the two cancers by more than twofold5.
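The twofold expression-difference criterion mentioned above can be sketched in a few lines of Python (illustrative only: the gene names and intensity values below are invented, not taken from the Ewing’s sarcoma study):

```python
def differentially_expressed(expr_a, expr_b, fold=2.0):
    """Return genes whose expression differs by more than `fold`
    between the two samples, in either direction."""
    hits = {}
    for gene in expr_a.keys() & expr_b.keys():
        a, b = expr_a[gene], expr_b[gene]
        if a == 0 or b == 0:
            continue  # skip genes with no signal in one sample
        ratio = a / b
        if ratio > fold or ratio < 1.0 / fold:
            hits[gene] = ratio
    return hits

# Hypothetical fluorescence intensities: metastatic vs localized tumour
metastatic = {"MMP2": 820.0, "CD44": 300.0, "GAPDH": 1000.0}
localized  = {"MMP2": 150.0, "CD44": 310.0, "GAPDH":  980.0}

print(differentially_expressed(metastatic, localized))
```

In a real analysis the intensities would come from the fluorescently labeled probes hybridized to the subtractive-library array, and the ratios would normally be computed on normalized, log-scale values.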

Another area of interest for future development is protein-based biochips. These biochips could be used to array protein substrates that could then be used for drug-lead screening or diagnostic tests. If a biosensor apparatus is built into these biochips a further application might be to measure the catalytic activity of various enzymes6. The ability to apply proteins and peptides on a wide variety of chip substrates is currently an area of intense research. The goal is to be able to control the three-dimensional patterning of these proteins on the chips through either nano-patterning on single layers or protein self assembly7.

The future will also see novel practical extensions of biochip applications that enable significant advances to occur without major new technology engineering. For example, a recent study described a novel practical system that allowed high-throughput genotyping of single nucleotide polymorphisms (SNPs) and detection of mutations by allele-specific extension on standard primer arrays. The assay is simple and robust enough to enable an increase in throughput of SNP typing in non-clinical as well as in clinical labs, with significant implications for areas such as pharmacogenomics8.

Finally, another development of protein biochips involves the use of powerful detection methodologies such as surface plasmon resonance (SPR). A recent study describes the use of SPR to detect the interaction between autoantibodies and β2-glycoprotein I (β2GPI) immobilized on protein sensor chips, this interaction being correlated with lupus. SPR enabled the interaction to be detected at a very low density of protein immobilization on the chip, and this approach therefore has significant potential for the future9.

Conclusions

As this fast-maturing field already boasts sales of products, biochips are likely to have a significant business future. We can expect that advances in microfluidic biochip technology will enable the miniaturization of devices that will allow highly sensitive analysis of complex biological interactions in real time. These advances promise to transform genetic diagnostics and drug screening because of their reproducibility, low cost, and speed.

EE-Unit-V Biosensor

A biosensor is an analytical device which converts a biological response into an electrical signal (Figure 1). The term ‘biosensor’ is often used to cover sensor devices used to determine the concentration of substances and other parameters of biological interest even where they do not utilise a biological system directly. This very broad definition is used by some scientific journals (e.g. Biosensors, Elsevier Applied Science) but will not be applied here. The emphasis of this chapter concerns enzymes as the biologically responsive material, but it should be recognised that other biological systems may be utilised by biosensors, for example whole-cell metabolism, ligand binding and the antibody–antigen reaction. Biosensors represent a rapidly expanding field, with an estimated 60% annual growth rate; the major impetus comes from the health-care industry (e.g. 6% of the western world are diabetic and would benefit from the availability of a rapid, accurate and simple biosensor for glucose), with some pressure from other areas such as food quality appraisal and environmental monitoring. The estimated world analytical market is about £12,000,000,000 per year, of which 30% is in the health-care area. There is clearly a vast market expansion potential, as less than 0.1% of this market currently uses biosensors. Research and development in this field is wide and multidisciplinary, spanning biochemistry, bioreactor science, physical chemistry, electrochemistry, electronics and software engineering. Most of the current endeavour concerns potentiometric and amperometric biosensors and colorimetric paper enzyme strips. However, all the main transducer types are likely to be thoroughly examined for use in biosensors over the next few years.

A successful biosensor must possess at least some of the following beneficial features:

  1. The biocatalyst must be highly specific for the purpose of the analyses, be stable under normal storage conditions and, except in the case of colorimetric enzyme strips and dipsticks (see later), show good stability over a large number of assays (i.e. much greater than 100).

  2. The reaction should be as independent of such physical parameters as stirring, pH and temperature as is manageable. This would allow the analysis of samples with minimal pre-treatment. If the reaction involves cofactors or coenzymes, these should preferably also be co-immobilised with the enzyme.

  3. The response should be accurate, precise, reproducible and linear over the useful analytical range, without dilution or concentration. It should also be free from electrical noise.

  4. If the biosensor is to be used for invasive monitoring in clinical situations, the probe must be tiny and biocompatible, having no toxic or antigenic effects. If it is to be used in fermenters it should be sterilisable. This is preferably performed by autoclaving but no biosensor enzymes can presently withstand such drastic wet-heat treatment. In either case, the biosensor should not be prone to fouling or proteolysis.

  5. The complete biosensor should be cheap, small, portable and capable of being used by semi-skilled operators.

  6. There should be a market for the biosensor. There is clearly little purpose developing a biosensor if other factors (e.g. government subsidies, the continued employment of skilled analysts, or poor customer perception) encourage the use of traditional methods and discourage the decentralisation of laboratory testing.

The biological response of the biosensor is determined by the biocatalytic membrane which accomplishes the conversion of reactant to product. Immobilised enzymes possess a number of advantageous features which make them particularly applicable for use in such systems. They may be re-used, which ensures that the same catalytic activity is present for a series of analyses. This is an important factor in securing reproducible results and avoids the pitfalls associated with the replicate pipetting of free enzyme otherwise necessary in analytical protocols. Many enzymes are intrinsically stabilised by the immobilisation process, but even where this does not occur there is usually considerable apparent stabilisation. It is normal to use an excess of the enzyme within the immobilised sensor system. This gives a catalytic redundancy (i.e. an effectiveness factor η ≪ 1) which is sufficient to ensure an increase in the apparent stabilisation of the immobilised enzyme. Even where there is some inactivation of the immobilised enzyme over a period of time, this inactivation is usually steady and predictable. Any activity decay is easily incorporated into an analytical scheme by regularly interpolating standards between the analyses of unknown samples. For these reasons, many such immobilised enzyme systems are re-usable up to 10,000 times over a period of several months. Clearly, this results in a considerable saving in terms of the enzymes’ cost relative to the analytical usage of free soluble enzymes.
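The practice of interpolating standards to compensate for steady activity decay can be sketched as follows (a hypothetical scheme: the function name and all numbers are invented for illustration):

```python
def corrected_concentration(signal, std_before, std_after, position, n_samples,
                            std_conc=5.0):
    """Estimate an unknown's concentration, correcting for enzyme decay.

    A standard of known concentration (std_conc) is run before and after a
    batch of n_samples unknowns; its signal is linearly interpolated to the
    unknown's position in the run, giving the sensitivity at that moment.
    """
    frac = position / (n_samples + 1)            # fraction of the way through
    std_signal = std_before + frac * (std_after - std_before)
    sensitivity = std_signal / std_conc          # signal per concentration unit
    return signal / sensitivity

# Standard (5 mmol/l) read 100 units at the start of the run, 90 at the end;
# the second of four unknowns gave a raw signal of 57 units.
print(corrected_concentration(57.0, 100.0, 90.0, position=2, n_samples=4))
```

Because the decay is steady, a straight-line interpolation between the bracketing standards is usually adequate; more frequent standards simply shorten the interpolation interval.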

When the reaction occurring at the immobilised enzyme membrane of a biosensor is limited by the rate of external diffusion, the reaction process possesses a number of valuable analytical assets. In particular, it follows that the biocatalyst gives a proportional change in reaction rate in response to the reactant (substrate) concentration over a substantial linear range, several times the intrinsic Km. This is very useful, as analyte concentrations are often approximately equal to the Km values of their appropriate enzymes, which is roughly 10 times more concentrated than can normally be determined, without dilution, by use of the free enzyme in solution. It also follows that the reaction rate is largely independent of pH, ionic strength, temperature and inhibitors. This conveniently avoids the tricky problems often encountered due to the variability of real analytical samples (e.g. fermentation broth, blood and urine) and external conditions. Control of the biosensor response by the external diffusion of the analyte can be encouraged by the use of permeable membranes between the enzyme and the bulk solution. The thickness of these can be varied, with associated effects on the proportionality constant between the substrate concentration and the rate of reaction (i.e. increasing the membrane thickness increases the unstirred layer (d) which, in turn, decreases the proportionality constant, kL). Even if total dependence on the external diffusional rate is not achieved (or achievable), any increase in the dependence of the reaction rate on external or internal diffusion will cause a reduction in its dependence on pH, ionic strength, temperature and inhibitor concentrations.
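The linear-range argument can be made concrete with a small numerical sketch (illustrative only: the Vmax, Km and kL values are invented). A diffusion-limited membrane responds in proportion to the bulk substrate concentration well past Km, whereas the free enzyme saturates:

```python
def michaelis_menten(s, vmax=1.0, km=1.0):
    """Rate of the free enzyme in solution: saturates as S exceeds Km."""
    return vmax * s / (km + s)

def diffusion_limited(s, k_l=0.1):
    """Externally diffusion-limited membrane: rate = kL * S, linear in S."""
    return k_l * s

# Compare responses at substrate concentrations expressed in units of Km
for s in [0.5, 1.0, 5.0, 10.0]:
    print(f"S = {s:4.1f} Km   free enzyme: {michaelis_menten(s):.3f}   "
          f"diffusion-limited: {diffusion_limited(s):.3f}")
```

Doubling S from 5 Km to 10 Km doubles the diffusion-limited rate but barely changes the free-enzyme rate, which is why the immobilised sensor stays usefully linear over several multiples of Km.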


Main components of a biosensor

Figure 1. Schematic diagram showing the main components of a biosensor. The biocatalyst (a) converts the substrate to product. This reaction is determined by the transducer (b) which converts it to an electrical signal. The output from the transducer is amplified (c), processed (d) and displayed (e).


The key part of a biosensor is the transducer (shown as the ‘black box’ in Figure 1), which makes use of a physical change accompanying the reaction. This may be

  1. the heat output (or absorbed) by the reaction (calorimetric biosensors),

  2. changes in the distribution of charges causing an electrical potential to be produced (potentiometric biosensors),

  3. movement of electrons produced in a redox reaction (amperometric biosensors),

  4. light output during the reaction or a light absorbance difference between the reactants and products (optical biosensors), or

  5. effects due to the mass of the reactants or products (piezo-electric biosensors).

There are three so-called ‘generations’ of biosensors; First generation biosensors where the normal product of the reaction diffuses to the transducer and causes the electrical response, second generation biosensors which involve specific ‘mediators’ between the reaction and the transducer in order to generate improved response, and third generation biosensors where the reaction itself causes the response and no product or mediator diffusion is directly involved.

The electrical signal from the transducer is often low and superimposed upon a relatively high and noisy (i.e. containing a high frequency signal component of an apparently random nature, due to electrical interference or generated within the electronic components of the transducer) baseline. The signal processing normally involves subtracting a ‘reference’ baseline signal, derived from a similar transducer without any biocatalytic membrane, from the sample signal, amplifying the resultant signal difference and electronically filtering (smoothing) out the unwanted signal noise. The relatively slow nature of the biosensor response considerably eases the problem of electrical noise filtration. The analogue signal produced at this stage may be output directly but is usually converted to a digital signal and passed to a microprocessor stage where the data is processed, converted to concentration units and output to a display device or data store.

EE-Unit V Biosurfactants

Biosurfactants are biological surface-active agents capable of reducing interfacial tension between liquids, solids and gases, thereby allowing them to mix and disperse readily in water or other liquids. (Bio)surfactants are amphiphilic molecules consisting of a hydrophilic and a hydrophobic moiety, which together interact with the phase boundary in heterogeneous systems. The non-polar “tail” is typically a hydrocarbon chain, whereas the polar “head” appears in many different varieties such as carbohydrates, amino acids or phosphates.

Surfactants are used for a wide variety of applications in households, industry and agriculture. They are extensively used in cleaning applications and as formulation aids to promote the solubilisation, emulsification and dispersion of other molecules in products such as chemicals, cosmetics, detergents, foods, textiles and pharmaceuticals. Surfactants play a part in nearly every product and nearly every aspect of human daily life.
In addition to their use as formulation aids, certain surfactants can also be used as active compounds with antimicrobial, antitumor, antiviral or immunological properties, or as inducers of cell differentiation. This has resulted in a number of potential applications and related developments in the biomedical sciences. In plant protection too, apart from their general use as formulation and dispersion aids, certain surfactants are actually the active ingredient. Biosurfactants such as rhamnolipids are known to have very high and specific antimicrobial activity against the zoospores of Phytophthora, one of the most important phytopathogenic fungi.

The large majority of the currently used surfactants are petroleum-based and are produced by chemical means. These compounds are often toxic to the environment and their use may lead to significant environmental problems, particularly in washing applications as these surfactants inevitably end up in the environment after use. The eco-toxicity, bio-accumulation and biodegradability of surfactants are therefore issues of increasing concern. Biosurfactants are an alternative, as they combine good functional properties with low environmental impact and excellent skin compatibility. Moreover, biosurfactants can be produced by fermentation from renewable resources, typically from sugars and vegetable oils.

The structure of biosurfactants is predominantly determined by the producing organism, but can to a certain extent be influenced by the culture conditions. Biosurfactants can be classified into four groups based on their chemical composition: glycolipids (1), oligopeptides and lipopeptides (2), phospholipids, fatty acids and neutral lipids (3), and polymeric biosurfactants (4). In addition to these four basic groups, there also exist biosurfactants built of carbohydrates, fatty acids and peptides, and sometimes external cell components or even whole cells show surface-tension-lowering properties.
The most promising group of biosurfactants is the glycolipids; this group is discussed in more detail below.

A large variety of microorganisms produce potent surface-active agents, biosurfactants, which vary in their chemical properties and molecular size. While the low molecular weight surfactants are often glycolipids, the high molecular weight surfactants are generally either polyanionic heteropolysaccharides containing covalently linked hydrophobic side chains or complexes containing both polysaccharides and proteins. The yield of the biosurfactant greatly depends on the nutritional environment of the growing organism. The enormous diversity of biosurfactants makes them an interesting group of materials for application in many areas such as agriculture, public health, food, health care, waste utilization, and environmental pollution control, such as the degradation of hydrocarbons present in soil.

Biosurfactants (BS) are amphiphilic compounds produced on living surfaces, mostly microbial cell surfaces, or excreted extracellularly and contain hydrophobic and hydrophilic moieties that reduce surface tension (ST) and interfacial tensions between individual molecules at the surface and interface, respectively. Since BS and bioemulsifiers both exhibit emulsification properties, bioemulsifiers are often categorized with BS, although emulsifiers may not lower surface tension. A biosurfactant may have one of the following structures: mycolic acid, glycolipids, polysaccharide–lipid complex, lipoprotein or lipopeptide, phospholipid, or the microbial cell surface itself.

Considerable attention has been given in the past to the production of surface-active molecules of biological origin because of their potential utilization in food processing1–3, pharmacology, and the oil industry. Although the type and amount of the microbial surfactants produced depend primarily on the producer organism, factors such as carbon and nitrogen sources, trace elements, temperature, and aeration also affect their production by the organism.

Hydrophobic pollutants present in petroleum hydrocarbons and in soil and water environments require solubilization before they can be degraded by microbial cells. Mineralization is governed by desorption of hydrocarbons from soil. Surfactants can increase the surface area of hydrophobic materials, such as pesticides in soil and water environments, thereby increasing their water solubility. Hence, the presence of surfactants may increase the microbial degradation of pollutants. The use of biosurfactants for the degradation of pesticides in soil and water environments has gained importance only recently. The identification and characterization of biosurfactants produced by various microorganisms have been extensively reviewed4–6. Therefore, rather than describing the numerous types of biosurfactants and their properties, this article emphasizes the production of biosurfactants and their role in the biodegradation of pesticides.

 

Microbiology

Microorganisms utilize a variety of organic compounds as the source of carbon and energy for their growth. When the carbon source is an insoluble substrate like a hydrocarbon (CxHy), microorganisms facilitate its diffusion into the cell by producing a variety of substances, the biosurfactants. Some bacteria and yeasts excrete ionic surfactants which emulsify the CxHy substrate in the growth medium. Some examples of this group of BS are the rhamnolipids, which are produced by different Pseudomonas sp.7–11, or the sophorolipids, which are produced by several Torulopsis sp.12–14. Some other microorganisms are capable of changing the structure of their cell wall, which they achieve by synthesizing lipopolysaccharides or nonionic surfactants in their cell wall. Examples of this group are: Candida lipolytica and C. tropicalis, which produce cell wall-bound lipopolysaccharides when growing on n-alkanes15,16; and Rhodococcus erythropolis, and many Mycobacterium sp. and Arthrobacter sp., which synthesize nonionic trehalose corynomycolates14,17–23. There are lipopolysaccharides, such as Emulsan, synthesized by Acinetobacter sp.22,23, and lipoproteins or lipopeptides, such as Surfactin and Subtilisin, produced by Bacillus subtilis24–26. Other effective BS are: (i) mycolates and corynomycolates, which are produced by Rhodococcus sp., Corynebacteria sp., Mycobacteria sp., and Nocardia sp.24,27,28; and (ii) ornithine lipids, which are produced by Pseudomonas rubescens, Gluconobacter cerinus, and Thiobacillus ferroxidans29–31. BS produced by various microorganisms together with their properties are listed in Table 1.

Classification and chemical nature of biosurfactants

The microbial surfactants (MS) are complex molecules covering a wide range of chemical types including peptides, fatty acids, phospholipids, glycolipids, antibiotics, lipopeptides, etc. Microorganisms also produce surfactants that are in some cases combinations of many chemical types, referred to as the polymeric microbial surfactants (PMS). Many MS have been purified and their structures elucidated. While the high molecular weight MS are generally polyanionic heteropolysaccharides containing both polysaccharides and proteins, the low molecular weight MS are often glycolipids. The yield of MS varies with the nutritional environment of the growing microorganism. Intact microbial cells that have high cell surface hydrophobicity are themselves surfactants. In some cases, surfactants play a natural role in the growth of microbial cells on water-insoluble substrates like CxHy, sulphur, etc. Exocellular surfactants are involved in cell adhesion, emulsification, dispersion, flocculation, cell aggregation, and desorption phenomena. A broad classification of BS is given in Table 2. A very brief description of each group is given below.

Glycolipids


Glycolipids are the most common types of BS (ref. 32). The constituent mono-, di-, tri- and tetrasaccharides include glucose, mannose, galactose, glucuronic acid, rhamnose, and galactose sulphate. The fatty acid component usually has a composition similar to that of the phospholipids of the same microorganism. The glycolipids can be categorized as:

Trehalose lipids: The serpentine growth seen in many members of the genus Mycobacterium is due to the presence of trehalose esters on the cell surface33,34. Cord factors from different species of Mycobacteria33,35–37, Corynebacteria38, Nocardia, and Brevibacteria differ in size and structure of the mycolic acid esters.

 

Sophorolipids: These are produced by different strains of the yeast Torulopsis. The sugar unit is the disaccharide sophorose, which consists of two β-1,2-linked glucose units. The 6- and 6′-hydroxy groups are generally acetylated. The sophorolipids reduce surface and interfacial tension, although they are not effective emulsifying agents13,39,40. The sophorolipids of Torulopsis have been reported to stimulate41,42, inhibit41,43, and have no effect8 on the growth of yeast on water-insoluble substrates.

 

Rhamnolipids: Some Pseudomonas sp. produce large quantities of a glycolipid consisting of two molecules of rhamnose and two molecules of β-hydroxydecanoic acid44,45. While the OH group of one of the acids is involved in glycosidic linkage with the reducing end of the rhamnose disaccharide, the OH group of the second acid is involved in ester formation. Since one of the carboxylic acids is free, the rhamnolipids are anionic above pH 4.0. Rhamnolipids are reported46 to lower surface tension, emulsify CxHy, and stimulate the growth of Pseudomonas on n-hexadecane. Formation of rhamnolipids by Pseudomonas sp. MVB was greatly increased by nitrogen limitation47. The pure rhamnolipid lowered the interfacial tension against n-hexadecane in water to about 1 mN/m and had a critical micellar concentration (cmc) of 10 to 30 mg/l depending on the pH and salt conditions48.
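The cmc figures quoted above can be visualized with a toy model (all numbers invented, loosely matched to the ~1 mN/m plateau and 10–30 mg/l range in the text): interfacial tension falls roughly log-linearly with concentration up to the cmc and is constant beyond it, so the cmc is the concentration at which adding more surfactant stops lowering the tension.

```python
import math

def interfacial_tension(c_mg_per_l, cmc=20.0, gamma0=40.0, gamma_cmc=1.0):
    """Piecewise toy model: log-linear decrease below the cmc, flat above it."""
    if c_mg_per_l >= cmc:
        return gamma_cmc
    if c_mg_per_l <= 0:
        return gamma0
    # log-linear interpolation between (1 mg/l, gamma0) and (cmc, gamma_cmc)
    frac = math.log(c_mg_per_l) / math.log(cmc)
    return max(gamma_cmc, gamma0 + (gamma_cmc - gamma0) * frac)

def estimate_cmc(concentrations, tol=0.1):
    """Lowest concentration beyond which doubling no longer drops the tension."""
    for c in sorted(concentrations):
        if interfacial_tension(c) - interfacial_tension(c * 2) <= tol:
            return c
    return None

print(estimate_cmc([1, 2, 5, 10, 20, 40, 80]))
```

Experimentally this corresponds to reading the break point off a plot of tension against log concentration from a dilution series.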

 

Fatty acids

The fatty acids produced from alkanes by microbial oxidation have received maximum attention as surfactants49. Besides the straight-chain acids, microorganisms produce complex fatty acids containing OH groups and alkyl branches. Some of these complex acids, for example the corynomycolic acids, are surfactants24,28,50.

 

Phospholipids

These are major components of microbial membranes. When certain CxHy-degrading bacteria51–53 or yeasts54–56 are grown on alkane substrates, the level of phospholipids increases greatly. Phospholipids from hexadecane-grown Acinetobacter sp. have potent surfactant properties. Phospholipids produced by Thiobacillus thiooxidans have been reported to be responsible for wetting elemental sulphur, which is necessary for growth57,58.

 

Surface active antibiotics

Gramicidin S: Many bacteria produce a cyclosymmetric decapeptide antibiotic, gramicidin S. Spore preparations of Bacillus brevis contain large amounts of gramicidin S bound strongly to the outer surface of the spores59,60. Mutants lacking gramicidin S germinate rapidly and do not have a lipophilic surface61. The antibacterial activity of gramicidin S is due to its high surface activity62–65.

 

 

Polymyxins: These are a group of antibiotics produced by Bacillus polymyxa and related bacilli66. Polymyxin B is a decapeptide in which amino acids 3 through 10 form a cyclic octapeptide. A branched-chain fatty acid is connected to the terminal 2,4-diaminobutyric acid (DAB). Polymyxins are able to solubilize certain membrane enzymes67.

 

Surfactin (subtilysin): One of the most active biosurfactants produced by B. subtilis is the cyclic lipopeptide surfactin26,68. The yield of surfactin produced by B. subtilis can be improved to around 0.8 g/l by continuously removing the surfactant by foam fractionation and adding either iron or manganese salts to the growth medium24.

 

Antibiotic TA: Myxococcus xanthus produces antibiotic TA which inhibits peptidoglycan synthesis by interfering with polymerization of the lipid disaccharide pentapeptide69. Antibiotic TA has interesting chemotherapeutic applications70.

 

Polymeric microbial surfactants

Most of these are polymeric heterosaccharides containing proteins.

 

Acinetobacter calcoaceticus RAG-1 (ATCC 31012) emulsan: The bacterium RAG-1 was isolated during an investigation of a factor that limited the degradation of crude oil in sea water; it efficiently emulsified CxHy in water71. This bacterium, Acinetobacter calcoaceticus, was later successfully used to clean a cargo compartment of an oil tanker during its ballast voyage22,72. The cleaning phenomenon was due to the production of an extracellular, high molecular weight emulsifying factor22, emulsan.

 

The polysaccharide–protein complex of Acinetobacter calcoaceticus BD413: A mutant of A. calcoaceticus BD4 excreted large amounts of polysaccharide together with proteins. The emulsifying activity required the presence of both the polysaccharide and the proteins73,74.

 

Other Acinetobacter emulsifiers: Extracellular emulsifier production is widespread in the genus Acinetobacter. In one survey75, 8 of 16 strains of A. calcoaceticus produced high amounts of emulsifier following growth on ethanol medium76,77. This extracellular fraction was extremely active in breaking (de-emulsifying) kerosene/water emulsions stabilized by a mixture of Tween 60 and Span 60.

 

Polysaccharide-lipid complexes from yeast: The partially purified emulsifier, liposan, was reported to contain about 95% carbohydrate and 5% protein78. A CxHy-degrading yeast, Endomycopsis lipolytica YM, produced an unstable alkane-solubilizing factor79. Torulopsis petrophilum produced different types of surfactants depending on the growth medium39. On water-insoluble substrates, the yeast produced glycolipids which were incapable of stabilizing emulsions. When glucose was the substrate, the yeast produced a potent emulsifier.

 

Emulsifying protein (PA) from Pseudomonas aeruginosa: The bacterium P. aeruginosa has been observed to excrete a protein emulsifier. This protein, PA, is produced from long-chain n-alkanes, n-hexadecane, and cetyl alcohol substrates, but not from glucose, glycerol or palmitic acid. The protein has a MW of 14,000 Da and is rich in serine and threonine80.

 

Surfactants from Pseudomonas PG-1: Pseudomonas PG-1 is an extremely efficient hydrocarbon-solubilizing bacterium. It utilizes a wide range of CxHy including gaseous volatile and liquid alkanes, alkenes, pristane, and alkyl benzenes79,81,82.

 

Bioflocculant and emulcyan from the filamentous cyanobacterium Phormidium J-1: The change in cell surface hydrophobicity of Phormidium J-1 was correlated with the production of an emulsifying agent, emulcyan85. The partially purified emulcyan has a MW greater than 10,000 Da and contains carbohydrate, protein and fatty acid esters. Addition of emulcyan to adherent hydrophobic cells resulted in their becoming hydrophilic and detaching from hexadecane droplets or phenyl sepharose beads.

 

Particulate surfactants

Extracellular vesicles from Acinetobacter sp. HO1-N: Acinetobacter sp. HO1-N, when grown on hexadecane, accumulated extracellular vesicles of 20 to 50 nm diameter with a buoyant density of 1.158 g/cm3. These vesicles appear to play a role in the uptake of alkanes by Acinetobacter sp. HO1-N (refs 57, 84).

 

Microbial cells with high cell surface hydrophobicities: Most hydrocarbon-degrading microorganisms, many nonhydrocarbon degraders, some species of Cyanobacteria85, and some pathogens have a strong affinity for hydrocarbon-water70 and air-water86,87 interfaces. In such cases, the microbial cell itself is a surfactant.

 

Factors affecting biosurfactant production

Biosurfactants (BS) are amphiphilic compounds. They contain a hydrophobic and a hydrophilic moiety. The polar moiety can be a carbohydrate, an amino acid, a phosphate group, or some other compound; the nonpolar moiety is mostly a long-carbon-chain fatty acid. Although the various BS possess different structures, there are some general phenomena concerning their biosynthesis. For example, BS production can be induced by hydrocarbons or other water-insoluble substrates88. This effect, described by different authors, applies to many of the interfacially active compounds. Another striking phenomenon is the catabolite repression of BS synthesis by glucose and other primary metabolites. For example, in the case of Arthrobacter paraffineus, no surface-active agent could be isolated from the medium when glucose was used as the carbon source instead of hexadecane89. Similarly, a protein-like activator for n-alkane oxidation was formed by P. aeruginosa S7B1 from hydrocarbon, but not from glucose, glycerol, or palmitic acid80,81. Torulopsis petrophilum did not produce any glycolipids when grown on a single-phase medium containing a water-soluble carbon source13. When glycerol was used as substrate, rhamnolipid production by P. aeruginosa was sharply reduced by adding glucose, acetate, succinate or citrate to the medium8,10.

Olive oil mill effluent, a major pollutant of the agricultural industry in Mediterranean countries, has been used as raw material for rhamnolipid biosurfactant production by Pseudomonas sp. JAMM. Many microorganisms are known to synthesize different types of biosurfactants when grown on several carbon sources6,90. However, there have been examples of the use of a water-soluble substrate for biosurfactant production by microorganisms91,92. The type, quality and quantity of biosurfactant produced are influenced by the nature of the carbon substrate93, the concentration of N, P, Mg, Fe, and Mn ions in the medium9,24,94,95, and the culture conditions, such as pH, temperature, agitation and dilution rate in continuous culture9,95–97.

Biosurfactant production by Pseudomonas strains MEOR 171 and MEOR 172 is not affected by temperature, pH, or Ca and Mg concentrations in the ranges found in many oil reservoirs. Their production, on the other hand, in many cases improves with increased salinity. They are therefore the biosurfactants of choice for the Venezuelan oil industry and for the cosmetics, food, and pharmaceutical markets.

The nitrogen source can be an important key to the regulation of BS synthesis. Arthrobacter paraffineus ATCC 19558 preferred ammonium to nitrate as the inorganic nitrogen source for BS production; urea also resulted in increased BS production89. A change in the growth rate of the microorganism concerned is often sufficient to result in overproduction of BS (ref. 27). In some cases24, addition of multivalent cations to the culture medium can have a positive effect on BS production. Besides the regulation of BS by the chemicals indicated above, compounds such as ethambutol20,98, penicillin99, chloramphenicol23, and EDTA79,100 influence the formation of interfacially active compounds. The regulation of BS production by these compounds operates either through their effect on the solubilization of nonpolar hydrocarbon substrates or through increased production of water-soluble (polar) substrates. In some cases, BS synthesis is regulated by pH and temperature. For example, pH played an important role in rhamnolipid production by Pseudomonas sp.101,102, in cellobiose lipid formation by Ustilago maydis103, and in sophorolipid formation by Torulopsis bombicola42, whereas temperature was important in the case of Arthrobacter paraffineus ATCC 19558 (ref. 104), Rhodococcus erythropolis101,102, and Pseudomonas sp. DSM 2874 (refs 47, 102). In all these cases, however, the yield of BS production was temperature dependent.

 

Applications of biosurfactants in pollution control

The identification and characterization of microbial surfactants (MS) produced by various microorganisms have been extensively reviewed6,88,105–107. Therefore, rather than describing the numerous types of MS, it is proposed to examine their potential applications.

Microbial enhanced oil recovery

An area of considerable potential for BS application is microbial enhanced oil recovery (MEOR). In MEOR, microorganisms in the reservoir are stimulated to produce polymers and surfactants, which aid oil recovery by lowering the interfacial tension at the oil–rock interface. To produce MS in situ, microorganisms in the reservoir are usually provided with low-cost substrates, such as molasses, and inorganic nutrients to promote growth and surfactant production. To be useful for in situ MEOR, bacteria must be able to grow under the extreme conditions encountered in oil reservoirs, such as high temperature, pressure and salinity, and low oxygen levels. Several aerobic and anaerobic thermophiles tolerant of pressure and moderate salinity have been isolated which are able to mobilize crude oil in the laboratory108,109. Clark et al.110, based on a computer search, estimated that about 27% of oil reservoirs in the USA are amenable to microbial growth and MEOR. The effectiveness of MEOR has been reported in field studies carried out in the US, Czechoslovakia, Romania, the USSR, Hungary, Poland, and The Netherlands; significant increases in oil recovery were noted in some cases111.

Hydrocarbon degradation

Hydrocarbon-utilizing microorganisms excrete a variety of biosurfactants. BS, being natural products, are biodegradable and consequently environmentally safe. An important group of BS is the mycolic acids, the α-alkyl, β-hydroxy very-long-chain fatty acids that contribute to characteristic cell properties such as acid fastness, hydrophobicity, adherence, and pathogenicity. Enriching waters and soils with long- and short-chain mycolic acids may be potentially hazardous. Daffe et al.112 reported trehalose polyphthienoylates as a specific glycolipid in virulent strains of Mycobacterium tuberculosis. Kaneda et al.113 reported that granuloma formation and hemopoiesis could be induced by C36–C48 mycolic acid-containing glycolipids from Nocardia rubra. Biolipid extract (BE), obtained as a byproduct during the production of fodder yeast, is a dark brown heavy fluid with a characteristic odour and high interfacial activity. This product has many applications in agrochemistry, mineral flotation, and bitumen production and processing. Potentially, the product may be used as an emulsifying and dispersing agent in formulating herbicides, pesticides, and growth-regulator preparations. Including phospholipids in such formulations facilitates penetration of the active substances into plant tissues114, making it possible to apply only very low concentrations of the substances115. The constituent fatty acids of biolipid extract have antiphytoviral and antifungal activities and can therefore be applied in controlling plant diseases116. These fatty acids also increase the stress tolerance of plants, thereby leading to higher yields despite physiological drought117.

 

Hydrocarbon degradation in the soil environment

CxHy degradation in soil has been extensively studied31,95,118–122. Degradation depends on the presence in soil of hydrocarbon-degrading species of microorganisms, on hydrocarbon composition, and on oxygen availability, water, temperature, pH, and inorganic nutrients. The physical state of CxHy can also affect biodegradation. Addition of synthetic surfactants or MS increased the mobility and solubility of CxHy, which is essential for effective microbial degradation122.

Use of MS in CxHy degradation has produced variable results. In the work of Lindley and Heydeman123, the fungus Cladosporium resinae, grown on alkane mixtures, produced extracellular fatty acids and phospholipids, mainly dodecanoic acid and phosphatidylcholine. Supplementing the growth medium with phosphatidylcholine enhanced the alkane degradation rate by 30%. Foght et al.124 reported that the emulsifier Emulsan stimulated aromatic mineralization by pure bacterial cultures, but inhibited the degradation process when mixed cultures were used. Oberbremer and Müller-Hurtig125 used a mixed soil population to assess CxHy degradation in model oil. Naphthalene was utilized in the first phase of CxHy degradation; other oil components were degraded during the second phase, after the surfactants produced by the microorganisms concerned had lowered the interfacial tension. Addition of biosurfactants, such as some sophorolipids, increased both the extent of degradation and the final biomass yield126.

Biodetox (Germany) described a process to decontaminate soils, industrial sludges, and waste waters127. They also described in situ bioreclamation of contaminated surfaces, deep ground, and ground water. Microorganisms were added by means of a biodegradable Biodetox foam that contained bacteria, nutrients and surfactants. Another method to remove oil contaminants is to add BS to contaminated soil to increase CxHy mobility; the emulsified CxHy can then be recovered through a production well and subsequently degraded above ground in a bioreactor. In situ washing of soil was studied using two synthetic surfactants, Adsee 799 and Hyonic NP-90 (ref. 128). Removal of PCBs and petroleum CxHy from soil by adding surfactants to the wash water has met with some success129.

Several strains of anaerobic bacteria produce biosurfactants130,131. However, the reduction in surface tension they achieve (to 45 to 50 mN/m) is not as large as that achieved by aerobic organisms (to 27 to 50 mN/m) (ref. 106). MS can also be used to enhance the solubilization of toxic organic chemicals, including xenobiotics. Berg et al.132, using the surfactant from Pseudomonas aeruginosa UG2, reported an increase in the solubility of hexachlorobiphenyl added to soil slurries, which resulted in a 31% recovery of the compound in the aqueous phase. This was about three times higher than the amount solubilized by the chemical surfactant sodium ligninsulfonate (9.3%). When the P. aeruginosa bioemulsifier and sodium ligninsulfonate were used together, an additive effect on solubilization (41.5%) was observed. Pseudomonas cepacia AC 1100 produced an emulsifier that formed a stable suspension with 2,4,5-T and also exhibited some emulsifying activity against chlorophenols133. Thus, this emulsifier can be used to enhance bacterial degradation of organochlorine compounds.
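The solubilization figures above can be checked with simple arithmetic. The following sketch, using only the percentages reported in the text, confirms the "about three times higher" comparison and shows that the combined recovery is close to the sum of the individual recoveries, consistent with the reported additive effect:

```python
# Back-of-envelope check of the hexachlorobiphenyl solubilization figures
# reported by Berg et al.; all percentages are taken from the text above.
biosurfactant_recovery = 31.0  # % recovered with the P. aeruginosa UG2 surfactant
chemical_recovery = 9.3        # % recovered with sodium ligninsulfonate alone
combined_recovery = 41.5       # % recovered with both surfactants together

# "About three times higher" than the chemical surfactant alone
fold_increase = biosurfactant_recovery / chemical_recovery
print(f"fold increase: {fold_increase:.1f}x")  # prints "fold increase: 3.3x"

# The combined figure is close to the sum of the individual recoveries,
# which is what "additive effect" means here.
additive_sum = biosurfactant_recovery + chemical_recovery
print(f"sum of individual recoveries: {additive_sum:.1f}% vs observed {combined_recovery}%")
```

The small excess of the observed combined recovery (41.5%) over the arithmetic sum (40.3%) is within the precision of such measurements.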

Hydrocarbon degradation in aquatic environment

When oil is spilled in an aquatic environment, the lighter hydrocarbon components volatilize while the polar components dissolve in water. However, because of the low solubility (< 1 ppm) of oil, most oil components remain on the water surface. The primary means of hydrocarbon removal are photooxidation, evaporation, and microbial degradation. Since CxHy-degrading organisms are present in seawater, biodegradation may be one of the most efficient methods of removing pollutants95,134. Surfactants enhance degradation by dispersing and emulsifying hydrocarbons. Microorganisms able to degrade CxHy have been isolated from aquatic environments; those exhibiting emulsifying activity, as well as the surfactant-producing soil microorganisms, may be useful in aquatic environments. Chakrabarty136 reported that an emulsifier produced by P. aeruginosa SB30 was able to disperse oil quickly into fine droplets, and it may therefore be useful in removing oil from contaminated beaches135. BS produced by oil-degrading bacteria may also be useful in cleaning oil tanks. When an oil-tanker compartment containing oily ballast water was supplemented with urea and K2HPO4 and aerated for 4 days, the tanker was completely free of the thick layer of sludge that remained in the control tanker137. Presumably this was owing to the surfactant produced when growth of the natural bacterial population was enhanced.

Surfactants have been studied for their use in reducing the viscosity of heavy oils, thereby facilitating recovery, transportation, and pipelining138,139. Emulsan, a high-MW lipopolysaccharide produced by A. calcoaceticus RAG-1, has been proposed for a number of applications in the petroleum industry, such as cleaning oil and sludge from barges and tanks, reducing the viscosity of heavy oils, enhancing oil recovery, and stabilizing water-in-oil emulsions in fuels140,141. Specific solubilization of various CxHy types during growth of prokaryotic organisms was demonstrated by Reddy et al.79,81. This specific solubilization was strongly inhibited by EDTA, an inhibition overcome by excess Ca2+. It was concluded that specific solubilization is an important mechanism in the microbial uptake of CxHy.

 

Pesticide-specific biosurfactants

Owing to their biodegradability, biosurfactants are ideally suited for environmental applications, especially for the removal of pesticides, an important step in bioremediation. A survey of the literature reveals that the application of biosurfactants in the field of pesticides is still in its infancy compared with the field of hydrocarbons. In India, a number of laboratories have initiated studies on BS. Some of the earlier works are by: (i) Banerjee et al.133 on 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), (ii) Patel and Gopinath on fenthion142, and (iii) Anu Appaiah and Karanth143 on alpha-HCH. Very recently, reports on the production of microbial BS, based on preliminary studies by several groups, have appeared in posters/proceedings of symposia144–148. A noteworthy feature is the increasing interest shown by various researchers in: (i) degradation of pesticides149–152, (ii) production and exploitation of BS for the removal of pesticides from the environment, and (iii) the possible replacement of synthetic surfactants with biosurfactants in pesticide formulation and clean-up153–156.

 

Biosurfactant and HCH degradation

Hexachlorocyclohexane (HCH) is still the highest-ranking pesticide used in India and many other countries. Of the eight known isomers of HCH, the alpha form constitutes more than 70% of the technical product; it is not only noninsecticidal but also a suspected carcinogen. The use of technical HCH, a mixture of isomers, will continue in the Indian market because of its all-time availability, good insecticidal efficiency, and a price 10–12 times lower than that of pure gamma-HCH (lindane). It is pertinent to note that the environmental burden of already-dumped HCH continues to pose a threat to all forms of life. Poor solubility is one of the limiting factors in the microbial degradation of alpha-HCH. The presence of six chlorines in the molecule is another factor that renders HCH lipophilic and persistent in the biosphere.

Even though several reports are available on the biodegradation of specific isomers of HCH in animals, plants, soil and microbial systems, the literature on the metabolism of alpha-HCH by microorganisms is limited. Furthermore, the exact mechanism of translocation of HCH to the site of destruction, and of the degradation of alpha-HCH in bacteria, is not well understood.

During the course of our work at CFTRI on the bacterial degradation of alpha-HCH, we isolated several bacterial strains capable of degrading HCH. One strain efficient in HCH degradation was characterized as a Pseudomonas Ptm+ strain. The CFTRI isolate produced an extracellular biosurfactant in a mineral medium containing HCH. While this BS emulsified the solid organochlorine HCH to a high extent, it emulsified other organochlorines, such as DDT and the cyclodienes, to a lesser extent156, implying specificity of the BS in dispersing HCH. It was also demonstrated that the peak in production of the emulsifier appeared before the onset of HCH degradation by the Pseudomonas growing in liquid culture. The role of the biosurfactant in HCH degradation was ascertained using partially purified BS. The extracellular BS was a macromolecule containing lipid, carbohydrate, and protein moieties. The carbohydrate part was identified as rhamnose by different analytical methods; it was stable and was necessary for BS activity. Careful investigations revealed that the protein fraction represented the proximal enzymes of HCH metabolism. In the presence of BS, HCH was converted, through the involvement of an isomerase and a dechlorinase, to tetrachlorohexenes and then to chlorophenols157.


The BS acted by increasing the surface area of HCH, which accelerated this transformation. Hence, it is evident that the extracellular BS has a definite role in HCH degradation by the CFTRI strain of Pseudomonas Ptm+. Production of BS for fenthion, a liquid OP insecticide, has also received attention. Bacillus subtilis excreted the BS both in liquid and in solid-state fermentation systems146,147. The microbial surfactants produced by these two organisms also show the properties of a good cleansing agent for dislodging pesticides from used containers, mixing tanks, cargo docks, etc. Attempts have also been made to standardize parameters for BS production in both liquid and solid-state fermentations. A limited number of scale-up studies indicate good scope for exploitation of BS in industry.

In a separate study, it has been shown that addition of the BS from the Pseudomonas Ptm+ strain facilitated a 250-fold increase in the dispersion of HCH in water. Addition of either this organism or its BS also dislodged surface-borne HCH residues from many types of fruits, seeds and vegetables158. Laboratory-scale studies have revealed that the BS is very efficient in cleaning containers whose walls carried adhering HCH residues (Figure 1). Studies using a fermentor for large-scale production of this BS from Pseudomonas Ptm+ have been carried out159. A bioformulation based on this BS is planned for effective removal of HCH from contaminated soils.


Other applications

By virtue of their biodegradability, substrate specificity, chemical and functional diversity, and rapid/controlled inactivation, biosurfactants are gaining importance in various industries such as agriculture, food, textiles, and petrochemicals. The potential applications of biosurfactants having the desired functions and properties are listed in Table 3 (refs 94, 160, 161). The current consumption rate and estimated demand pattern for synthetic surfactants are shown in Table 4, and the number of patents available on the subject is given in Table 5.

BS from some other bacterial taxa may be of public-health concern. Methylrhamnolipids from Pseudomonas aeruginosa have cytotoxic effects163. Lipopolyglycans from mycoplasmas show endotoxic properties, potentially inducing procoagulant activity in human leukocytes164. The toxicity and antigenic properties of glycolipids produced by pathogenic mycobacteria such as M. avium-intracellulare, M. scrofulaceum, and M. fortuitum, which inhabit water polluted with industrial and domestic residues, are well known165,166. The varied uses of BS also imply scope for MS, and the need to strengthen research in this emerging area.

EE-Unit-V Biofertilizers

Biofertilizers are defined as preparations containing living or latent cells of efficient strains of microorganisms that, when applied through seed or soil, help crop plants take up nutrients through their interactions in the rhizosphere. They accelerate certain microbial processes in the soil which augment the availability of nutrients in a form easily assimilated by plants.

Very often microorganisms are not as efficient in natural surroundings as one would expect them to be and therefore artificially multiplied cultures of efficient selected microorganisms play a vital role in accelerating the microbial processes in soil.

Use of biofertilizers is one of the important components of integrated nutrient management, as they are a cost-effective and renewable source of plant nutrients to supplement chemical fertilizers in sustainable agriculture. Several microorganisms, and their associations with crop plants, are being exploited in the production of biofertilizers. They can be grouped in different ways based on their nature and function.

S. No.  Group                            Examples

N2-fixing biofertilizers
1.      Free-living                      Azotobacter, Beijerinckia, Clostridium, Klebsiella, Anabaena, Nostoc
2.      Symbiotic                        Rhizobium, Frankia, Anabaena azollae
3.      Associative symbiotic            Azospirillum

P-solubilizing biofertilizers
1.      Bacteria                         Bacillus megaterium var. phosphaticum, Bacillus subtilis, Bacillus circulans, Pseudomonas striata
2.      Fungi                            Penicillium sp., Aspergillus awamori

P-mobilizing biofertilizers
1.      Arbuscular mycorrhiza            Glomus sp., Gigaspora sp., Acaulospora sp., Scutellospora sp., Sclerocystis sp.
2.      Ectomycorrhiza                   Laccaria sp., Pisolithus sp., Boletus sp., Amanita sp.
3.      Ericoid mycorrhiza               Pezizella ericae
4.      Orchid mycorrhiza                Rhizoctonia solani

Biofertilizers for micronutrients
1.      Silicate and zinc solubilizers   Bacillus sp.

Plant growth promoting rhizobacteria
1.      Pseudomonas                      Pseudomonas fluorescens

2. Different types of biofertilizers

Rhizobium

Rhizobium is a soil-inhabiting bacterium that colonizes legume roots and fixes atmospheric nitrogen symbiotically. The morphology and physiology of Rhizobium vary from the free-living condition to the bacteroid form in nodules. Rhizobia are the most efficient biofertilizer in terms of the quantity of nitrogen fixed. They comprise seven genera and are highly specific in the legumes they nodulate, a specificity referred to as the cross-inoculation group.

Rhizobium inoculant was first made in the USA and commercialized by private enterprise in the 1930s; the strange situation at that time has been chronicled by Fred (1932).

Initially, owing to the absence of efficient bradyrhizobial strains in the soil, soybean inoculation resulted in bumper crops; but incessant inoculation over the last four decades by US farmers has resulted in the build-up of a plethora of inefficient strains in soil, whose replacement by efficient strains of bradyrhizobia has become an insurmountable problem.

Azotobacter

Of the several species of Azotobacter, A. chroococcum happens to be the dominant inhabitant of arable soils, capable of fixing N2 (2–15 mg N2 fixed per g of carbon source) in culture media.
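As a quick unit-conversion sketch of the fixation rate quoted above (the 1 kg substrate mass is a made-up illustration, not a figure from the text):

```python
# Convert the reported Azotobacter fixation rate (2-15 mg N2 fixed per g
# of carbon source) into g N2 fixed per kg of carbon source supplied.
low_rate_mg_per_g = 2    # lower bound from the text, mg N2 per g carbon source
high_rate_mg_per_g = 15  # upper bound from the text, mg N2 per g carbon source

substrate_kg = 1.0       # hypothetical amount of carbon source

# 1 kg substrate = 1000 g of substrate; 1000 mg N2 = 1 g N2, so the
# factors cancel and mg/g equals g/kg.
low_g_n = low_rate_mg_per_g * substrate_kg * 1000 / 1000
high_g_n = high_rate_mg_per_g * substrate_kg * 1000 / 1000
print(f"{low_g_n:.0f}-{high_g_n:.0f} g N2 fixed per {substrate_kg:.0f} kg carbon source")
# prints "2-15 g N2 fixed per 1 kg carbon source"
```

In other words, mg per g scales directly to g per kg, so each kilogram of carbon source supports only a few grams of fixed nitrogen in culture.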

The bacterium produces abundant slime, which helps in soil aggregation. The numbers of A. chroococcum in Indian soils rarely exceed 10⁵/g soil, owing to the lack of organic matter and the presence of antagonistic microorganisms in the soil.

Azospirillum

Azospirillum lipoferum and A. brasilense (Spirillum lipoferum in the earlier literature) are primary inhabitants of the soil, the rhizosphere, and the intercellular spaces of the root cortex of graminaceous plants, with which they form an associative symbiotic relationship.

The bacteria of the genus Azospirillum are N2-fixing organisms isolated from the roots and above-ground parts of a variety of crop plants. They are Gram-negative, vibrioid or spirillum-shaped cells with abundant accumulation of poly-β-hydroxybutyrate (70%) in the cytoplasm.

Five species of Azospirillum have been described to date: A. brasilense, A. lipoferum, A. amazonense, A. halopraeferens and A. irakense. The organism proliferates under both anaerobic and aerobic conditions, but it is preferentially microaerophilic in both the presence and the absence of combined nitrogen in the medium.

Apart from nitrogen fixation, production of growth-promoting substances (IAA), disease resistance and drought tolerance are some of the additional benefits of Azospirillum inoculation.

Cyanobacteria

Both free-living and symbiotic cyanobacteria (blue-green algae) have been harnessed in rice cultivation in India. A composite culture of BGA containing heterocystous Nostoc, Anabaena, Aulosira, etc. is grown as a primary inoculum in trays or polythene-lined pots and later mass-multiplied in the field for application as soil-based flakes to rice fields at the rate of 10 kg/ha. The final product is not free from extraneous contaminants and is not often monitored for the presence of the desired algal flora.

Although once widely publicized as a biofertilizer for the rice crop, BGA presently attracts little attention from rice growers across India, except in pockets of the southern states, notably Tamil Nadu. The benefits of algalization can be as much as 20–30 kg N/ha under ideal conditions, but the labour-intensive methodology for preparing BGA biofertilizer is in itself a limitation. Quality-control measures are not usually followed, except perhaps random qualitative checking for the presence of the desired species.

Azolla

Azolla is a free-floating water fern that fixes atmospheric nitrogen in association with the nitrogen-fixing blue-green alga Anabaena azollae. Azolla fronds consist of a sporophyte with a floating rhizome, small overlapping bi-lobed leaves, and roots. Rice-growing areas in South East Asia and other Third World countries have recently been evincing increased interest in the use of this symbiotic N2-fixing water fern, either as an alternative nitrogen source or as a supplement to commercial nitrogen fertilizers. Azolla is used as a biofertilizer for wetland rice and is known to contribute 40–60 kg N/ha per rice crop.

Phosphate solubilizing microorganisms (PSM)

Several soil bacteria and fungi, notably species of Pseudomonas, Bacillus, Penicillium, Aspergillus, etc., secrete organic acids that lower the pH in their vicinity and bring about dissolution of bound phosphates in soil. Increased yields of wheat and potato were demonstrated following inoculation with peat-based cultures of Bacillus polymyxa and Pseudomonas striata. Currently, phosphate solubilizers are manufactured by agricultural universities and some private enterprises and sold to farmers through governmental agencies. There appears to be no check on either the quality of the inoculants marketed in India or the establishment of the desired organisms in the rhizosphere.

AM fungi

The transfer of nutrients, mainly phosphorus but also zinc and sulphur, from the soil milieu to the cells of the root cortex is mediated by intracellular obligate fungal endosymbionts of the genera Glomus, Gigaspora, Acaulospora, Sclerocystis and Endogone, which possess vesicles for the storage of nutrients and arbuscules for funnelling these nutrients into the root system. By far the commonest genus appears to be Glomus, which has several species distributed in soil.

The non-availability of pure cultures of AM (arbuscular mycorrhiza) fungi is an impediment to large-scale production, despite the fact that the beneficial effects of AM fungal inoculation of plants have been repeatedly shown under laboratory conditions, especially in conjunction with nitrogen fixers.

                                          
Silicate solubilizing bacteria (SSB)

Microorganisms are capable of degrading silicates and aluminium silicates. Several organic acids are produced during microbial metabolism, and these play a dual role in silicate weathering: they supply H+ ions to the medium and promote hydrolysis, while organic acids such as citric acid, oxalic acid, keto acids and hydroxycarboxylic acids form complexes with cations, promoting their removal and retention in the medium in a dissolved state.

Studies with a Bacillus sp. isolated from the soil of a granite-crusher yard showed that the bacterium is capable of dissolving several silicate minerals under in vitro conditions. Examination of anthropogenic materials such as cement, and agro-inputs such as superphosphate and rock phosphate, revealed silicate-solubilizing bacteria to varying degrees. Bacterial isolates made from different locations had varying silicate-solubilizing potential. Soil inoculation studies with a selected isolate in red soil, clay soil, sand and hilly soil showed that the organism multiplied in all soil types and released more silica, increasing the available silica in soil and water. Rice responded well to the application of organic siliceous residues such as rice straw, rice husk and black ash @ 5 t/ha. Combining SSB with these residues resulted in further increases in plant growth and grain yield, owing to increased dissolution of silica and nutrients from the soil.

Plant Growth Promoting Rhizobacteria (PGPR)

Bacteria that colonize roots or rhizosphere soil and are beneficial to crops are referred to as plant growth promoting rhizobacteria (PGPR).

The PGPR inoculants currently commercialized promote growth through at least one of three mechanisms: suppression of plant disease (termed bioprotectants), improved nutrient acquisition (termed biofertilizers), or phytohormone production (termed biostimulants). Species of Pseudomonas and Bacillus can produce as-yet poorly characterized phytohormones or growth regulators that cause crops to develop greater amounts of fine roots, which increases the absorptive surface of the root system for the uptake of water and nutrients. Such PGPR are referred to as biostimulants, and the phytohormones they produce include indole-acetic acid, cytokinins, gibberellins and inhibitors of ethylene production.

Recent advances in molecular techniques are also encouraging: tools are becoming available to determine the mechanisms by which PGPR improve crop performance and to track the survival and activity of PGPR organisms in soil and on roots. The science of PGPR has reached the stage where genetically modified PGPR, with enhanced antibiotic, phytohormone and siderophore production, can be produced.

Despite promising results, biofertilizers have not found widespread application in agriculture, mainly because of the variable response of plant species or genotypes to inoculation, depending on the bacterial strain used. The differential rhizosphere effect of crops in harbouring a target strain, or the modulation of bacterial nitrogen-fixing and phosphate-solubilizing capacity by specific root exudates, may account for the observed differences. On the other hand, good competitive ability and high saprophytic competence are the major factors determining the success of a bacterial strain as an inoculant.

Studies of the synergistic activities and persistence of specific microbial populations in complex environments, such as the rhizosphere, should be undertaken in order to obtain efficient inoculants. In this regard, research efforts are being made at the Agricultural College and Research Institute, Madurai, to obtain appropriate formulations of microbial inoculants incorporating nitrogen-fixing, phosphate- and silicate-solubilizing bacteria and plant growth promoting rhizobacteria, which will help promote the use of such beneficial bacteria in sustainable agriculture.

Liquid Biofertilizers

Biofertilizers are such as Rhizobium, Azospirillum and Phosphobacteria  provide nitrogen  and phosphorous nutrients to crop plants through nitrogen fixation and phosphorous solubilization processes. These Biofertilizers could be effectively utilized for rice, pulses, millets, cotton, sugarcane, vegetable and other horticulture crops.

Biofertilizer is a prime input in organic farming; it not only enhances crop growth and yield but also improves soil health and sustains soil fertility.

At present, biofertilizers are supplied to farmers as carrier-based inoculants. As an alternative, a liquid formulation technology has been developed in the Department of Agricultural Microbiology, TNAU, Coimbatore, which has more advantages than the carrier inoculants.

Benefits

The advantages of Liquid Bio-fertilizer over conventional carrier based Bio-fertilizers are listed below:

  • Longer shelf life of 12-24 months.
  • No contamination.
  • No loss of properties during storage up to 45°C.
  • Greater potential to compete with the native population.
  • High populations of more than 10^9 cells/ml can be maintained for 12 to 24 months.
  • Easy identification by a typical fermented smell.
  • Cost saving on carrier material, pulverization, neutralization, sterilization, packing and transport.
  • Quality control protocols are easy and quick.
  • Better survival on seeds and in soil.
  • No need to run biofertilizer production units throughout the year.
  • Very easy for the farmer to use.
  • Dosage is 10 times lower than for carrier based powder biofertilizers.
  • High commercial revenues.
  • High export potential.
  • Very high enzymatic activity, since contamination is nil.

Characteristics of different liquid Bio-fertilizers

Rhizobium

Rhizobium belongs to the bacterial group, and the classical example is symbiotic nitrogen fixation. The bacteria infect legume roots and form root nodules, within which they reduce molecular nitrogen to ammonia, which is readily utilized by the plant to produce valuable proteins, vitamins and other nitrogen containing compounds. The site of symbiosis is within the root nodules. It has been estimated that 40-250 kg N/ha/year is fixed by different legume crops through the microbial activity of Rhizobium. In terms of the percentage of nodules occupied, nodule dry weight, plant dry weight and grain yield per plant, the multistrain inoculant was highly promising. Table-2 shows the N fixation rates.

Quantity of biological N fixed by liquid Rhizobium in different crops

Host Group | Rhizobium Species | Crops | N fixed (kg/ha)
Pea group | Rhizobium leguminosarum | Green pea, Lentil | 62-132
Soybean group | R. japonicum | Soybean | 57-105
Lupini group | R. lupini | Lupinus, Ornithopus | 70-90
Alfalfa group | R. meliloti | Medicago, Trigonella, Melilotus | 100-150
Beans group | R. phaseoli | Phaseolus | 80-110
Clover group | R. trifolii | Trifolium | 130
Cowpea group | R. species | Moong, Redgram, Cowpea, Groundnut | 57-105
Cicer group | R. species | Bengal gram | 75-117
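For quick reference, the ranges in the table above can be captured in a small lookup. This is only an illustrative sketch; the dictionary name and structure are assumptions for the example, not part of any standard tool.

```python
# Approximate biological N fixed by liquid Rhizobium (kg/ha), taken
# from the table above; each range is stored as a (low, high) tuple.
N_FIXED_KG_PER_HA = {
    "pea group": (62, 132),
    "soybean group": (57, 105),
    "lupini group": (70, 90),
    "alfalfa group": (100, 150),
    "beans group": (80, 110),
    "clover group": (130, 130),
    "cowpea group": (57, 105),
    "cicer group": (75, 117),
}

low, high = N_FIXED_KG_PER_HA["cowpea group"]
print(f"Cowpea group fixes {low}-{high} kg N/ha")
```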

Physical features of liquid Rhizobium

  • Dull white in colour
  • No bad smell
  • No foam formation, pH 6.8-7.5

Azospirillum

Azospirillum belongs to the bacteria and is known to fix a considerable quantity of nitrogen, in the range of 20-40 kg N/ha, in the rhizosphere of non-leguminous plants such as cereals, millets, oilseeds and cotton. The efficiency of Azospirillum as a biofertilizer has increased because of its ability to induce abundant roots in several plants like rice, millets and oilseeds, even in upland conditions. A considerable quantity of nitrogen fertilizer, up to 25-30%, can be saved by the use of Azospirillum inoculant. The genus Azospirillum has three species, viz., A. lipoferum, A. brasilense and A. amazonense. These species have been commercially exploited for use as nitrogen supplying biofertilizers.

One of the characteristics of Azospirillum is its ability to reduce nitrate and denitrify. Both A. lipoferum and A. brasilense may comprise strains which actively or weakly denitrify or reduce nitrate to nitrite; therefore, for inoculant preparation, it is necessary to select strains which do not possess these characteristics. Azospirillum lipoferum is present in the roots of some tropical forage grasses such as Digitaria, Panicum, Brachiaria, maize, sorghum, wheat and rye.

Physical  features of liquid Azospirillum

  • The colour of the liquid may be blue or dull white.
  • A bad odour indicates an improper liquid formulation and may mean the product is mere broth.
  • Production of yellow gummy material confirms a quality product.
  • An acidic pH always confirms that there are no Azospirillum bacteria in the liquid.

N2 fixing capacity of Azospirillum in the roots of several plants, and the amount of N2 fixed.

Plant | mg N2 fixed/g of substrate
Oryza sativa (Paddy) | 28
Sorghum bicolor (Sorghum) | 20
Zea mays (Maize) | 20
Panicum sp. | 24
Cynodon dactylon | 36
Setaria sp. | 12
Amaranthus spinosus | 16

Production of growth hormones

Azospirillum cultures synthesize considerable amounts of biologically active substances like vitamins, nicotinic acid, indole acetic acid and gibberellins. All these hormones/chemicals help the plants achieve better germination, early emergence and better root development.

Role of Liquid Azospirillum under field conditions

  • Stimulates growth and imparts the green colour characteristic of a healthy plant.
  • Aids the utilization of potash, phosphorus and other nutrients.
  • Encourages plumpness and succulence of fruits and increases protein percentage.

Signs of non-functioning of Azospirillum in the field

  • No growth promotion activity
  • Yellowish green colour of leaves, which indicates no fixation of Nitrogen

Azotobacter

Azotobacter is an important and well known free living nitrogen fixing aerobic bacterium. It is used as a biofertilizer for all non-leguminous plants, especially rice, cotton and vegetables. Azotobacter cells are not present on the rhizoplane but are abundant in the rhizosphere region. The lack of organic matter in the soil is a limiting factor for the proliferation of Azotobacter in the soil.

Field experiments were conducted in 1992, 1993 and 1994 during the pre-kharif wet seasons to find out the influence on rice grain yield of the combined use of N-fixing organisms and inorganic nitrogen fertilizer, and an increase in yield was recorded.

Physical features of liquid Azotobacter

The pigmentation produced by Azotobacter in aged cultures is melanin, which results from the oxidation of tyrosine by tyrosinase, a copper-containing enzyme. The colour can be noted in liquid formulations. Some of the pigmentations are described below:

  • A. chroococcum: Produces brown-black pigmentation in liquid inoculum.
  • A. beijerinckii: Produces yellow-light brown pigmentation in liquid inoculum.
  • A. vinelandii: Produces green fluorescent pigmentation in liquid inoculum.
  • A. paspali: Produces green fluorescent pigmentation in liquid inoculum.
  • A. macrocytogenes: Produces pink pigmentation in liquid inoculum.
  • A. insignis: Produces less gum and grayish-blue pigmentation in liquid inoculum.
  • A. agilis: Produces green fluorescent pigmentation in liquid inoculum.

Role of liquid Azotobacter in tissue culture

A study was conducted by Dr. Senthil et al. (2004) on sugarcane variety CO 86032 in the tissue culture laboratories of Rajashree Sugars and Chemicals Ltd, Varadaraj Nagar, Theni, Tamil Nadu. The liquid bioinoculants were provided by Dr. Krishnan Chandra, Regional Director, RCOF, Bangalore, to evaluate their growth promoting effects on sugarcane micropropagation. Biometric observations (plant height, leaf length and width, root length, number of roots) and chemical parameters (protein, carbohydrates, N, P, K and total biomass) were recorded, and the conclusions were as follows:

  • The performance of the Azotobacter liquid inoculant was comparatively better than all other treatments in 10% MS medium, followed by Azospirillum.
  • The performance of the Azotobacter liquid inoculant was comparatively better than all other treatments, followed by Azospirillum, for the growth of polybag sugarcane seedlings.

Role of liquid Azotobacter as a Bio-control agent

Azotobacter has been found to produce antifungal substances which inhibit the growth of some soil fungi like Aspergillus, Fusarium, Curvularia, Alternaria and Helminthosporium.

Acetobacter

Acetobacter is a saccharophilic bacterium that associates with sugarcane, sweet potato and sweet sorghum plants and fixes about 30 kg N/ha/year. This bacterium is commercialized mainly for sugarcane. It is known to increase yield by 10-20 t/acre and sugar content by about 10-15 percent.

Effect of liquid Acetobacter diazotrophicus on sugarcane

In South India, the use of Azospirillum and phosphobacteria on the cash crop sugarcane has been a regular practice for the past few years, with a saving of nearly 20% of chemical nitrogen and phosphate applications. It has now been reported that the bacterium Acetobacter diazotrophicus, which is present in sugarcane stems, leaves and soils, has the capacity to fix up to 300 kg of nitrogen. This bacterium was first reported in Brazil, where farmers cultivating sugarcane in very poor sub-soil, fertilized with phosphate, potassium and micro-elements alone, could produce yields for three consecutive harvests without any nitrogen fertilizer. They recorded yields of 182-244 tonnes per ha. This leads to the assumption that active nitrogen fixing bacteria are associated with the plant.

Do’s and Don’ts for entrepreneurs, dealers and farmers

Do | Don’t
Keep bio-fertilizer bottles away from direct heat and sunlight; store in a cool, dry place. | Don’t store bio-fertilizer bottles under heat or sunlight.
Sell only bio-fertilizer bottles which carry the batch number, the name of the crop on which the product is to be used, the date of manufacture and the expiry period. | Don’t sell bio-fertilizer bottles after their expiry period is over.
If the expiry period is over, discard the bottle, as the product is no longer effective. | Don’t prick holes into the bottles or puncture them to pour out the contents.
Keep bio-fertilizer bottles away from fertilizer and pesticide containers; they should not be mixed directly. | Don’t mix the bio-fertilizers with fungicides, insecticides, herbicides or chemical fertilizers.

Liquid Bio-fertilizer application methodology
There are three ways of using Liquid Bio-fertilizers

  1. Seed treatment
  2. Root dipping
  3. Soil application

Seed Treatment

Seed treatment is the most common method, adopted for all types of inoculants. It is effective and economical. For small quantities of seed (up to 5 kg), the coating can be done in a plastic bag. For this purpose, a plastic bag of size 21" x 10" or bigger can be used. The bag should be filled with 2 kg or more of seed, then closed in such a way as to trap as much air as possible. The bag should be squeezed for 2 minutes or more until all the seed is uniformly wetted. The bag is then opened, inflated again and shaken gently; stop shaking once each seed has a uniform layer of culture coating. The bag is opened and the seed dried under shade for 20-30 minutes. For larger amounts of seed, coating can be done in a bucket and the inoculant mixed in directly by hand. Seed treatment with Rhizobium, Azotobacter or Azospirillum, along with PSM, can be done.

The seed treatment can be done with two or more bacteria; there is no antagonistic (side) effect. The important point to keep in mind is that the seeds must be coated first with Rhizobium, Azotobacter or Azospirillum. Once each seed has a layer of these bacteria, the PSM inoculant is coated as the outer layer. This method provides the maximum number of each bacterium required for better results; coating the seed with two bacteria simultaneously will not provide the maximum number of each bacterium on an individual seed.

Root dipping

This method is used for applying Azospirillum/PSM to transplanted paddy and vegetable crops. The required quantity of Azospirillum/PSM is mixed with 5-10 litres of water at one corner of the field, and the roots of the seedlings are dipped for a minimum of half an hour before transplantation.

Soil application

Use 200 ml of PSM per acre. Mix the PSM with 400 to 600 kg of cow dung FYM, along with half a bag of rock phosphate if available. The mixture of PSM, cow dung and rock phosphate should be kept under a tree or in shade overnight, maintaining 50% moisture. Use the mixture as a soil application in rows or during levelling of the soil.

Dosage of liquid Bio-fertilizers in different crops

Recommended liquid biofertilizers, their application methods and the quantities to be used for different crops are as follows:

Crop | Recommended Bio-fertilizer | Application method | Quantity to be used

Field crops
Pulses: chickpea, pea, groundnut, soybean, beans, lentil, lucerne, berseem, green gram, black gram, cowpea and pigeon pea | Rhizobium | Seed treatment | 200 ml/acre
Cereals: wheat, oat, barley | Azotobacter/Azospirillum | Seed treatment | 200 ml/acre
Rice | Azospirillum | Seed treatment | 200 ml/acre
Oil seeds: mustard, sesamum, linseed, sunflower, castor | Azotobacter | Seed treatment | 200 ml/acre
Millets: pearl millet, finger millet, kodo millet | Azotobacter | Seed treatment | 200 ml/acre
Maize and sorghum | Azospirillum | Seed treatment | 200 ml/acre
Forage crops and grasses: Bermuda grass, Sudan grass, Napier grass, Para grass, Star grass, etc. | Azotobacter | Seed treatment | 200 ml/acre

Other miscellaneous plantation crops
Tobacco | Azotobacter | Seedling treatment | 500 ml/acre
Tea, coffee | Azotobacter | Soil treatment | 400 ml/acre
Rubber, coconut | Azotobacter | Soil treatment | 2-3 ml/plant

Agro-forestry/fruit plants
All fruit/agro-forestry plants (herbs, shrubs, annuals and perennials) grown for fuel wood, fodder, fruits, gum, spices, leaves, flowers, nuts and seeds | Azotobacter | Soil treatment | 2-3 ml/plant at nursery
Leguminous plants/trees | Rhizobium | Soil treatment | 1-2 ml/plant

Note:

The doses above are recommended for an inoculum count of 1 × 10^9 cells/ml; when the count is 1 × 10^8 cells/ml, the doses should be ten times more. Besides the above nitrogen fixers, phosphate solubilizers and potash mobilizers at the rate of 200 ml/acre can be applied for all crops.
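The dose adjustment in the note above can be expressed as a small calculation. This is a minimal sketch; the function name and the assumption of simple inverse scaling with cell count are illustrative, not part of any standard.

```python
def adjusted_dose(base_dose_ml, cell_count_per_ml, standard_count=1e9):
    """Scale a recommended liquid biofertilizer dose when the inoculum
    count differs from the standard 1 x 10^9 cells/ml: a ten-fold
    lower count calls for a ten-fold larger dose (per the note above)."""
    factor = standard_count / cell_count_per_ml
    return base_dose_ml * factor

# 200 ml/acre at the standard count stays 200 ml/acre
print(adjusted_dose(200, 1e9))  # 200.0
# at 10^8 cells/ml the dose becomes 2000 ml/acre
print(adjusted_dose(200, 1e8))  # 2000.0
```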

Equipments required for Biofertilizer production

In the biofertilizer production industry, equipment is the major infrastructure, accounting for 70 percent of the capital investment. Any compromise on the use of the equipment mentioned below may ultimately lead to a decline in the quality of the biofertilizer. After studying the principle behind each instrument, some of the instruments can be replaced with a culture room fitted with a U.V. lamp. Autoclaves, hot air ovens, incubators and sealing machines are made indigenously with proper technical specifications. The correct use of the equipment will give uninterrupted production of quality inoculum.

Essential equipments

Autoclave

The autoclave is an apparatus in which materials are sterilized by air-free saturated steam under pressure at a temperature above 100°C. If the steam pressure inside the autoclave is increased to 15 psi, the temperature rises to 121°C, which is sufficient to destroy all vegetative cells. Normally all growth media are sterilized in the autoclave.

Laminar air flow chamber

The laminar air flow chamber provides a uniform flow of filtered air. This continuous flow of air prevents the settling of particles in the work area, so airborne contamination is avoided in the chamber. Culture transfers and inoculation can be done here.

BOD incubators 

Incubators provide the controlled conditions (light, temperature, humidity, etc.) required for the growth and development of microorganisms. Multiplication of the starter culture can be done in this instrument.

Rotary shaker

The rotary shaker is used for agitating culture flasks with a circular motion under variable speed control. Shaking provides aeration for the growth of cultures. Shakers holding 20-50 flasks are generally used; the capacity may be increased with a double-decker type.

Hot air oven

The hot air oven is meant for sterilizing all glassware. Dry heat is used in this apparatus to sterilize the materials; normally 180°C for two hours is used for sterilizing glassware.

pH meter

The pH meter is an instrument for measuring the pH of a solution on a 0-14 scale, in which seven represents the neutral point, less than seven acidity (an excess of H+ over OH-) and more than seven alkalinity (an excess of OH- over H+). It is useful in adjusting the pH of the growth medium.
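The classification of a pH reading described above can be sketched in a few lines of code. The function name is illustrative only.

```python
def classify_ph(ph):
    """Classify a 0-14 pH reading: 7 is neutral, below 7 acidic
    (excess of H+ over OH-), above 7 alkaline (excess of OH- over H+)."""
    if not 0 <= ph <= 14:
        raise ValueError("pH must lie on the 0-14 scale")
    if ph < 7:
        return "acidic"
    if ph > 7:
        return "alkaline"
    return "neutral"

# The yeast extract mannitol medium for Rhizobium is adjusted to pH 6.8
print(classify_ph(6.8))  # acidic
```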

Refrigerator

The refrigerator is used for preserving all the mother cultures used for biofertilizer production. The mother culture is periodically sub-cultured and stored in the refrigerator for long-term use.

Fermentor

A fermentor is the equipment which provides the proper environment for the growth of a desired organism.

It is generally a large vessel in which the organism can be kept at the required temperature, pH, dissolved oxygen concentration and substrate concentration. Different models of fermentors are available depending on the necessity.

A simple model contains a steam generator, sterilization process devices and an agitator.

A sophisticated fermentor contains a pH regulator, oxygen level regulator, anti-foam device, temperature controller, etc.


3. Mass production of Bacterial Biofertilizers

Azospirillum Rhizobium Phosphobacteria Azotobacter

Biofertilizers are carrier based preparations containing efficient strains of nitrogen fixing or phosphate solubilizing microorganisms. Biofertilizers are usually formulated as carrier based inoculants. Organic carrier materials are more effective for the preparation of bacterial inoculants, as solid carriers hold a larger number of bacterial cells and support the survival of the cells for longer periods of time.

The mass production of carrier based bacterial biofertilizers involves three stages:

  • Culturing of microorganisms
  • Processing of carrier material
  • Mixing the carrier and the broth culture, and packing

Culturing of Microorganisms

Although many bacteria can be used beneficially as biofertilizers, the technique of mass production has been standardized for Rhizobium, Azospirillum, Azotobacter and phosphobacteria. The media used for mass culturing are as follows:

Rhizobium : Yeast extract mannitol broth.

Growth on Congo red yeast extract mannitol agar medium

Mannitol 10.0 g
K2HPO4 0.5 g
MgSO4.7H2O 0.2 g
NaCl 0.1 g
Yeast extract 0.5 g
Agar 20.0 g
Distilled water 1000.0 ml

Add 10 ml of Congo red stock solution (dissolve 250 mg of Congo red in 100 ml of water) per litre after adjusting the pH to 6.8 and before adding the agar.

Rhizobium forms white, translucent, glistening, elevated and comparatively small colonies on this medium. Moreover, Rhizobium colonies do not take up the colour of the Congo red dye added to the medium. Colonies which readily take up the Congo red stain are not rhizobia but presumably Agrobacterium, a soil bacterium closely related to Rhizobium.

Azospirillum : Dobereiner’s malic acid broth with NH4Cl (1g per liter)

Composition of the N-free semisolid malic acid medium

Malic acid 5.0g
Potassium hydroxide 4.0g
Dipotassium hydrogen orthophosphate 0.5g
Magnesium sulphate 0.2g
Sodium chloride 0.1g
Calcium chloride 0.2g
Fe-EDTA (1.64% w/v aqueous) 4.0 ml
Trace element solution 2.0 ml
BTB (0.5% alcoholic solution) 2.0 ml
Agar 1.75 g
Distilled water 1000 ml
pH 6.8
Trace element solution
Sodium molybdate 200 mg
Manganous sulphate 235 mg
Boric acid 280 mg
Copper sulphate 8 mg
Zinc sulphate 24 mg
Distilled water 200 ml

Waksman medium No.77 (N-free Mannitol Agar Medium for Azotobacter)

Mannitol : 10.0 g
CaCO3 : 5.0 g
K2HPO4 : 0.5 g
MgSO4.7H2O : 0.2 g
NaCl : 0.2 g
Ferric chloride : Trace
MnSO4.4H2O : Trace
N-free washed agar : 15.0 g
pH : 7.0
Distilled water : 1000 ml

Phosphobacteria : Pikovskaya’s Broth

Glucose : 10.0 g
Ca3(PO4)2 : 5.0 g
(NH4)2SO4 : 0.5 g
KCl : 0.2 g
MgSO4.7H2O : 0.1 g
MnSO4 : Trace
FeSO4 : Trace
Yeast Extract : 0.5 g
Distilled Water : 1000 ml

The broth is prepared in flasks, and inoculum from the mother culture is transferred to the flasks. The culture is grown under shaking conditions at 30±2°C as a submerged culture, and incubated until a maximum cell population of 10^10 to 10^11 cfu/ml is produced. Under optimum conditions this population level can be attained within 4 to 5 days for Rhizobium, 5 to 7 days for Azospirillum, 2 to 3 days for phosphobacteria and 6 to 7 days for Azotobacter. The culture obtained in the flask is called the starter culture. For large scale production of inoculant, inoculum from the starter culture is transferred to large flasks or a seed tank fermentor and grown until the required cell count is reached.
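Population levels like 10^10 to 10^11 cfu/ml are estimated by standard serial-dilution plate counting. The sketch below shows the usual back-calculation; the function name and the example figures (colony count, dilution, plated volume) are illustrative assumptions, not values from the text.

```python
def cfu_per_ml(colony_count, dilution, volume_plated_ml=0.1):
    """Back-calculate viable cells (cfu/ml) from a dilution plate count:
    cfu/ml = colonies / (dilution factor x volume plated)."""
    return colony_count / (dilution * volume_plated_ml)

# e.g. 120 colonies on a 10^-8 dilution plate with 0.1 ml plated
count = cfu_per_ml(120, 1e-8, 0.1)
print(f"{count:.1e}")  # 1.2e+11 -- within the 10^10 to 10^11 cfu/ml target
```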

Inoculum preparation

  • Prepare the appropriate medium specific to the bacterial inoculant in 250 ml, 500 ml, 3 litre and 5 litre conical flasks and sterilize.
  • Inoculate the medium in the 250 ml flask with an efficient bacterial strain under aseptic conditions.
  • Keep the flask at room temperature on a rotary shaker (200 rpm) for 5-7 days.
  • Observe the flask for growth of the culture and estimate the population; this serves as the starter culture.
  • Using the starter culture (at log phase), inoculate the larger flasks (500 ml, 3 litre and 5 litre) containing the medium, after obtaining growth in each flask.
  • The same medium is prepared in large quantities in the fermentor, sterilized well, cooled and kept ready.
  • The medium in the fermentor is inoculated with the log phase culture grown in the 5 litre flask. Usually 1-2% inoculum is sufficient; however, up to 5% may be used depending on the growth of the culture in the larger flasks.
  • The cells are grown in the fermentor with aeration (sterile air passed through a compressor and sterilizing agents like glass wool, cotton wool and acid) and continuous stirring.
  • The broth is checked for the population of the inoculated organism, and for any contamination, during the growth period.
  • The cells are harvested at a population load of 10^9 cells/ml after the incubation period.
  • There should not be any fungal or other bacterial contamination at the 10^-6 dilution level.
  • It is not advisable to store the broth for longer than 24 hours after fermentation; even at 4°C the number of viable cells begins to decrease.
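The fermentor inoculation step above (1-2% inoculum, up to 5%) amounts to a simple volume calculation, sketched below. The function name and the 500-litre example are illustrative assumptions.

```python
def inoculum_volume_l(fermentor_volume_l, percent=2.0):
    """Volume of log-phase starter culture needed to charge a fermentor.
    The procedure above suggests 1-2% inoculum, and up to 5% when the
    culture in the larger flasks has grown poorly."""
    if not 1.0 <= percent <= 5.0:
        raise ValueError("inoculum rate should be between 1% and 5%")
    return fermentor_volume_l * percent / 100.0

print(inoculum_volume_l(500, 2.0))  # 10.0 litres for a 500-litre batch
```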

Processing of carrier material

The use of an ideal carrier material is necessary for the production of good quality biofertilizer. Peat soil, lignite, vermiculite, charcoal, press mud, farmyard manure and soil mixtures can be used as carrier materials. Neutralized peat soil/lignite has been found to be the better carrier material for biofertilizer production. The following points are to be considered in the selection of an ideal carrier material:

  • Cheaper in cost
  • Should be locally available
  • High organic matter content
  • No toxic chemicals
  • Water holding capacity of more than 50%
  • Easy to process and friable.

Preparation of carrier material

  • The carrier material (peat or lignite) is ground to a fine powder so as to pass through a 212 micron IS sieve.
  • The pH of the carrier material is neutralized with calcium carbonate (1:10 ratio), since peat soil/lignite is acidic in nature (pH of 4-5).
  • The neutralized carrier material is sterilized in an autoclave to eliminate contaminants.
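The 1:10 neutralization ratio above (calcium carbonate : carrier) translates into a one-line calculation; the function name is an illustrative assumption.

```python
def caco3_needed_kg(carrier_kg, ratio=1 / 10):
    """Calcium carbonate required to neutralize acidic peat/lignite
    carrier at the 1:10 (CaCO3 : carrier) ratio given above."""
    return carrier_kg * ratio

print(caco3_needed_kg(100))  # 10.0 kg of CaCO3 per 100 kg of carrier
```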

Mixing the carrier and the broth culture and packing

Inoculant packets are prepared by mixing the broth culture obtained from fermentor with sterile carrier material as described below:

Preparation of Inoculants packet

  • The neutralized, sterilized carrier material is spread in a clean, dry, sterile metallic or plastic tray.
  • The bacterial culture drawn from the fermentor is added to the sterilized carrier and mixed well, either manually (wearing sterile gloves) or with a mechanical mixer. The culture suspension is added to a level of 40-50% of the water holding capacity, depending on the population.
  • The inoculant is packed in 200 g quantities in polythene bags, sealed with an electric sealer and allowed to cure for 2-3 days at room temperature (curing can be done by spreading the inoculant on a clean floor or polythene sheet, or by keeping it in open shallow tubs/trays with a polythene covering, for 2-3 days at room temperature before packaging).
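The "40-50% of water holding capacity" rule for adding broth can be sketched as below. The function name, and the example WHC of 500 ml/kg for lignite, are assumptions for illustration; the actual WHC must be measured for each carrier batch.

```python
def broth_volume_ml(carrier_kg, whc_ml_per_kg, target_fraction=0.45):
    """Culture broth to add to a batch of sterile carrier so that
    moisture reaches 40-50% of the carrier's water holding capacity.
    whc_ml_per_kg is the measured WHC of the carrier (assumed input)."""
    if not 0.40 <= target_fraction <= 0.50:
        raise ValueError("target moisture should stay within 40-50% of WHC")
    return carrier_kg * whc_ml_per_kg * target_fraction

# e.g. 10 kg of lignite with an assumed WHC of 500 ml/kg, filled to 45%
print(broth_volume_ml(10, 500))  # 2250.0 ml of broth
```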

Schematic representation of mass production of bacterial biofertilizers

Specification of the polythene bags

  • The polythene bags should be of low density grade.
  • The thickness of the bag should be around 50 – 75 micron.
  • Each packet should be marked with the name of the manufacturer, the name of the product, the strain number, the crop for which it is recommended, the method of inoculation, the date of manufacture, the batch number, the date of expiry, the price, the full address of the manufacturer and storage instructions.

Storage of biofertilizer packets

  • The packet should be stored in a cool place away from the heat or direct sunlight.
  • The packets may be stored at room temperature or in cold storage conditions in lots in plastic crates or polythene / gunny bags.
  • The population of the inoculant in the carrier inoculant packet may be determined at 15 day intervals. There should be more than 10^9 cells/g of inoculant at the time of preparation and 10^7 cells/g on a dry weight basis before the expiry date.
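The two quality thresholds above can be expressed as a simple check. This is an illustrative sketch; the function name is an assumption.

```python
def packet_quality_ok(cells_per_g, at_preparation):
    """Check the viable count in a carrier inoculant packet against the
    norms above: more than 10^9 cells/g at preparation, and at least
    10^7 cells/g (dry weight basis) before the expiry date."""
    threshold = 1e9 if at_preparation else 1e7
    return cells_per_g >= threshold

print(packet_quality_ok(2e9, at_preparation=True))   # True
print(packet_quality_ok(5e6, at_preparation=False))  # False
```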

Mass production of Mycorrhizal biofertilizer

The commercial utilization of mycorrhizal fungi has been difficult because of their obligate symbiotic nature and the difficulty of culturing them on laboratory media. Production of AM inoculum has evolved from the original use of infested field soils to the current practice of using pot culture inoculum derived from the surface-disinfected spores of a single AM fungus on a host plant grown in a sterilized culture medium. Research in different parts of the world has produced different methods of producing AM fungal inoculum, both as soil based cultures and as carrier based inoculum. Root organ culture and the nutrient film technique provide scope for the production of soil-less cultures.

As a carrier based inoculum, pot culture is the widely adopted production method. AM inoculum was prepared using sterilized soil, and a wide array of crops were used as hosts. Since the sterilization process is cumbersome, scientists started using inert materials for the production of AM fungi; researchers tried perlite, montmorillonite clay, etc. At TNAU, vermiculite was tried as a substrate to replace soil sterilization, which resulted in the best method of inoculum production.

Method of production

Photographs: 1. Tank for mass multiplication of AM; 2. Sprinkling of water in the tank with vermiculite; 3. Making furrows to sow maize seeds; 4. Sowing the seeds in furrows; 5. View of the maize-sown AM pit; 6. Vermiculite-raised, AM-infected maize plants.
  • A trench (1 m x 1 m x 0.3 m) is formed and lined with black polythene sheet to be used as a plant growth tub.
  • Mix 50 kg of vermiculite and 5 kg of sterilized soil and pack in the trench up to a height of 20 cm.
  • Spread 1 kg of AM inoculum (mother culture) 2-5 cm below the surface of the vermiculite.
  • Maize seeds, surface sterilized with 5% sodium hypochlorite for 2 minutes, are sown.
  • Apply 2 g urea, 2 g super phosphate and 1 g muriate of potash to each trench at the time of sowing. A further 10 g of urea is applied twice, at 30 and 45 days after sowing, for each trench.
  • Quality tests on AM colonization in root samples are carried out on the 30th and 45th days.
  • Stock plants are grown for 60 days (8 weeks). The inoculum is obtained by cutting all the roots of the stock plants; it consists of a mixture of vermiculite, spores, hyphal fragments and infected root pieces.
  • Thus, within 60 days, 55 kg of AM inoculum can be produced from 1 square metre. This inoculum is sufficient to treat 550 m2 of nursery area holding 11,000 seedlings.
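The yield arithmetic above (55 kg of inoculum per square metre of trench, serving 550 m2 of nursery and 11,000 seedlings) scales linearly, as the sketch below shows. The function name is an illustrative assumption.

```python
def trench_output(trench_area_m2=1):
    """Scale the AM inoculum figures above: each square metre of trench
    yields ~55 kg of inoculum in 60 days, enough for 550 m2 of nursery
    area holding 11,000 seedlings."""
    inoculum_kg = 55 * trench_area_m2
    nursery_m2 = 550 * trench_area_m2
    seedlings = 11000 * trench_area_m2
    g_per_seedling = inoculum_kg * 1000 / seedlings
    return inoculum_kg, nursery_m2, g_per_seedling

print(trench_output())  # (55, 550, 5.0) -> about 5 g of inoculum per seedling
```

Note that the implied 5 g per seedling and 100 g per m2 of nursery agree with the nursery application rates given below for AM fungi.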

AM fungi

Nursery application: 100 g of bulk inoculum is sufficient for one square metre. The inoculum should be applied 2-3 cm below the soil at the time of sowing. The seeds/cuttings should be sown/planted above the VAM inoculum to cause infection.

For polythene bag raised crops: 5 to 10 g of bulk inoculum is sufficient for each bag. Mix 10 kg of inoculum with 1000 kg of sand potting mixture and pack the potting mixture into polythene bags before sowing.

For out-planting: Twenty grams of VAM inoculum is required per seedling. Apply the inoculum at the time of planting.
For existing trees: Two hundred grams of VAM inoculum is required for inoculating one tree. Apply the inoculum near the root surface at the time of fertilizer application.

Mass production and field application of cyanobacteria

Blue green algal inoculation with composite cultures was found to be more effective than single culture inoculation. A technology for mass scale production of composite culture of blue green algae under rice field condition was developed at TNAU and the soil based BGA inoculum could survive for more than 2 years.

At many sites where algal inoculation was applied for three to four consecutive cropping seasons, the inoculated algae established well and the effect persisted over subsequent rice crops, demonstrating the beneficial role of blue green algal inoculation in the rice soils of Tamil Nadu.

The blue green algal inoculum may be produced by several methods, viz., in tubs, galvanized trays, small pits and also under field conditions. However, large-scale production is advisable under field conditions, as this is easily adopted by farmers.

I. Multiplication in trays

  • Big metallic trays (6' x 3' x 6", l x b x h) can be used for small scale production.
  • Take 10 kg of paddy field soil, dry, powder well and spread.
  • Fill with water to a height of 3".
  • Add 250 g of dried algal flakes (soil based) as inoculum.
  • Add 150 g of super phosphate and 30 g of lime and mix well with the soil.
  • Sprinkle 25 g of carbofuran to control insects.
  • Maintain the water level in the trays.
  • After 10 to 15 days, blooms of BGA will start floating on the water surface.
  • At this stage stop watering and drain; let the soil dry completely.
  • Collect the dry soil based inoculum as flakes.
  • Store in a dry place. By this method 5 to 7 kg of soil based inoculum can be obtained.

II. Multiplication under field condition

Materials

  • Rice field
  • Super phosphate
  • Carbofuran
  • Composite BGA starter culture

Procedure

Select an area of 40 m2 (20 m x 2 m) near a water source which is directly exposed to sunlight.
Make a bund all around the plot to a height of 15 cm and coat it with mud to prevent loss of water through percolation.

  • The plot is well prepared and levelled uniformly; water is let in to a depth of 5-7.5 cm and left to settle for 12 hrs.
  • Apply 2 kg of super phosphate and 200 g of lime uniformly over each plot.
  • The soil based composite starter culture of BGA, containing 8-10 species, is powdered well and broadcast @ 5 kg/plot.
  • Carbofuran @ 200 g is also applied to control soil insects occurring in the BGA.
  • Water is let in at periodic intervals so that the water level is always maintained at 5 cm.
  • After 15 days of inoculation, the plots are allowed to dry up in the sun, and the algal flakes are collected and stored.
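The input quantities above are given per 40 m2 plot; the sketch below scales them linearly to other plot areas. The function name and the dictionary keys are illustrative assumptions.

```python
def bga_plot_inputs(area_m2):
    """Scale the BGA multiplication inputs above, specified per 40 m2
    plot: 2 kg super phosphate, 200 g lime, 5 kg soil-based starter
    culture and 200 g carbofuran."""
    f = area_m2 / 40.0
    return {
        "super_phosphate_kg": 2 * f,
        "lime_g": 200 * f,
        "starter_culture_kg": 5 * f,
        "carbofuran_g": 200 * f,
    }

print(bga_plot_inputs(80))  # doubled quantities for an 80 m2 plot
```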

Observations

The floating algal flakes are green or blue green in colour. From each harvest, 30 to 40 kg of dry algal flakes are obtained from the plot.

Method of inoculation of BGA in rice field

Blue green algae may be applied as soil based inoculum to the rice field following the method described below.

  • Powder the soil based algal flakes very well.
  • Mix it with 10 kg soil or sand (10kg powdered algal flakes with 10 kg soil / sand).
  • BGA is to be inoculated 7-10 days after rice transplanting.
  • A water level of 3-4" is to be maintained at the time of BGA inoculation, and for a month afterwards, so as to obtain maximum BGA development.

Observation

A week after BGA inoculation, algal growth can be seen and algal mat will float on the water after 2-3 weeks. The algal mat colour will be green or brown or yellowish green.

Mass production and field application of Azolla

Azolla is a free-floating water fern that fixes atmospheric nitrogen in association with the nitrogen fixing blue green alga Anabaena azollae. Azolla fronds consist of a sporophyte with a floating rhizome and small, overlapping, bi-lobed leaves and roots. Rice growing areas in South East Asia and other developing countries have recently shown increased interest in the use of the symbiotic N2-fixing water fern Azolla, either as an alternative nitrogen source or as a supplement to commercial nitrogen fertilizers. Azolla is used as a biofertilizer for wetland rice and is known to contribute 40-60 kg N/ha per rice crop. The agronomic potential of Azolla is quite significant, particularly for rice, and it is widely used as a biofertilizer for increasing rice yields. Rice crop response studies with Azolla biofertilizer in the People's Republic of China and in Vietnam have provided good evidence that Azolla incorporation into the soil as a green manure is one of the most effective ways of providing a nitrogen source for rice.

The utilization of Azolla as a dual crop with wetland rice is gaining importance in the Philippines, Thailand, Sri Lanka and India. The important factor in using Azolla as a biofertilizer for rice is its quick decomposition in soil and the efficient availability of its nitrogen to rice. In tropical rice soils the applied Azolla mineralizes rapidly, and its nitrogen becomes available to the rice crop within a very short period. The common species of Azolla are A. microphylla, A. filiculoides, A. pinnata, A. caroliniana, A. nilotica, A. rubra and A. mexicana.

I.  Mass multiplication of Azolla under field conditions

A simple Azolla nursery method for large scale multiplication of Azolla in the field has been evolved for easy adoption by the farmers.

Materials

  • One cent (40 sq.m) area plot
  • Cattle dung
  • Super phosphate
  • Furadan
  • Fresh Azolla inoculum

Procedure

  • Select a wetland field, prepare it thoroughly and level it uniformly.
  • Mark the field into one cent plots (20 x 2m) by providing suitable bunds and irrigation channels.
  • Maintain water level to a height of 10 cm.
  • Mix 10 kg of cattle dung in 20 litres of water and sprinkle in the field.
  • Apply 100 g super phosphate as basal dose.
  • Inoculate fresh Azolla biomass @ 8 kg to each plot.
  • Apply super phosphate @ 100 g as top dressing fertilizer on 4th and 8th day after Azolla inoculation.
  • Apply carbofuran (furadan) granules @ 100 g/plot on 7th day after Azolla inoculation.
  • Maintain the water level at 10 cm height throughout the growth period of two or three weeks.
Observations

  • Note the Azolla mat floating on the plot. Harvest the Azolla, drain the water and record the biomass.
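
The per-plot input rates listed in the procedure scale linearly with the number of one-cent plots. A minimal sketch of that scaling (the dictionary layout and function name are my own, illustrative, not from the source):

```python
# Inputs per one-cent (40 sq.m) Azolla nursery plot, as listed above.
# Names here are illustrative, not a standard API.
PER_PLOT = {
    "cattle_dung_kg": 10,               # mixed in 20 L water and sprinkled
    "super_phosphate_basal_g": 100,     # basal dose
    "super_phosphate_topdress_g": 200,  # 100 g on the 4th day + 100 g on the 8th day
    "carbofuran_g": 100,                # furadan granules on the 7th day
    "azolla_inoculum_kg": 8,            # fresh Azolla biomass
}

def nursery_inputs(n_plots):
    """Total inputs needed for n one-cent Azolla nursery plots."""
    return {item: qty * n_plots for item, qty in PER_PLOT.items()}

print(nursery_inputs(5))
```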

II.  Method of inoculation of Azolla to rice crop

The Azolla biofertilizer may be applied in two ways to wetland paddy. In the first method, fresh Azolla biomass is inoculated into the paddy field before transplanting and incorporated as a green manure. This method requires a huge quantity of fresh Azolla. In the other method, Azolla may be inoculated after transplanting, grown as a dual culture with rice, and incorporated subsequently.

A.  Azolla  biomass incorporation as green manure for rice crop

  • Collect the fresh Azolla biomass from the Azolla nursery plot.
  • Prepare the wetland well and maintain water just enough  for easy incorporation.
  • Apply fresh Azolla biomass (15 t ha-1) to the main field and incorporate the Azolla by using implements or tractor.

B.  Azolla inoculation as dual crop for rice

  • Select a transplanted rice field.
  • Collect fresh Azolla  inoculum from Azolla nursery.
  • Broadcast the fresh Azolla in the transplanted rice field on 7th day after planting (500 kg / ha).
  • Maintain water level at 5-7.5cm.
  • Note the growth of the Azolla mat four weeks after transplanting and incorporate the Azolla biomass using implements or a tractor, or during inter-cultivation practices.
  • A second bloom of Azolla will develop 8 weeks after transplanting which may be incorporated again.
  • By the two incorporations, 20-25 tonnes of Azolla can be incorporated in one hectare of rice field.

4. Application of Biofertilizers

1. Seed treatment or seed inoculation
2. Seedling root dip
3. Main field application

Seed treatment

One packet of the inoculant is mixed with 200 ml of rice kanji to make a slurry. The seeds required for one acre are mixed into the slurry so as to give a uniform coating of the inoculant over the seeds, and are then shade-dried for 30 minutes. The shade-dried seeds should be sown within 24 hours. One packet of the inoculant (200 g) is sufficient to treat 10 kg of seeds.
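
The packet arithmetic above (one 200 g packet and 200 ml of kanji per 10 kg of seed) can be sketched as a small helper. The function name and rounding-up choice are my own assumptions, since packets are handled whole:

```python
import math

PACKET_G = 200            # one packet of inoculant
SEED_KG_PER_PACKET = 10   # one packet treats 10 kg of seeds
KANJI_ML_PER_PACKET = 200 # rice kanji needed to slurry one packet

def seed_treatment(seed_kg):
    """Packets of inoculant and rice kanji for a given seed lot.

    Illustrative helper; rounds packets up to whole packets.
    """
    packets = math.ceil(seed_kg / SEED_KG_PER_PACKET)
    return {"packets": packets, "kanji_ml": packets * KANJI_ML_PER_PACKET}

print(seed_treatment(25))
```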

Seedling root dip

This method is used for transplanted crops. Two packets of the inoculant are mixed in 40 litres of water. The root portions of the seedlings required for an acre are dipped in the mixture for 5 to 10 minutes and then transplanted.

Main field application

Four packets of the inoculant are mixed with 20 kg of dried and powdered farmyard manure and then broadcast over one acre of the main field just before transplanting.

Rhizobium

For all legumes, Rhizobium is applied as a seed inoculant.

Azospirillum/Azotobacter

In the transplanted crops, Azospirillum is inoculated through seed, seedling root dip and soil application methods. For direct sown crops, Azospirillum is applied through seed treatment and soil application.

Phosphobacteria

Inoculated through seed, seedling root dip and soil application methods, as in the case of Azospirillum.

Combined application of bacterial biofertilizers

Phosphobacteria can be mixed with Azospirillum and Rhizobium. The inoculants should be mixed in equal quantities and applied as mentioned above.

Points to remember

  • Bacterial inoculants should not be mixed with insecticide, fungicide, herbicide and fertilizers.
  • Seed treatment with the bacterial inoculant should be done last, after the seeds have been treated with fungicides.

Biofertilizers recommendation (one packet – 200 g)

Crop             Seed  Nursery  Seedling dip  Main field      Total packets per ha
Rice             5     10       5             10              30
Sorghum          3     -        -             10              13
Pearl millet     3     -        -             10              13
Ragi             3     5        -             10              18
Maize            3     -        -             10              13
Cotton           3     -        -             10              13
Sunflower        3     -        -             10              13
Castor           3     -        -             10              13
Sugarcane        10    -        -             36 (3 split)    46
Turmeric         -     -        -             24 (2 split)    24
Tobacco          1     3        -             10 g/pit        14
Papaya           -     2        -             10              -
Mandarin orange  -     2        -             10 g/pit        -
Tomato           1     -        -             10              14
Banana           -     -        5             10 g/pit        -

Rhizobium (only seed application is recommended)

Crop Total requirement of packets per ha
Soybean 5
Groundnut 5
Bengalgram 5
Blackgram 3
Greengram 3
Redgram 3
Cowpea 3

Phosphobacteria

The recommended dosage of Azospirillum is adopted for phosphobacteria inoculation; for combined inoculation, both biofertilizers are to be mixed uniformly, each at its recommended dose, before use.


5. Azolla – The best feed for cattle and poultry

Azolla is a free-floating water fern that fixes atmospheric nitrogen in association with the nitrogen-fixing blue green alga Anabaena azollae. Azolla is considered a potential biofertilizer in terms of its nitrogen contribution to rice. Long before its cultivation as a green manure, Azolla was used as a fodder for domesticated animals such as pigs and ducks. In recent times, Azolla has been widely used as a sustainable feed substitute for livestock, especially dairy cattle, poultry, pigs and fish.

Azolla contains 25-35 per cent protein on a dry weight basis and is rich in essential amino acids, minerals, vitamins and carotenoids, including the antioxidant β-carotene. Chlorophyll a, chlorophyll b and carotenoids are present in Azolla, while the cyanobiont Anabaena azollae contains chlorophyll a, phycobiliproteins and carotenoids. This rare combination of high nutritive value and rapid biomass production makes Azolla a potential and effective feed substitute for livestock.

Inputs required

Azolla fronds, Polythene sheet, Super phosphate and Cow dung.

Methodology

The area selected for the Azolla nursery should be partially shaded. A convenient plot size for Azolla is 10 feet in length, 2 feet in breadth and 1 foot in depth. The nursery plot is lined with a polythene sheet at the bottom to prevent water loss. Soil is applied to a depth of 2 cm, and super phosphate is applied along with 2 kg of vermicompost or cow dung in the nursery for quick growth. Azolla mother inoculum is introduced @ 5 kg/plot.

The contents of the plot are stirred daily so that the nutrients in the soil dissolve in the water for easy uptake by Azolla. Azolla is harvested fifteen days after inoculation at the rate of 50-80 kg/plot. One third of the Azolla should be left in the plot for further multiplication. Five kg of cow dung slurry should be sprinkled in the Azolla nursery at ten-day intervals. Neem oil can be sprayed over the Azolla at the 0.5 % level to avoid pest incidence.

Animal                       Dosage / day
Adult cow, buffalo, bullock  1.5 - 2.0 kg
Layer, broiler birds         20 - 30 g
Goat                         300 - 500 g
Pig                          1.5 - 2.0 kg
Rabbit                       100 g
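
The per-animal dosages in the table above can be totalled for a mixed herd. A minimal sketch, using the dosage ranges from the source (the dictionary structure and function name are my own):

```python
# Daily fresh-Azolla dosage ranges (low, high) in kg per animal, from the table.
DOSE_KG = {
    "cow": (1.5, 2.0),          # also buffalo, bullock
    "layer_bird": (0.020, 0.030),
    "goat": (0.300, 0.500),
    "pig": (1.5, 2.0),
    "rabbit": (0.100, 0.100),
}

def daily_requirement(herd):
    """Total fresh Azolla needed per day as a (low, high) range in kg.

    herd: mapping of animal type -> head count.
    """
    low = sum(DOSE_KG[animal][0] * n for animal, n in herd.items())
    high = sum(DOSE_KG[animal][1] * n for animal, n in herd.items())
    return low, high

print(daily_requirement({"cow": 4, "layer_bird": 100}))
```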

Value of the technology

The egg yield of layer birds increases with Azolla feeding. Azolla-fed birds register an overall egg productivity of 89.0 per cent, as against 83.7 per cent recorded by birds fed only concentrated feed. The average daily intake of concentrated feed is considerably lower (106.0 g) for birds on Azolla substitution, as against 122.0 g in the control birds. More importantly, Azolla feeding gives a considerable saving in the consumption of concentrated feed (13.0 %), leading to reduced operational cost. Taking the average cost of concentrated feed as Rs. 17/kg, a 13.0 % saving in consumption ultimately leads to a feed cost saving of 10.0 paise/day/bird; hence a layer unit maintaining 10,000 birds could cut down its feed expense to the tune of Rs. 1000/day.
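
The 13 % feed saving quoted above follows directly from the two intake figures; a quick arithmetic check (variable names are my own):

```python
# Daily concentrated-feed intake per bird, from the figures above.
control_intake_g = 122.0  # control birds
azolla_intake_g = 106.0   # birds with Azolla substitution

saved_g = control_intake_g - azolla_intake_g
saving_pct = 100.0 * saved_g / control_intake_g
print(f"feed saved: {saved_g:.1f} g/bird/day ({saving_pct:.1f} %)")
```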

Benefits
Azolla feeding increases the egg weight and the albumin, globulin and carotene contents of layer birds' eggs. The total protein content of the eggs laid by Azolla-fed birds is high, and the total carotene content of Azolla eggs (440 µg/100 g of edible portion) is also higher than that of the control. The rapid biomass production due to its high relative growth rate, its increased protein and carotene contents, and the good digestibility of the Azolla hybrid Rong ping favour its use as an effective feed supplement for poultry birds.

Effect of Azolla hybrid Rong Ping on the nutritional value of egg

Parameter                                  Azolla egg  Control  % increase over control
Egg weight (g)                             61.20       57.40    6.62
Albumin (g/100 g of edible portion)        3.9         3.4      14.70
Globulin (g/100 g of edible portion)       10.1        9.5      6.31
Total protein (g/100 g of edible portion)  14.0        12.9     8.52
Carotenes (µg/100 g of edible portion)     440         405      8.64
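
The last column of the table above is just the relative difference between the two measurements; recomputing it is a useful sanity check (minor rounding differences aside; the row labels here are shortened):

```python
# (Azolla egg, Control) pairs from the table above.
rows = {
    "Egg weight (g)": (61.20, 57.40),
    "Albumin": (3.9, 3.4),
    "Globulin": (10.1, 9.5),
    "Total protein": (14.0, 12.9),
    "Carotenes": (440.0, 405.0),
}

def pct_increase(azolla, control):
    """Percentage increase of the Azolla-egg value over the control value."""
    return 100.0 * (azolla - control) / control

for name, (a, c) in rows.items():
    print(f"{name}: {pct_increase(a, c):.2f} %")
```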

Application

In Indian conditions, agriculture is closely coupled with poultry farming. Azolla is an important low-cost input which plays a vital role in improving soil quality in sustainable rice farming. Its twin potentials as a biofertilizer and an animal feed make the water fern Azolla an effective input to both vital components of integrated farming: agriculture and animal husbandry.

Limitation

Azolla is a water fern whose multiplication is affected at elevated growth temperatures of 35-38 ºC. Hence adopting this technology in dry zones where the temperature exceeds 40 ºC is difficult.

Achievements

The Azolla hybrid Rong ping was selected for supply to the tribal population. An Azolla mother inoculum nursery was laid out in villages with the help of the Krishi Vigyan Kendra, TNAU, Coimbatore and the Krishi Vigyan Kendra, Karamadai. Women entrepreneurs were selected and given a one-day training on the cultivation of Azolla. Wet biomass (starter inoculum) was supplied free of cost @ 10 kg per woman entrepreneur during the training, to enable them to initiate commercial Azolla cultivation in their backyards.

Azolla multiplication plots were laid out in Narasipuram. Azolla mass production training was conducted for the SHGs of Narasipuram village with the help of Kalaimagal Arts and Science College, Narasipuram, Sappanimadai (a tribal village) and Avinashilingam KVK, Karamadai. With the help of Avinashilingam KVK, Karamadai, Azolla trainings were conducted for women volunteers, and an Azolla village was established in Karamadai. The Aavin milk producers' union, Coimbatore and the poultry owners' association, Namakkal were contacted, and the importance of Azolla as a feed supplement was explained to them.

The Milk Producers Union is also involved in the training and marketing of Azolla. It purchases Azolla fronds from village-level Azolla growers in both wet and dry forms. Around 400 rural women and 370 tribal people have been trained in the cultivation of Azolla through this project. The Azolla laboratory and the Azolla germplasm centre at AC&RI, TNAU, Coimbatore helped in the maintenance of germplasm by providing the mother inoculum. The Animal Husbandry Unit at AC&RI, TNAU, Coimbatore helped in standardizing the Azolla-to-concentrated-feed mixing ratio.

[Photos: Azolla mass multiplication in pits; feeding Azolla to rabbits, poultry and livestock; inoculating super phosphate and cow dung in an Azolla pit]

EE-Unit-V Biofuels

A biofuel is defined as any fuel whose energy is obtained through a process of biological carbon fixation. That definition serves to make our understanding of biofuels as clear as mud, so let’s unpack it a bit.

Biological Carbon Fixation

Carbon fixation is a process that takes inorganic carbon (in the form of things like CO2) and converts it into organic compounds. In other words, any process that converts carbon dioxide into a molecule that would be found in a living organism is carbon fixation. If this process occurs in a living organism, it is referred to as ‘biological carbon fixation’.

Fuel

The next part of the definition of a biofuel involves fuel. A fuel is nothing more than something from which we humans can get energy. Carbon fixation can lead to a number of different compounds, like proteins, fats, and alcohols (just to name a few). If any of those molecules can be used to provide energy in a mechanical setting, we call it a fuel.

The Real Definition of a Biofuel and the Practical Definition

A biofuel is a hydrocarbon that is made BY or FROM a living organism and that we humans can use to power something. This definition is rather formal. In practical terms, any hydrocarbon fuel that is produced from organic matter (living or once-living material) in a short period of time (days, weeks, or even months) is considered a biofuel. This contrasts with fossil fuels, which take millions of years to form, and with energy sources that are not based on hydrocarbons (nuclear fission, for instance).

What makes biofuels tricky to understand is that they need not be made by a living organism, though they can be. Biofuels can also be made through chemical reactions, carried out in a laboratory or industrial setting, that use organic matter (called biomass) to make fuel. The only real requirements for a biofuel are that the starting material must be CO2 that was fixed (turned into another molecule) by a living organism and the final fuel product must be produced quickly and not over millions of years.

Biomass

Biomass is simply organic matter. In other words, it is dead material that was once living. Kernels of corn, mats of algae, and stalks of sugar cane are all biomass. Before global warming related to burning fossil fuels became a major factor in determining where energy came from, the major concern was that fossil fuels, which are limited in supply, would run out over the next century. It was thought that if we could produce hydrocarbons another way, and quickly, then we could meet our energy demands without much problem. This leads to one of the major separating factors between a biofuel and a fossil fuel – renewability.

A fossil fuel is not considered renewable because it takes millions of years to form and humans really can’t wait that long. Biofuel, on the other hand, comes from biomass, which can be produced year after year through sustainable farming practices. This means biomass and biofuel are renewable (we can replace used biofuel over a very short period of time).

It is important to note that ‘renewable’ energy is not the same thing as ‘green’ energy. Renewable energy simply won’t run out any time soon, like biofuels, hydroelectric, wind, and solar. A “green” energy is one that is also good for the planet because it does not harm ecosystems, contribute to acid rain, or worsen global warming. Solar energy is a ‘green’ energy. All ‘green’ energy is considered renewable, but not all renewable energy is green. Biofuels are examples of renewable energy sources that aren’t always green because they produce greenhouse gases.

Types of Biofuels

The chemical structure of biofuels can differ in the same way that the chemical structure of fossil fuels can differ. For the most part, our interest is in liquid biofuels as they are easy to transport. The table below compares various biofuels with their fossil fuel counterparts.

Biofuel Fossil Fuel Differences
Ethanol Gasoline/Ethane Ethanol has about half the energy per mass of gasoline, which means it takes twice as much ethanol to get the same energy. Ethanol burns cleaner than gasoline, however, producing less carbon monoxide. However, ethanol produces more ozone than gasoline and contributes substantially to smog. Engines must be modified to run on ethanol.
Biodiesel Diesel Has only slightly less energy than regular diesel. It is more corrosive to engine parts than standard diesel, which means engines have to be designed to take biodiesel. It burns cleaner than diesel, producing less particulate and fewer sulfur compounds.
Methanol Methane Methanol has about one third to one half as much energy as methane. Methanol is a liquid and easy to transport whereas methane is a gas that must be compressed for transportation.
Biobutanol Gasoline/Butane Biobutanol has slightly less energy than gasoline, but can run in any car that uses gasoline without the need for modification to engine components.

The chart above is only a limited list of the biofuels available, covering the most popular and widely used. It is worth noting that ethanol is found in almost all gasoline mixtures. In Brazil, gasoline blends contain at least 25% ethanol; in other countries, ethanol usually makes up between 10 and 15% of gasoline.

Biofuel versus Fossil Fuel

Biofuels are not new. In fact, Henry Ford had originally designed his Model T to run on ethanol. There are several factors that decide the balance between biofuel and fossil fuel use around the world. Those factors are cost, availability, and food supply.

All three factors listed above are actually interrelated. To begin, the availability of fossil fuels has been of concern almost from day one of their discovery. Pumping fuel from the ground is a difficult and expensive process, which adds greatly to the cost of these fuels. Additionally, fossil fuels are not renewable, which means they will run out at some point. As our ability to pump fossil fuels from the ground diminishes, the available supply will decrease, which will inevitably lead to an increase in price.

It was originally thought that biofuels could be produced in almost limitless quantity because they are renewable. Unfortunately, our energy needs far outpace our ability to grow biomass to make biofuels, for one simple reason: land area. There is only so much land fit for farming in the world, and growing biofuel crops necessarily detracts from growing food. As the population grows, our demands for both energy and food grow. At this point, we do not have enough land to grow both enough biofuel and enough food to meet both needs. This limit affects both the cost of biofuel and the cost of food. For wealthier countries, the cost of food is less of an issue. However, for poorer nations, the use of land for biofuels, which drives up the cost of food, can have a tremendous impact.

The balance between food and biofuel is what keeps the relatively simple process of growing and making biofuels from being substantially cheaper than fossil fuel. When this factor is combined with an increased ability (thanks to advances in technology) to extract oil from the ground, the price of fossil fuel is actually lower than that of biofuel for the most part.

The Carbon Equation: Would Biofuels Contribute to Global Warming?

Assuming we can overcome the problem of biofuels interrupting the food supply (such as growing algae in the ocean), can we overcome the problem of biofuels contributing to global warming? The answer, surprisingly, may be yes.

It is true that biofuels produce carbon dioxide, which is a potent greenhouse gas and the one most often blamed for global warming. However, it is also true that growing plants consumes carbon dioxide. Thus, the equation becomes a simple balancing act. If the plants we grow utilize the same amount of carbon dioxide that we produce, then we will have a net increase of zero and no global warming. How realistic is this view?

It may seem like a simple matter to only produce as much carbon dioxide as plants use. After all, couldn’t we only burn biofuels and thus keep the equation balanced? Well, the math actually doesn’t quite add up. Research has shown that energy must be invested into producing crops and converting them into biofuels before any energy is obtained. A 2005 study from Cornell University found that producing ethanol from corn used almost 30% more energy than it produced. In other words, you can’t produce a perpetual motion machine using biofuels because you lose the energy you invest in creating them in the first place. In fact, you can’t even break even.
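
The net-energy arithmetic behind the Cornell figure is simple: if producing corn ethanol consumes about 30 % more energy than the fuel delivers, the energy return on investment (EROI) falls below 1. A quick sketch (variable names and the EROI framing are my own; the 30 % figure is from the study as quoted above):

```python
# Energy accounting for corn ethanol, per the quoted ~30 % figure.
fuel_energy_out = 1.0                      # energy delivered by the ethanol (arbitrary units)
energy_invested = 1.30 * fuel_energy_out   # ~30 % more energy consumed than produced

eroi = fuel_energy_out / energy_invested   # energy return on investment
net_energy = fuel_energy_out - energy_invested

print(f"EROI = {eroi:.2f}, net energy = {net_energy:.2f}")
```

An EROI below 1 is exactly the "can't even break even" point: every unit of ethanol energy delivered costs more than a unit of energy to make.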

The other problem that we run into with biofuels is that carbon dioxide is not the only greenhouse gas we have to worry about. Other chemicals, like nitrous oxide, are also greenhouse gases, and growing plants using fertilizer produces a lot of nitrous oxide. Basically, fertilizer contains nitrogen, which plants need to grow. However, most plants cannot use molecular nitrogen (N2) directly and need it converted into fixed forms. For this process, plants rely on bacteria. As it turns out, these bacteria not only produce nitrogen in forms plants can use, they also produce nitrogen compounds like nitrous oxide, and probably more than was previously thought. The net result is that we may be balancing the CO2 side of the equation by using biofuels, but we are unbalancing the N2O side and still causing global warming.

The Future of Biofuel

A decade ago, subsidies for biofuel growth and development in many countries (especially the U.S.) were high. However, a better understanding of global warming, increased awareness of the fragility of the food supply, and a general trend toward “greener” alternatives have all led to a decline in the popularity of biofuels. In 2011, the U.S. Senate voted 73 to 27 to end tax credits and trade protections for corn-based ethanol production. Coming from one of the world's largest producers of ethanol, this was a substantial move, reflecting the changing pressures on our energy needs and a shifted focus toward environmentally friendly energy sources.

Biodiversity and Biofuels

There is one last problem presented by biofuels that needs to be addressed: biodiversity. Biodiversity refers to the variety of different living things in an environment. For instance, if you grow only sweet corn in a field, you have low biodiversity. If, however, you grow sweet corn, dent corn, flint corn, flour corn, and popcorn, then you have high biodiversity. Why should we care?

Growing a single type of corn is easier for producing biofuels because we can select the type that yields the best raw product, is easiest to grow, and requires the least water and other resources. This sounds great, but the downside is that pests that eat this type of corn will begin to proliferate. What is worse, if we spray with pesticide to kill these pests, some will inevitably be resistant to it. Over time, these resistant pests will grow in number and we will be left with pests that are immune to our chemical defenses. In the end, we have a bigger problem than the one we started with, and probably no corn, because the new “super pest” ate it all.

Biodiversity is important for ensuring that pests do not grow out of control. The type of farming needed to produce large quantities of biofuels is generally not amenable to high levels of biodiversity. This presents a fundamental problem in producing biofuels, compounded by the fact that “super pests” produced in the effort to grow biofuels can also threaten food crops.

EE-Unit-V Environmental Biotechnology

Environmental biotechnology is a system of scientific and engineering knowledge related to the use of microorganisms and their products in the prevention of environmental pollution, through the biotreatment of solid, liquid, and gaseous wastes, the bioremediation of polluted environments, and the biomonitoring of the environment and of treatment processes. The advantages of biotechnological treatment of wastes are: biodegradation or detoxication of a wide spectrum of hazardous substances by natural microorganisms; the availability of a wide range of biotechnological methods for complete destruction of hazardous wastes; and the diversity of conditions suitable for biodegradation.

The main considerations for applying biotechnology in waste treatment are a technically and economically reasonable rate of biodegradation or detoxication during treatment, the large volume of treated wastes, and the ability of natural microorganisms to degrade the substances. The type of biotreatment is based on the physiological type of the applied microorganisms: fermenting anaerobic, anaerobically respiring (anoxic), microaerophilic, or aerobically respiring microorganisms. All types of biotechnological waste treatment can be enhanced through optimal environmental factors, better availability of contaminants and nutrients, or the addition of biomass of selected strain(s).

Bioaugmentation can accelerate start-up or the biotreatment process:

  • when the microorganisms necessary for hazardous waste treatment are absent from the waste, or their concentration is low;
  • when the rate of bioremediation performed by indigenous microorganisms is not sufficient to achieve the treatment goal within the prescribed duration;
  • when it is necessary to direct the biodegradation along the best of several possible pathways; and
  • to prevent the growth and dispersion, in the waste treatment system, of unwanted or uncharacterized microbial strains, which may be pathogenic or opportunistic.
Biosensors are essential tools in biomonitoring of environment and treatment processes. Combinations of biosensors in array can be used to measure concentration or toxicity of a set of hazardous substances. Microarrays for simultaneous qualitative or quantitative detection of different microorganisms or specific genes in the environmental sample are also useful in the monitoring of environment.

Environmental protection using Biotechnology 

  1. The surroundings around us are termed as ‘environment’. Our environment includes the abiotic component (the non living) and biotic component (the living). The abiotic environment includes air, water and soil; and the biotic environment consists of all living organisms such as plants, animals and microorganisms. Environmental pollution broadly refers to the presence of undesirable substances in the environment which are harmful to man and other organisms. There has been a significant increase in the levels of harmful environmental pollution mostly due to direct or indirect human activities in recent past. The major sources of environmental pollution are industries, agricultural and other anthropogenic and biogenic sources etc. The pollutants are chemical, biological and physical in nature.
  2. The conservation of the environment and of biodiversity, and the control of environmental pollution, are major focus areas for countries around the world. In this context, the importance and impact of biotechnological approaches have to be thoroughly evaluated. There have been serious concerns regarding the use of biotechnological products and the assessment of their impact due to their interaction with environmental factors. A lobby of environmentalists has expressed alarm at the release of genetically engineered organisms into the environment and has stressed thorough investigation and proper risk assessment of these organisms before releasing them. The effect of effluents from biotechnology companies is also a cause of concern. The need of the hour is a proper debate on the safety of biotechnological products. Efforts are on not only to use biotechnology to protect the environment from pollution but also to conserve natural resources. As microorganisms are known natural scavengers, microbial preparations (both natural and genetically engineered) can be used to clean up environmental hazards.
  3. Biotechnology is being used to provide alternative cleaner technologies which help to further reduce the hazardous environmental implications of the traditional technologies. Some of the well known examples and mechanisms are:

(i) Some fermentation technologies have some serious environmental implications. Various biotechnological processes have been devised in which all nutrients introduced for fermentation are retained in the final product, which ensures high conversion efficiency and low environmental impact.

(ii) In the paper industry, pulp bleaching technologies are being replaced by more environmentally friendly biotechnological alternatives. Pulp processing helps to remove lignin without damaging valuable cellulosic fibres, but the conventional techniques suffer from high costs, high energy use and corrosion. A lignin-degrading and modifying enzyme (LDM) was isolated from Phanerochaete chrysosporium and used; on the one hand it helped to reduce energy costs and corrosion, and on the other it increased the life of the system. This approach helped in reducing the environmental hazards associated with bleach plant effluents.

(iii) In the plastics industry, conventional technologies use oil-based raw materials to extract ethylene and propylene, which are converted to alkene oxides and then polymerized to form plastics such as polypropylene and polyethylene. There is always the risk of these raw materials escaping into the atmosphere and causing pollution. Using biotechnology, safer raw materials such as sugars (glucose) are being used, which are converted into alkene oxides enzymatically or through the direct use of microbes; e.g. Methylococcus capsulatus has been used for converting alkenes into alkene oxides.

(iv) Bioremediation is defined as ‘the process of using microorganisms to remove environmental pollutants’, with the microbes serving as scavengers. The removal of organic wastes by microbes leads to environmental clean-up. Other terms used for bioremediation are bio-treatment, bio-reclamation and bio-restoration. The term “xenobiotics” (xenos means foreign) refers to unnatural, foreign and synthetic chemicals such as pesticides, herbicides, refrigerants, solvents and other organic compounds. Microbial degradation of xenobiotics also helps in reducing environmental pollution. Depending on the method followed to clean up the environment, bioremediation is carried out in two ways:

(a) In situ bioremediation – involves a direct approach for the microbial degradation of xenobiotics at the site of pollution which could be soil, water etc. The in situ bioremediation is generally used for clean up of oil spillages, beaches etc.;

(b) Ex situ bioremediation – in this method, the waste and toxic material is collected from the polluted sites and a selected range of microorganisms carries out the bioremediation at a designed place. This process is an improvement over the in situ method.

(v) Pseudomonas which is a soil microorganism effectively degrades xenobiotics. Different strains of Pseudomonas that are capable of detoxifying more than 100 organic compounds (e.g. phenols, biphenyls, organophosphates, naphthalene etc.) have been identified. Some other microbial strains are also known to have the capacity to degrade xenobiotics such as Mycobacterium, Alcaligenes, Norcardia etc.

  1. In recent years, efforts have been made to create genetically engineered microorganisms to enhance bioremediation and overcome some of its limitations, namely: (a) the growth of microorganisms is sometimes inhibited or reduced by the xenobiotics; (b) no single naturally occurring microorganism can degrade all the xenobiotics present in environmental pollution; (c) microbial degradation is a very slow process; and (d) certain xenobiotics get adsorbed onto the particulate matter of soil and thus become unavailable for microbial degradation.
  2. As the majority of genes responsible for synthesizing enzymes with biodegradation capability are located on plasmids, genetic manipulation of plasmids can lead to the creation of new bacterial strains with different degradative pathways. A well-known example of such genetic manipulation is the development of the ‘superbug’, which is used to degrade a number of petroleum hydrocarbons simultaneously, such as camphor, octane, xylene, naphthalene etc.
  3. Carbon dioxide (CO2) is the main cause of the greenhouse effect and the rise in atmospheric temperature. There is a steady increase in CO2 content due to its continuous addition from various sources, particularly industrial processes, so reducing the atmospheric CO2 concentration assumes significance. Biotechnological methods have been used to reduce the atmospheric CO2 content at two levels:

(a) Photosynthesis – plants utilize CO2 during photosynthesis, which reduces the CO2 content of the atmosphere;

(b) Biological calcification – certain deep-sea organisms like corals and green and red algae store CO2 through a process of biological calcification. As the CaCO3 gets precipitated, more and more atmospheric CO2 can be utilized for its formation.

  1. Sewage is treated to get rid of these undesirable substances by subjecting the organic matter to biodegradation by microorganisms. Biodegradation breaks the organic matter down into smaller molecules, such as CO2, NH3, PO4 etc., and requires a constant supply of oxygen. Supplying oxygen is expensive and tedious and requires considerable expertise and manpower. These problems are overcome by growing micro-algae in the ponds and tanks where sewage treatment is carried out: the algae release O2 while carrying out photosynthesis, which ensures a continuous supply of oxygen for biodegradation. The algae are also capable of adsorbing certain toxic heavy metals, since the negative charges on the algal cell surface take up the positively charged metal ions. Algal treatment of sewage also supports fish growth, as algae are a good source of food for fish.
  2. The environmental impact assessment system requires proponents to foresee possible environmental impacts when a development project is being planned, and to conduct an environmental assessment. However, debate continues on exactly what kinds of environmental protection measures are needed and how they should be integrated into a given project to achieve desirable environmental results. Actions to deal with global warming and to prevent ozone layer depletion are gaining momentum, but currently available technologies may not be enough to meet the required targets. Technological advances are needed in order to make progress in solving these issues, as well as with the problem of dioxins. New developments are also needed in technologies for pollution removal and environmental restoration, in cases where environmental pollution has already been generated or is already accumulating in the environment.
Environmental biotechnology – serving the future

Like white biotechnology, environmental biotechnology, often referred to as “grey biotechnology”, also focuses on sustainability. For instance, environmental biotechnology deals with the treatment of sewage water, the purification of exhaust gas or the decontamination of soils or ground water using specific microorganisms.

The use of organisms for the removal of contamination or pollutants is generally referred to as bioremediation. Originally, bioremediation was mainly used in cleanup operations, including the decomposition of spilt oil or slagheaps containing radioactive waste. In addition, bioremediation is also the method of choice when solvents, plastics or heavy metals and toxic substances like DDT, dioxins or TNT need to be removed.

Bioadsorption processes based on newly developed bioadsorbers made from renewable materials are currently under development. These adsorbers function as ion exchangers and are used in the elimination and disposal of toxic heavy metals. The industrial use of mineral resources leads to a drastic accumulation of these pollutants in the biosphere. The new bioadsorbers are used for the elimination of heavy metals and radionuclides from industrial wastewater, ore-mine wastewater, seepage water from dumpsites and wastewater from nuclear power stations.

EE-Unit-IV Noise Pollution

Sound, a normal feature of our life, is the means of communication and entertainment in most animals, including human beings. It is also a very effective alarm system. A low sound is pleasant whereas a loud sound is unpleasant and is commonly referred to as ‘noise’. Noise can be defined as an unpleasant and unwanted sound.

Whether a given sound is as pleasant as music or as unpleasant as noise depends on its loudness, duration, rhythm and the mood of the person. But loudness is definitely the most significant criterion which converts sound into noise. Exposure to loud noise is indeed annoying and harmful too.
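Loudness is conventionally measured on the logarithmic decibel (dB) scale, L = 10 log10(I/I0), where I0 = 10^-12 W/m2 is the threshold of human hearing. A minimal sketch of this formula (the function name is illustrative, not from the source):

```python
import math

I0 = 1e-12  # reference intensity (W/m^2): threshold of human hearing

def intensity_to_db(intensity):
    """Sound level in decibels for a given sound intensity in W/m^2."""
    return 10 * math.log10(intensity / I0)

# Each tenfold increase in intensity adds 10 dB to the sound level:
print(intensity_to_db(1e-12))  # 0.0 (threshold of hearing)
print(intensity_to_db(1e-3))   # 90.0 (heavy machinery range)
```

The logarithmic scale is why a 10 dB increase represents a tenfold jump in sound intensity, not a modest one.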

Noise is a physical form of pollution and is not directly harmful to the life-supporting systems, namely air, soil and water. Its effects are felt more directly by the receiver, i.e. man. Noise pollution is the result of modern industrialized urban life and congestion due to overpopulation.

Even though noise pollution is not fatal to human life, its importance cannot be overlooked, because repeated exposure to noise reduces the sleeping hours and the productivity or efficiency of a human being. It affects peace of mind and invades the privacy of a human being. The importance of noise pollution as an environmental problem is being recognised as the ill effects of noise on human health and the environment become more evident with each passing day.

Sources of Noise Pollution:

Major causes / sources of noise pollution are:

(i) Industrial Sources:

Progress in technology (industrialization) has resulted in creating noise pollution. Textile mills, printing presses, engineering establishments, metal works etc. contribute heavily towards noise pollution. In industrial cities like Kolkata, Ludhiana and Kanpur, the industrial zones are often not separated from the residential zones of the city, especially in the case of small-scale industries.

These operate from workshops located on the ground floors of residential areas and cause annoyance, discomfort and irritation to the residents exposed to the noise that is inevitably produced. The situation is much better in modern planned cities like Chandigarh, where the industrial area is kept away from the residential areas and the two are separated by a sufficiently wide green belt.

(ii) Transport Vehicles:

The automobile revolution in urban centres has proved to be a big source of noise pollution. Increasing traffic has given rise to traffic jams in congested areas, where the repeated hooting of horns by impatient drivers pierces the ears of all road users.

Noise from aeroplanes constitutes an increasingly serious problem in big cities like Delhi and Mumbai, as airports are situated in the vicinity of population centres and the aeroplanes pass over residential areas. Heavy trucks, buses, trains, jet planes, motorcycles, scooters, mopeds, jeeps: the list of vehicles is endless but the outcome is the same, noise pollution.

(iii) Household:

The household is an industry in itself and a source of many indoor noises, such as the banging of doors, the noise of playing children, the crying of infants, the moving of furniture and loud conversation among the inhabitants. Besides these, there is the entertainment equipment in the house, namely radios, record players and television sets. Domestic gadgets like mixer-grinders, pressure cookers, desert coolers, air-conditioners, exhaust fans, vacuum cleaners, and sewing and washing machines are all indoor sources of noise pollution.

(iv) Public Address System:

In India, people need only the slightest of excuses for using loudspeakers. The occasion may be a religious function, a birth, a death, a marriage, elections, a demonstration, or just commercial advertising. The public address system, therefore, contributes in its own way towards noise pollution.

(v) Agricultural Machines:

Tractors, threshers, harvesters, tube wells, powered tillers etc. have made agriculture highly mechanized but at the same time highly noisy. Noise levels of 90 dB to 98 dB due to the running of farm machines have been recorded in the state of Punjab.

(vi) Defence Equipment:

A lot of noise pollution is added to the atmosphere by artillery, tanks, the launching of rockets, explosions, exercises of military aeroplanes and shooting practice. The scream of jet engines and sonic booms have a deafening impact on the ears and in extreme cases have been known to shatter window panes and old, dilapidated buildings.

(vii) Miscellaneous Sources:

Automobile repair shops, construction work, blasting, bulldozing, stone crushing etc. are other sources of noise pollution.

Effects of Noise:

Noise is generally harmful and a serious health hazard. It has far-reaching consequences and many physical, physiological as well as psychological effects on human beings.

(i) Physical Effects:

The physical manifestation of noise pollution is its effect on hearing ability. Repeated exposure to noise may result in a temporary or permanent shift of a person's hearing threshold, depending upon the level and duration of exposure. The immediate and acute effect of noise pollution is impairment of hearing (i.e. total deafness).

Human ears have sensory cells for hearing. If these cells are subjected to repeated sounds of high intensity before they have an opportunity to recover fully, they can become permanently damaged, leading to impairment of hearing. Besides the sensory cells, the delicate tympanic membrane, or ear drum, can also be permanently damaged by a sudden loud noise such as an explosion.

(ii) Physiological Effects:

The physiological manifestations of noise pollution are several, as mentioned below:

(a) Headache by dilating blood vessels of the brain.

(b) Increase in the rate of heart-beat.

(c) Narrowing of arteries.

(d) Fluctuations in the arterial blood pressure by increasing the level of cholesterol in the blood.

(e) Decrease in heart output.

(f) Pain in the heart.

(g) Digestive spasms through anxiety and dilation of the pupil of the eye, thereby causing eye-strain.

(h) Impairment of night vision.

(i) Decrease in the rate of colour perception.

(j) Lowering of concentration and effect on memory.

(k) Muscular strain and nervous breakdown.

(iii) Psychological Effects:

The psychological manifestations of noise pollution are:

(a) Depression and fatigue, which considerably reduce the efficiency of a person.

(b) Insomnia as a result of lack of undisturbed and refreshing sleep.

(c) Straining of the senses and annoyance as a result of slow but persistent noise from motorcycles, alarm clocks, call bells, telephone rings etc.

(d) Affecting of the psychomotor performance of a person by a sudden loud sound.

(e) Emotional disturbance.

For a talkative person, the most important effect of noise pollution would invariably be that noise interferes with conversation. Noise is annoying, and the annoyance depends on many factors, not merely the intensity of the sound but also its repetition: even a sound of small intensity (e.g. a dripping tap or the ticking of a clock) may become annoying simply by repetition.

Some of the well-known effects of noise on human beings, and the relation between noise pollution level and its harmful effects, are shown in Table 1.

Table 1. Noise Pollution Level and its Harmful Effects:

Level (in dB) Effects
up to 23 ……………………….. No disturbance
30—60 ……………………….. Stress, tension, psychological effects (illness, heart attack), especially at the upper range
60—90 ……………………….. Damage to health; psychological and vegetative effects (disturbance in stomach-gall function, pains in muscles, high blood pressure, disturbance in sleeping)
90—120 ……………………… Damage to health and otological (ear disease) effects
Above 120 ……………………. Painful effects in the long run
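The bands of Table 1 can be read as a simple lookup. The sketch below encodes them, treating the ranges as successive bands with upper limits of 23, 60, 90 and 120 dB (an assumption, since the table does not state which band a boundary value belongs to):

```python
def noise_effect(level_db):
    """Map a noise level in dB to the effect band of Table 1."""
    if level_db <= 23:
        return "No disturbance"
    elif level_db <= 60:
        return "Stress, tension, psychological effects"
    elif level_db <= 90:
        return "Damage to health, psychological and vegetative effects"
    elif level_db <= 120:
        return "Damage to health and otological effects"
    else:
        return "Painful effects in the long run"

# The Punjab farm-machinery levels of 90-98 dB mentioned earlier fall
# into the otological-damage band:
print(noise_effect(95))
```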