Why Shouldn't Men Wear Shalwar-Kameez [archived]

Middle age is defined as the time in life when you stop growing at both ends and begin growing in the middle. If you have ever tried to slip into jeans two inches too small in the waist, you know what a relief it is to wear our traditional shalwar-kameez, the de facto national symbol of the Pakistani male. However, this comfort has a price tag, and the time has come to evaluate whether we really need this cross between the traditional pajama (a discovery of the Indian subcontinent, now popular throughout the world) and the attire of the sultan's harem girls, the Turkish pants, long outlawed in Turkey.

The first problem with the shalwar-kameez is that it too closely resembles what we often wear at home or at night; slipping into clothes different from our dreaming suits is needed to change our professional mindset. The attire is also dangerous when working with moving machinery, and you can easily trip. Wearing shalwar-kameez also keeps you oblivious to the ever-growing bulge in the middle. You can always loosen the shalwar a bit, and the long fall of the kameez keeps it all hidden, well tucked in. You would never know when you ballooned from a waist of 30 to 44. Also, since it hides body curves, it neither lets you lament nor gloat about how you look, which is otherwise a motivation for maintaining good health.

A phenomenon unique to shalwar-kameez wearers is the ease and consistency with which they reach for their crotch. This favorite pastime of the Pakistani male is vividly on display at bus stops and in offices. Embarrassing to the core, the habit can also be damaging to the prostate gland.

Finally, men should not wear the shalwar-kameez because it resembles a woman's dress. What is the alternative, then? Why not adopt the practical, modern Western attire? And before you slap the national identity crap on this opinion, remember that the entire West wears pajamas, straight out of our inventory. I think the time has come for us to find our identity in better things than this outdated, physically hazardous sleepwear that promotes poor attitudes and bad health habits.

[17 March 1995 The Daily Dawn]

On Leaving the Company I founded, Adello Biologics


More than a decade ago, I founded Therapeutic Proteins Inc. (now Adello Biologics) with a vision of making affordable biosimilars in the USA. Today, Adello has products under review by the FDA, including, uniquely, the first therapeutic protein manufactured using a single-use technology that I invented. I wish to thank the angel investors, who gave me a head start; Jim Berens, who placed his trust in my passion and helped me build clean rooms; Steve Einhorn, for providing the first large funding; and Chirag, Chintu, and Tushar Patel, who helped me complete a fully integrated GMP facility in Chicago. It was a challenging ride, but we came through. Adello is now in good hands with Peter and Mike, and it is about time the baton was passed to new hands that are much better at handling supply chain issues than I ever could. I leave the company I founded with the experience of a lifetime and an immense sense of humility.

Novel Approaches to Demonstrating Bioequivalence

Sarfaraz K. Niazi, Ph.D., SI, FRAS, FPAMS, FACB, Adjunct Professor of Biopharmaceutical Sciences, University of Illinois College of Pharmacy, Chicago, Illinois; Founder, Adello Biologics, LLC. www.niazi.com; niazi@niazi.com; +1-312-297-0000

The FDA held a Public Meeting on 17 July 2017 on Administering the Hatch-Waxman Amendments[1]: Ensuring a Balance Between Innovation and Access. The comments below were submitted in response to this Public Meeting (reference 1k1-8yz7-ttos).

The Drug Price Competition and Patent Term Restoration Act (Public Law 98-417) of 1984 has made drugs more accessible. One of the key elements of the Act is the demonstration of bioequivalence for generic products containing drugs that are not eligible for a bioequivalence waiver.

Bioequivalence testing originated in the early 1970s with the ±20 rule. As one review noted, “the shortcomings of this approach were immediately evident, since such a criterion would theoretically allow the parameters of generic product A to differ from the reference (innovator) product by +20%, while allowing the parameters of generic product B to differ from the reference product by -20%. The net difference between the generic products A and B would then be as much as 40% and, therefore, beyond the limits of therapeutic equivalence as originally conceived.”[2] In response, the FDA adopted a powered approach in the early 1980s. Both approaches were discontinued by the FDA in 1986 because of public concern about bioequivalence and were subsequently replaced in 1992 by the 90% CI approach, which remains the current criterion for bioequivalence decisions.[3] How strictly the FDA holds to this criterion is evident in the current bioequivalence testing guidance, which states[4] that “We recommend that applicants not round off CI values; therefore, to pass a CI limit of 80 to 125 percent, the value should be at least 80.00 percent and not more than 125.00 percent.” [The upper limit of 125.00 percent is simply the reciprocal of the 80.00 percent lower limit (100/125 = 0.80), making the 20% margin symmetric on the ratio scale.] Why is there a need to follow this limit so strictly? According to the FDA[5], the range is based on a clinical judgment that a test product whose bioavailability falls outside it should be denied market access. A 90% CI is used because a 5% statistical error is allowed at both the upper and the lower limits; the total error is therefore 10%, yielding the 90% CI. A predefined range is much more intuitive and easier to grasp than the reality that multiple PK parameters must each fit within a narrow CI.
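
To make the criterion concrete, here is a minimal sketch, using hypothetical AUC values and a simplified paired log-scale analysis rather than the full crossover ANOVA described in the guidance, of how the 90% CI for the test/reference geometric mean ratio is computed and checked against the unrounded 80.00-125.00 percent limits:

    # Minimal sketch of the average-bioequivalence 90% CI criterion.
    # The AUC values are hypothetical, and the paired analysis is a simplification
    # of the crossover ANOVA that regulatory submissions actually use.
    import math
    from statistics import mean, stdev
    from scipy import stats

    auc_test = [98, 105, 110, 92, 101, 99, 107, 95, 103, 100, 97, 104]   # test product
    auc_ref  = [100, 102, 108, 95, 100, 98, 110, 96, 101, 99, 98, 102]   # reference product

    # Work on the log scale, as the guidance requires
    diffs = [math.log(t) - math.log(r) for t, r in zip(auc_test, auc_ref)]
    n = len(diffs)
    d_bar = mean(diffs)
    se = stdev(diffs) / math.sqrt(n)

    # A 90% CI corresponds to two one-sided tests, each at the 5% level
    t_crit = stats.t.ppf(0.95, df=n - 1)
    lower = math.exp(d_bar - t_crit * se) * 100   # percent
    upper = math.exp(d_bar + t_crit * se) * 100   # percent

    # FDA criterion: do not round; the CI must lie within 80.00-125.00 percent
    passes = lower >= 80.00 and upper <= 125.00
    print(f"90% CI for the geometric mean ratio: {lower:.2f}% to {upper:.2f}% -> {'pass' if passes else 'fail'}")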

Now that the FDA has initiated its efforts to bring innovation to the Act, I would like to suggest that we re-examine the current guidance on how to meet the BA and BE requirements set forth in 21 CFR part 320 as they apply to dosage forms intended for oral administration, or to non-orally administered drug products when reliance on systemic exposure measures is suitable to document BA and BE (e.g., transdermal delivery systems and certain rectal and nasal drug products). A precedent for this change comes from The Patient Protection and Affordable Care Act (Affordable Care Act), signed into law by President Obama on March 23, 2010, which amends the Public Health Service Act (PHS Act) to create an abbreviated licensure pathway for biological products that are demonstrated to be “biosimilar” to or “interchangeable” with an FDA-licensed biological product. This pathway is provided in the part of the law known as the Biologics Price Competition and Innovation Act (BPCI Act)[6]. Under the BPCI Act, a biosimilar product is a biological product approved on a showing that it is highly similar to an FDA-approved biological product, known as the reference product, and has no clinically meaningful differences from the reference product in terms of safety and effectiveness.

What I am proposing is that we add the consideration of “clinically meaningful” differences to the bioequivalence testing of generic products, instead of using a fixed bioequivalence range, to make this test more relevant. An FDA analysis of submissions between 1996 and 2001 for highly variable drugs showed that the mean AUC varied by 10%,[7] while the range allowed was 20%. Such statements can be misleading, since they do not reflect the granularity of the data: there are situations where even 10% variability is too much, and others where 30% variability will still yield a clinically equivalent product. The FDA has recently begun discussing the bioequivalence of narrow therapeutic index (NTI) drugs[8]. Four new draft guidances posted online recommend replicate design studies for NTI drugs, including tacrolimus, phenytoin tablets, levothyroxine, and carbamazepine. The movement from “one size fits all” to product-specific standards is a sign of the maturation of the generic drug program. However, the emphasis in these recent approaches has been on the statistical design, not on the margin, which remains an iron-clad acceptance criterion. Replicate design helps reduce the size of the study, yet it does not add a clinically meaningful component to it.

I am recommending that the FDA allow developers to justify a margin that they can demonstrate provides a clinically meaningful comparison, which may include novel in vitro testing methods or other approaches that have not been explored, mainly because of the fixation on complying with the equivalence margins as currently mandated. The roots of this recommendation come from 21 CFR Part 320[9]: “(e) Bioequivalence means the absence of a significant difference in the rate and extent to which the active ingredient or active moiety in pharmaceutical equivalents or pharmaceutical alternatives becomes available at the site of drug action when administered at the same molar dose under similar conditions in an appropriately designed study.” Since the site of drug action cited here is in most cases not known and rarely sampled, blood level PK studies were suggested as a surrogate, not a primary test, to demonstrate bioequivalence. In fact, given that blood level studies add substantially more variability to the assessment of bioequivalence, better means of establishing bioequivalence are in vitro tests, in which a generic product is more likely to show a discernible difference. One such means is the demonstration of thermodynamic equivalence, for which a Citizen Petition has already been filed and is open for comments (https://www.regulations.gov/comment?D=FDA-2007-P-0055-0004).

Thermodynamic equivalence (TE) is, by another name, the basis for the biowaivers that have been in place for years. For a highly soluble drug, the energy barrier (ΔG) is small, overcoming any differences between two products. I am expanding this concept to drugs subject to blood level studies. Why would a drug product fail in BE when it contains the same chemical entity? It is inevitably the release profile at the site of delivery, since from that point forward all factors apply equally. Dissolution rate testing is the best example of measuring chemical potential, and while it works well for products with a small ΔG, it fails for drugs that are not released instantly. Creating a matrix of dissolution profiles, independent of any physiologic conditions, may be able to discern differences not picked up by current dissolution testing. TE provides a better estimate of the equivalence of the generic and reference products at the site of administration, a more important attribute since the rest of the complexity and variability is common to both the generic and the reference product. The TE test can also be extended to provide continuous monitoring of the bioequivalence of the generic drug over the lifecycle of the reference product, an attribute that is currently not required.

Public Law 98-417 has served US citizens well; now it is time to examine its components using novel approaches to improve its utility. That requires abandoning the arbitrary equivalence criteria and the redundant blood level study, which was a surrogate in the first place and need not be retained unless it is proven to bring a greater understanding of any clinically meaningful difference between a generic and a reference product.

[1] https://www.fda.gov/drugs/developmentapprovalprocess/smallbusinessassistance/ucm069959.htm

[2] Henderson JD, Esham RH. Generic substitution: issues for problematic drugs. South Med J. 2001 Jan;94(1):16-21.

[3] https://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm389370.pdf

[4] https://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm389370.pdf

[5] Buehler G. History of bioequivalence for critical dose drugs. FDA. fda.gov/downloads/%20AdvisoryCommittees/…/UCM209319.pdf. Published April 13, 2010. Accessed June 6, 2016.

[6] https://www.fda.gov/drugs/developmentapprovalprocess/howdrugsaredevelopedandapproved/approvalapplications/therapeuticbiologicapplications/biosimilars/

[7] https://www.google.com/search?q=how+did+fda+decide+on+80-125%25+limit&oq=how+did+fda+decide+on+80-125%25+limit&aqs=chrome..69i57.10095j0j4&sourceid=chrome&ie=UTF-8

[8] https://www.fda.gov/ForIndustry/UserFees/GenericDrugUserFees/ucm500577.htm

[9] https://www.gpo.gov/fdsys/pkg/CFR-2009-title21-vol5/pdf/CFR-2009-title21-vol5-part320.pdf

FDA Accepts Our Filgrastim Biosimilar Application

It has been a long journey: I founded the company and grew it into a fully integrated, pure-play, US-based biosimilars company. Our 351(k) application has been accepted by the FDA for review (Adello Biologics); several more products are underway, now that we have a great team in place. There is no substitute for learning it the hard way. Over the next few weeks, I am giving several talks to share my experience and my plans for expediting the development of biosimilars. niazi@niazi.com and www.niazi.com

Fingerprint-like Non-inferiority Similarity Demonstration--A New Invention

The FDA, and now the EMA, emphasize analytical similarity as the pivotal step to minimize residual uncertainty, but creating a plan to demonstrate fingerprint-like similarity has been difficult because of improper use of testing methods, testing protocols, statistical modeling, and knowledge of the variability within and between lots. I have addressed this issue in several of my books, including Biosimilars and Interchangeable Biologics: Tactical Issues (https://www.niazi.com/handbooks-and-technical-books), but now I have invented a new method that makes fingerprint-like similarity testing possible. This should help bring biosimilars to market faster by assuring minimal residual uncertainty in the early phases of development. What will make biosimilars accessible (available and affordable) is a mindset that supports the FDA's thinking on scientific approaches, which the Agency has demonstrated repeatedly; the ball is in the court of developers to give the FDA reasons to approve biosimilars without lengthy, expensive clinical trials. I will be happy to provide more information and details to anyone wishing to examine their approach.

While the concept of non-inferiority testing has been applied to clinical trials, it is yet to be applied to analytical similarity testing; the testing protocols shared by the FDA in reviewing three biosimilars (filgrastim, insulin glargine, and bevacizumab) show a more conventional approach to demonstrating similarity, not non-inferiority. The CQAs for the latter are different from those used for demonstrating similarity.
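
As an illustration only, the sketch below applies a one-sided non-inferiority bound to a single quality attribute; the attribute, the lot values, and the 5-point margin are all hypothetical, and this is not a protocol any agency has endorsed:

    # Hypothetical sketch: one-sided non-inferiority comparison of a single
    # quality attribute (e.g., relative potency in percent) between biosimilar
    # and reference lots. All numbers, including the margin, are made up.
    import math
    from statistics import mean, stdev
    from scipy import stats

    biosimilar_lots = [98.5, 101.2, 99.8, 100.4, 97.9, 102.1]
    reference_lots  = [100.3, 99.6, 101.8, 100.9, 98.7, 101.1]
    margin = 5.0  # pre-specified non-inferiority margin (percentage points)

    nb, nr = len(biosimilar_lots), len(reference_lots)
    diff = mean(biosimilar_lots) - mean(reference_lots)

    # Pooled standard error of the difference between the two lot means
    sp2 = ((nb - 1) * stdev(biosimilar_lots) ** 2 +
           (nr - 1) * stdev(reference_lots) ** 2) / (nb + nr - 2)
    se = math.sqrt(sp2 * (1 / nb + 1 / nr))

    # One-sided 95% lower confidence bound on (biosimilar - reference)
    t_crit = stats.t.ppf(0.95, df=nb + nr - 2)
    lower_bound = diff - t_crit * se

    # Non-inferior if the lower bound stays above the pre-specified -margin
    print(f"Lower 95% bound: {lower_bound:.2f}; non-inferior: {lower_bound > -margin}")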

My inventions include methods to demonstrate difference through a process of thermodynamic extrapolation. I have shared the novelty aspects as well as the non-inferiority trial designs with regulatory agencies, and they have all encouraged this approach. I will soon be filing a Citizen Petition to advance this new technology and the proposed approach. Being able to demonstrate non-inferiority is a double-edged sword; biosimilar developers can use it to show fingerprint-like similarity, but the LBP companies can also use these methods to show a lack of similarity, whether it is clinically meaningful or not.

I strongly urge the scientific community in the biologics arena to come up with better tests that would allow the FDA to approve products without requiring phase 3 studies in cases where those studies take a long time to complete and add to the cost of biosimilar development.

Making Alcoholic Beverages Unique and Affordable--New Patent Issued 29 August 2017.

The alcoholic beverage industry is bigger than the pharmaceutical industry, worth over $1.5 trillion worldwide, yet the technology for aging alcohol, letting it sit in a barrel, dates back thousands of years. While I have developed many technologies, with dozens of patents, to make biological drugs more affordable, and these technologies are used to reduce the COGs of biosimilars, I am now reporting my new invention for the continuous aging of alcoholic beverages, eliminating wood barrels and accelerating the natural process thousands of times by manipulating the thermodynamics of Fick's law of diffusion. For the first time, we can introduce proprietary tastes using several types of wood simultaneously and produce the desired product at a much lower cost. It is about time we became more creative with what is perhaps the largest product category in the world. Yes, we should make drugs affordable, but then why not also the beverage that makes billions happy every day? This patent is one of about a dozen on the fast aging of alcoholic beverages, including wines, whiskies, and others, allowing storage in multiple-use bottles without affecting the quality of the contents, and many more inventions to change a technology that has seen little change over 7,000 years.

A Biosimilar Delayed is a Biosimilar Denied

After decades of experience taking biosimilars from cell line to market, and after facing both successes and failures, I feel qualified to talk about what not to do when it comes to making biosimilars accessible (available and affordable). I am inviting all of my friends (and I have many) to join me at this conference, where I provide a step-by-step approach to succeeding in securing FDA approval of biosimilars. This is an experience based on first-hand working knowledge of how the FDA lays out its expectations. I have been instrumental in contributing to FDA guidelines and have written detailed treatises on the science and technology of biosimilar approvals under the 351(k) and 505(b)(2) provisions.

Thermodynamic Equivalence to Demonstrate Bioequivalence

The 1980s saw the emergence of generic drugs in the US, which has saved patients hundreds of billions of dollars and improved access to drugs. The statute that created this abbreviated pathway for approval of generic drugs stated that therapeutic equivalence means the same concentration of active drug at the site of action, an evaluation that was neither possible nor practical. So the FDA recommended using blood level studies as a surrogate test to demonstrate bioequivalence. Later, the FDA agreed to remove this requirement, the biowaiver, for drugs that are highly soluble. I am now introducing a new concept, thermodynamic equivalence, in lieu of bioequivalence testing.

Thermodynamic equivalence (TE) is, by another name, the basis for the biowaivers that have been in place for years. For a highly soluble drug, the energy barrier (ΔG) is small, overcoming any differences between two products. I am expanding this concept to drugs subject to blood level studies. Why would a drug product fail in BE when it contains the same chemical entity? It is inevitably the release profile at the site of delivery, since from that point forward all factors apply equally. Dissolution rate testing is the best example of measuring chemical potential, and while it works well for products with a small ΔG, it fails for drugs that are not released instantly. Creating a matrix of dissolution profiles independent of any physiologic conditions, such as a 3x3 matrix, may be able to discern differences not picked up by current dissolution testing. This is not a theoretical suggestion; it is already in practice, for example in the comparison of biologics, where a different approach to matching CQAs allows the FDA to approve them without requiring phase 3 studies. The industry should now attempt to use this concept to request biowaivers, particularly for highly complex product designs, to reduce cost and time to market.
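
The metric for thermodynamic equivalence itself is not spelled out here, so the sketch below is only a stand-in: it scores hypothetical test and reference dissolution profiles across a 3x3 matrix of media and temperatures using the conventional f2 similarity factor, to show what a matrix-based comparison might look like in practice:

    # Sketch of a 3x3 dissolution matrix comparison scored with the conventional
    # f2 similarity factor; media, temperatures, and all percent-dissolved values
    # are hypothetical placeholders.
    import math

    def f2(reference, test):
        """f2 similarity factor; values >= 50 are conventionally read as similar."""
        n = len(reference)
        msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
        return 50 * math.log10(100 / math.sqrt(1 + msd))

    # Percent dissolved at 10, 20, 30, and 45 minutes for each matrix cell
    conditions = [(medium, temp) for medium in ("pH 1.2", "pH 4.5", "pH 6.8")
                                 for temp in (25, 37, 45)]
    reference_profiles = {c: [35, 60, 82, 95] for c in conditions}   # placeholder data
    test_profiles      = {c: [33, 57, 80, 94] for c in conditions}   # placeholder data

    for c in conditions:
        score = f2(reference_profiles[c], test_profiles[c])
        print(f"{c}: f2 = {score:.1f} -> {'similar' if score >= 50 else 'different'}")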

It was after years of similar discussion that the FDA agreed to look into the concept of TE, which can be used continually to assure life-cycle therapeutic equivalence. The FDA has now opened up this discussion, agreeing that the concept needs to be explored further.

http://www.prnewswire.com/news-releases/pharmaceutical-scientist-inc-fda-calls-for-public-comments-on-bioequivalence-testing-300489368.html?tc=eml_cleartime

From writing suggestions for the first bioequivalence guidance to biosimilarity testing, I have been engaged with the FDA, and I am now confident that we are entering a phase of scientific reality that can further reduce the burden of drug development.

 

Why do we exist?

About 80,000 years ago, a variant of the genus Homo, Sapiens, came out of East Africa and spread across the globe, annihilating the other holders of the Homo title and many large animal species, and now, finally, devouring the environment that supports its own existence. The renewal of species is a common occurrence in the mega plan of evolution, and tomorrow's Homo species will be very different from us, but today we cannot predict how. A span of 80,000 years is not enough to produce any significant change in our genetic code, part of which made us curious and inquisitive, perhaps as a means of protecting us against the unknown; the survival of our species depended on it. Someday, when we no longer fear walking into a dark room, we will have overcome this genetic coding, but not now. Our curiosity and inquisitiveness were endless 80,000 years ago, and they remain so today, except that we now have a better vocabulary with which to frame our questions.

The first question that came to the mind of the foraging Sapiens was: “How do we survive?” We had not yet domesticated crops or bred animals for food, and we lived off whatever came in our path as we moved around, mostly in groups of fewer than 100, for anything bigger than that caused a split and the rise of another group that was not likely to be friendly to us. While our toolmaking skills were superior to those of the Neanderthals and Homo erectus, where we truly excelled was in organizing our groups. We realized early in our foraging times that some of us were better at one task than others, and that gave rise to what we today call professions. As tasks were assigned, it became difficult to leave the group, because we had become dependent on others in the group for our survival, and thus grew societies and kingdoms. Those who were good at ruling found it to be a great profession, and thus dictators, pharaohs, prophets, kings, and bigots rose. You can now appreciate how one question resulted in the creation of civilization.

Another question, which came long after we had assured our survival, was “Where are we?” We could see the stars around us and wanted to know our place in the arena of whatever was visible to us. Our self-centered nature led us to believe that we are the focus of the Universe. Historically, the center of the Universe has been placed in several locations. Many mythological and religious cosmologies included an Axis Mundi, the central axis of a flat Earth that connects the Earth, the heavens, and other realms. In 4th-century BCE Greece, the geocentric model was developed from astronomical observation, proposing that the center of the Universe lies at the center of a spherical, stationary Earth, around which the sun, moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the sun was believed to be the center of the Universe, with the planets (including Earth) and the stars orbiting it. In the early 20th century, the discovery of other galaxies and the formulation of the Big Bang theory led to cosmological models of a homogeneous, isotropic Universe, which lacks a central point and is expanding at all points. The reason we resisted accepting that we are not the center of the Universe is that it forces us to realize how small and insignificant we are. The narcissism bred into our genes has not yet left our construction.

The question “Who are we?” created great tumult in human society because of its amorphous nature. The early foraging Sapiens, desiring to grow their communities, produced con artists who sold gods, and what better way to connect than by assuring people that they are indeed the chosen ones? The story caught on well, and every religion claims that its followers are the righteous ones, or else why would anyone believe in it? Today, most of us believe we are a creation of God, of one kind or another, despite the indisputable theory of evolution proposed by Darwin in the mid-19th century. Evolution is a change in the heritable characteristics of biological populations over successive generations. Evolutionary processes give rise to biodiversity at every level of biological organization, including the levels of species, individual organisms, and molecules. All life on Earth shares a common ancestor known as the last universal common ancestor (LUCA), which lived approximately 4.1 billion years ago. Repeated formation of new species, change within species, and loss of species (extinction) throughout the evolutionary history of life on Earth are demonstrated by shared sets of morphological and biochemical traits, including shared DNA sequences. [We still carry a few genetic sequences of the Neanderthals.] More than 99 percent of all species that ever lived on Earth are estimated to be extinct, and estimates of the number of Earth's current species range from 10 to 14 million. Primates diverged from other mammals about 85 million years ago; the Hominini tribe (humans, the Australopithecines and other extinct bipedal genera, and chimpanzees) parted from the Gorillini tribe (gorillas) some 8-9 million years ago, and a couple of million years later we further separated into more refined humans and bipedal ancestors.

The creation–evolution controversy is an ongoing, recurring cultural, political, and theological dispute about the origins of the Earth, of humanity, and of other life. Within the Christian world (fundamentalists excepted), evolution by natural selection has been accepted as an empirical scientific fact, with the qualification that “evolution requires the creation of beings that evolve.” Ironically, the rules of genetic inheritance were first discovered by a Catholic priest, the Augustinian monk Gregor Mendel, who is known today as the founder of modern genetics. Most other religions deny evolution outright, as it threatens their very foundation. While the question “Who are we?” remains disputed, it has lost much of its stigma and is no longer a thorny issue.

There remains one more question: “When did we begin?” Early shamans resolved the issue by relegating the responsibility to gods, and we were relatively satisfied with the resolution that it happened whenever we think it did. Aristotle taught that the universe had existed forever, so as to avoid invoking divine intervention to create it; those who believed the universe had a beginning used that beginning as an argument for the existence of God as the first cause. To Aristotle this was disturbing, because it would raise the question of who created God, the omnipotent. An interesting philosophic argument goes as follows: “Can God create another God stronger than Him?” If the answer is yes, our God is not omnipotent, and if the answer is no, then our God is still not omnipotent. By one famous reckoning based on the Book of Genesis, the world was created at 9 in the morning on October 27, 4004 BC.

In 1915, Einstein introduced the General Theory of Relativity, according to which space and time were no longer absolute, no longer a solid background to events. Since time began with the beginning of the universe, the question of what existed before that moment becomes meaningless; it is like asking what is north of the North Pole. The equations of General Relativity break down at the singularity. What we do know today is that the universe is expanding, and if the galaxies are moving away from each other, they must have been close to each other at one time, taking us back, inevitably, to God. Scientists proposed a Steady State theory in which, as galaxies moved apart, new galaxies formed from matter continually created throughout space; the universe would then have existed forever, always looking the same. It could not be proven. Another attempt to avoid the universe having a beginning was the suggestion of a previous contracting phase in which, because of rotation and local irregularities, the matter would not all fall to the same point. Observational evidence that the universe had a very dense beginning came in October 1965 with the discovery of a faint background of microwaves throughout space. These microwaves are the same as those in your microwave oven, but very much less powerful.

Einstein's theory cannot predict how the universe began, only how it evolves once it has begun; the theory breaks down in the strong gravitational fields of the early universe. However, if we combine the theory with quantum theory, we can get rid of the problem of time having a beginning. Suppose the start of the universe was like the North Pole of the Earth, with degrees of latitude playing the role of time. The universe would start as a point at the North Pole. As one moves south, the circles of constant latitude, representing the size of the universe, would expand. To ask what happened before the beginning of the universe becomes a meaningless question, because there is nothing north of the North Pole.

And now comes the final question, “Why do we exist?”, a matter that is readily answered if we assume a divine power: it was God's desire. So, if science says it was 15 billion years ago, so be it. However, we cannot ask why 15 billion, or what happened at that moment to make God decide to say, “Let there be light.” If there was an event, regardless of how disputed it may be, our existence must have a reason, if it is not God's will. Quantum mechanics may even go as far as telling us that we do not actually exist, that it is all in our imagination, and while such arguments go above the boggle line of most, it nevertheless offers a proposition that is also subject to the “why” question. Both existence and the lack of existence need an explanation of why this duality was necessary. For thousands of years we have tried to answer the question in vain. All we needed to do was re-examine the curiosity and inquisitiveness bred into our species. Our understanding of where, how, and when continues to go through major transformations as we develop new capabilities, if not the abilities, to deconstruct our observations. We may come to fully understand that quantum mechanics shows there is no need for anything to exist without a beginning, but the fact that anything exists at all keeps the question “why” open. As science grows, our boggle line, the limit above which we do not understand, keeps getting lower. Where this line sits depends on our ability, not our capability, to understand. To most, the General Theory of Relativity is clearly above the boggle line; to some, why it takes eight minutes for sunlight to reach the Earth remains above the boggle line. Imagine being a frog in a well, looking outside from within: can you see what is outside the well? The answer is no. Try asking a bat, “What do you see?”, and the bat responds, “You mean what I hear?” These are the dividing lines, not the boggle lines. We are an integral part of the universe, caught in a maze of matter and energy, and therefore unable to step outside the maze to see where we are.

In quantum mechanics, the uncertainty principle, also known as Heisenberg's principle, gives us an opportunity to address the “why” question. According to the principle, for certain pairs of physical properties, such as the position and momentum of a particle, you can know only one property accurately, not both at the same time. The more precisely the position of a particle is determined, the less precisely its momentum can be known, and vice versa. A bullet shot out of a rifle can be accurately recorded for its position in the air or its momentum (mass times velocity), but not both at the same time, because if you need to stop the bullet at a point to know its position, that will change its velocity and thus its momentum. The Heisenberg principle goes beyond that, however, and works on a probability function that includes the wavelike behavior of matter as well. If you find this confusing, think of this: by looking at the moon, you change the course of the moon! Now you get it?
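
In its quantitative form the principle is the familiar inequality

    \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \hbar \approx 1.05 \times 10^{-34}\ \mathrm{J\,s},

so the bound is utterly negligible for a bullet of ordinary mass but dominates the behavior of an electron.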

Many other examples defy human imagination. Think of present and past. We see things because the light reflected from them reaches the back of our eyes. Light travels at a speed of 186,000 miles per second, yet every quantum of light reaching our eyes takes a finite time to arrive, and therefore what we see is always a past event; there is no present event that can be seen. Divide any number by an ever smaller divisor and the result grows without bound toward infinity, yet no matter how large a number you name, it is still less than infinity. In other words, if you can define infinity, it is not infinity. If you divide the circumference of a circle by its diameter, you end up with “pi,” which has been calculated to several trillion decimal places and never terminates. Surprised? And if you want to appreciate the limitations of the human mind, think of the famous paradoxes of philosophy.

Can you define what infinity is? For if you can, then it is not infinity.

Suppose someone tells you “I am lying.” If what she tells you is true, then she is lying, in which case what she tells you is false. On the other hand, if what she tells you is false, then she is not lying, in which case what she tells you is true. In short: if “I am lying” is true then it is false, and if it is false then it is true. 

Suppose that there is motion. Assume that Achilles and a tortoise are moving around a track in a footrace, in which the tortoise has been given a modest lead. Naturally, Achilles runs faster than the tortoise. If Achilles is at point A and the tortoise at point B, then to catch the tortoise Achilles will have to traverse the interval AB. However, in the time it takes Achilles to arrive at point B, the tortoise will have moved on (however slowly) to point C. Then, to catch the tortoise, Achilles will have to traverse the interval BC. However, in the time it takes him to arrive at point C, the tortoise will have moved on to point D, and so on for an infinite number of intervals. It follows that Achilles can never catch the tortoise, which is absurd.

These riddles, paradoxes, and undefined realities surround us for one reason: our vocabulary. If we did not have a vocabulary, we would never be talking to ourselves and asking these questions, such as “Why are we here?” There could be no dialog. Stephen Hawking put it aptly: “What was God doing before He made the world? Was He preparing Hell for people who asked such questions?” So we come to a crossroads where we must ask whether our minds have evolved enough to frame the question correctly. Is “Why are we here?” a question or a statement? No answer can satisfy every curious mind. Those unable to question their ability to question inevitably fall to accepting a Creator and close the issue; those who have evolved enough to have at least some basic wisdom would say: this is not a frameable question.

Amorphous Scalia And The New USA

The death of Justice Antonin Scalia in 2016 reminded me of his 1986 vote for the Louisiana law that forbade public schools to teach evolution without also instructing students in “creation science.” Chief Justice William Rehnquist joined Scalia, who was heavily criticized by academics; as Stephen Jay Gould of Harvard said, “I regret to say that Justice Scalia does not understand the subject matter of evolutionary biology.” Scalia was open about the reasons for his decision. He was skeptical about “what we can really know for sure,” criticizing his colleagues' decision for treating the evidence for evolution as “conclusive.” To Scalia, “creation science” was indeed a science as well. To the other justices it was a constitutional issue of church and state, but to Scalia it was a matter of belief, not necessarily belief in creation, but doubt about the proof against creation science. In the 17th century, Galileo was ordered to turn himself in to the Holy Office to stand trial for holding the belief that the Earth revolves around the Sun, which the Catholic Church deemed heretical. In 1616, exactly 400 years ago, the Inquisition found heliocentrism to be formally heretical; heliocentric books were banned, and Galileo was ordered to refrain from holding, teaching, or defending heliocentric ideas. More than 350 years after the Roman Catholic Church condemned Galileo, Pope John Paul II rectified one of the Church's most infamous wrongs. Three centuries from today, we will look back and find a member of the Supreme Court just as primitive as the Church of Galileo's day seems to us now. But with a caveat.

The judiciary must judge cases on their merits and beyond a reasonable doubt. The task becomes onerous when the judgment involves amorphous considerations. There is no sharp line in judging when a decision rests on values, morality, and, more particularly, on the complexity of an issue that is beyond a judge's capability to understand. Scalia admitted that he did not get the science and had moral difficulty reconciling it with his beliefs regarding Creation. What people three hundred years from now will find is that there was a drastic change in US values as judges rendered a plurality of decisions on amorphous grounds that changed our society for all time to come. It was too late.