Google’s attempt to control the discourse is escalating to ridiculous heights. This is a great article.
Source Article by Sayer Ji: GOOGLE: "Organic is a Lie, Supplements are Dangerous, Chiropractic is Fake," and Other Thoughts They Want You To Think https://www.greenmedinfo.com/blog/google-organic-lie-supplements-are-dangerous-chiropractic-fake-and-other-thoughts
“ORGANIC IS A… LIE, SHAM, MYTH, WASTE OF MONEY, MARKETING GIMMICK”
Recently, a shocking discovery was made: Google is autocompleting the search fields of billions of users with false information (on topics ranging from natural health to candidates for election), based not on objective search-volume data but on an extremely biased political and socio-economic agenda, one that is jeopardizing the health and human rights of everyone on the planet.
On June 3rd, 2019, it was discovered that Google had scrubbed its search results clean of natural health sites, resulting in some losing as much as 99% of their traffic. Soon after, it was discovered that Google also uses its autocomplete function to manipulate users into thinking that natural approaches to health are fraudulent and even harmful. This is Part 2 of our ongoing series exposing these practices. Part 1 can be found here.
Google manipulates your search results in a very specific way. For instance, if you start your search out with “supplements are,” Google will autocomplete your search field with the following suggestions:
“SUPPLEMENTS ARE BAD, USELESS, NOT REGULATED, DANGEROUS, SCAMS”
Most Google users believe that its suggestions reflect the volume of searches others are doing on the topic — a reasonable assumption, given that Google says its algorithm is “based on several factors, like how often others have searched for a term.” In fact, Google goes out of its way to say it is not making subjective suggestions, but objective predictions based on real searches:
“Predictions, not suggestions
You’ll notice we call these autocomplete “predictions” rather than “suggestions,” and there’s a good reason for that. Autocomplete is designed to help people complete a search they were intending to do, not to suggest new types of searches to be performed. These are our best predictions of the query you were likely to continue entering.
How do we determine these predictions? We look at the real searches that happen on Google and show common and trending ones relevant to the characters that are entered and also related to your location and previous searches.” Source: Google
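What Google describes in the quote above is frequency-ranked prefix matching over a log of real queries. A minimal sketch of that claimed mechanism, using invented query strings and counts purely for illustration:

```python
from collections import Counter

def build_predictor(query_log):
    """Count how often each full query appears in the search log."""
    return Counter(query_log)

def predict(counts, prefix, k=5):
    """Return the k most-searched logged queries that start with prefix."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    matches.sort(key=lambda qn: qn[1], reverse=True)
    return [q for q, _ in matches[:k]]

# Invented log: the "good" phrasing searched far more often than "bad".
log = (["supplements are good"] * 50
       + ["supplements are safe"] * 30
       + ["supplements are bad"] * 10)

counts = build_predictor(log)
print(predict(counts, "supplements are"))
```

A predictor that is purely volume-based, as Google claims its own is, would necessarily rank the most-searched completion first; the article’s argument is that Google’s actual suggestions do not behave this way.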
But Google Trends data show the “supplements are” autocomplete results above to be inaccurate, if not blatantly falsified. In fact, keyword search-volume trend lines show that since 2004, searches for the phrase “supplements are bad” have run far below searches for “supplements are good,” and the gap continues to widen, with roughly five times more people searching about supplements in a positive rather than a negative light. This is the very definition of Orwellian inversion: Good becomes Bad, and War becomes Peace.
Amazingly, a third Google product, Keyword Planner, part of its extremely profitable Google Ads division, gives an even more precise quantification of how many searches were actually performed in the United States in the past month with the phrase “supplements are bad.” The result? Only 100–1,000 searches per month, which works out to roughly 3 to 33 searches a day.
That’s right: out of the entire population of the United States (327,321,076 as of March 26, 2018), at most a few dozen people a day type the phrase “supplements are bad” into the Google search engine. Yet if any of those 327 million people type “supplements are…” into Google, the search will be completed for them with the suggestion that supplements are “bad,” steering them toward information on how bad they are.
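Converting Keyword Planner’s monthly range into a daily rate is a one-line calculation (a month averages about 30.4 days):

```python
DAYS_PER_MONTH = 365.25 / 12  # average month length, ~30.44 days

def monthly_to_daily(monthly_searches):
    """Convert a monthly search count to an average daily rate."""
    return monthly_searches / DAYS_PER_MONTH

# Keyword Planner's reported range for "supplements are bad" in the U.S.
low, high = 100, 1_000
print(monthly_to_daily(low), monthly_to_daily(high))
# roughly 3.3 to 32.9 searches per day
```

Even at the top of the range, that is a few dozen daily searches against a population of over 327 million users who all receive the same autocomplete suggestion.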
In order to demonstrate that this result is not a fluke, let’s look at the search “taking vitamins…” and see what Google suggests in its autocomplete.
Example #1: “TAKING VITAMINS IS A BAD”
And what does the Google Trend data show? A null result: “Hmm, your search doesn’t have enough data to show here.”
This should not be surprising, considering that search engines field queries, not affirmative statements reflecting foregone conclusions. But it shows how thoroughly a very specific anti-nutritional-industry political agenda is embedded within Google’s algorithm.
When we drop this phrase into Google’s keyword planner, what do we get? An astounding 0-10 people search this term every month in the U.S. In other words, no one.
We discussed the potential corrupting influence of pharmaceutical companies, with whom Google partners and receives investment, on their results in our previous article: INVESTIGATION: Google Manipulates Search Suggestions To Promote Pharma, Discredit Natural Health.
Alternative search engines like DuckDuckGo, on the other hand, won’t suggest anything comparable, because they do not autocomplete in the way Google does, which Google states “is designed to help people complete a search they were intending to do, not to suggest new types of searches to be performed.”
Our investigation has uncovered a number of examples like this, where Google places autocomplete suggestions into the search user’s mind that are not only the opposite of what most people search for, but sometimes things no one searches for at all — indicating that Google’s ostensibly objective feature is literally a propaganda device, programming users to think thoughts they would never otherwise consider.
This has profound implications, as we will explore below: the so-called Search Engine Manipulation Effect (SEME), identified by researchers in 2013, is one of the most powerfully influential forces on human behavior ever discovered — so powerful, in fact, that it may have determined the outcome of a quarter of the world’s elections in recent years.
But first, let’s look at further examples of Google’s dystopian search results, such as:
Example #2: “GMOS ARE GOOD”
Google Trends data for “gmos are good” vs. “gmos are bad”: “gmos are bad” wins.
Example #3: “ORGANIC IS A LIE”
Google Trends data for “organic is a lie”: null finding.
Example #4: “HOMEOPATHY IS FAKE”
Google Trends data for “homeopathy is fake”: null finding.
Example #5: “CHIROPRACTIC IS FAKE”
Google Trends data for “chiropractic is fake” vs. “chiropractic is real”: “chiropractic is real” wins.
Example #6: “NATUROPATHY IS FAKE”
Google Trends data for “naturopathy is fake”: null finding.
What’s really going on here?
One might argue that the examples shown above are benign, and may even reflect a twisted sense of humor. After all, wasn’t Google’s original tongue-in-cheek motto “Don’t be evil”? And how seriously do we take a company whose name, after all, is as silly as Google? It turns out, however, that the sort of manipulations revealed here actually have extremely powerful effects on human thinking and behavior — far beyond what most can even imagine.
The true extent to which Google’s search algorithm affects human society today was first revealed by research psychologist Robert Epstein and his associate Ronald E. Robertson, who discovered the search engine manipulation effect (SEME) in 2013 — the largest human behavioral effect ever identified. In fact, their randomized, controlled research revealed that Google’s “instant” search tool, which autocompletes a user’s queries, may be so extraordinarily powerful as to have determined the outcomes of a quarter of the world’s elections in recent years.
Stanford Seminar – The Search Engine Manipulation Effect (SEME) and Its Unparalleled Power by Robert Epstein.
Their 2015 paper, “The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections,” published in the Proceedings of the National Academy of Sciences, is well worth reading. It found that within certain voter subpopulations, such as undecided Republicans, SEME was powerful enough to shift voting preferences by up to 80%.
Weaponized: How The Search Suggestion Effect (SSE) Gave Google Orwellian Power
When someone searches Google — an act so common that “google” was added as a transitive verb to the Oxford English Dictionary and the eleventh edition of Merriam-Webster’s Collegiate Dictionary in 2006 — they are often at their most uncertain and vulnerable, which is precisely why they have a question and are deferring to Google for an answer. In fact, roughly 63,000 Google searches are performed around the world every second, which translates into about 227 million searches per hour and roughly 2 trillion searches per year. The majority of these searches will present an autocomplete suggestion, effectively completing the user’s thoughts for them.
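The per-hour and per-year figures follow directly from the 63,000-searches-per-second rate cited above:

```python
SEARCHES_PER_SECOND = 63_000  # rate cited in the article

# Scale up by seconds per hour, then by hours and days per (non-leap) year.
per_hour = SEARCHES_PER_SECOND * 3_600
per_year = per_hour * 24 * 365

print(f"{per_hour:,} searches per hour")  # 226,800,000
print(f"{per_year:,} searches per year")  # 1,986,768,000,000 (~2 trillion)
```

So the cited rate works out to roughly 227 million searches per hour and just under 2 trillion per year.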
Most searchers assume the results Google presents are objective and credible, because of the perceived power and omniscience of its algorithms. This is why Google’s autocomplete feature is so powerful, and why, if it offers not an accurate prediction of what the searcher is looking for but its opposite, it can profoundly influence a person’s thinking and subsequent behavior. It is fundamentally the trust one places in Google, the assumption that it has no agenda of its own, that gives it its immense power and draw.
This is why a recent undercover investigation by Project Veritas is so concerning. James O’Keefe interviewed a top Google executive who admitted that Google adjusted its algorithms to manipulate elections. You can watch the video below:
Where do we go from here?
The research on Google’s manipulation of search results has just begun, and there are other topics to be explored. For instance, we addressed Google’s attempt to discredit vaccine safety and health freedom advocates by further amplifying the dehumanizing effects of the socially engineered slur “anti-vaxxer” as follows:
Yet Google Trends shows that this is not a search that the public makes, globally, nor in the United States.