The term year 2000 problem,[1] or simply Y2K, refers to potential computer errors related to the formatting and storage of calendar data for dates in and after the year 2000. Many programs represented four-digit years with only the final two digits, making the year 2000 indistinguishable from 1900. Computer systems' inability to distinguish dates correctly had the potential to bring down worldwide infrastructures for computer-reliant industries.
In the years leading up to the turn of the millennium, the public gradually became aware of the "Y2K scare", and individual companies predicted the global damage caused by the bug would require anything between $400 million and $600 billion to rectify.[2] A lack of clarity regarding the potential dangers of the bug led some to stock up on food, water, and firearms, purchase backup generators, and withdraw large sums of money in anticipation of a computer-induced apocalypse.[3]
Contrary to published expectations, few major errors occurred in 2000. Supporters of the Y2K remediation effort argued that this was primarily due to the pre-emptive action of many computer programmers and information technology experts. Companies and organizations in some countries, but not all, had checked, fixed, and upgraded their computer systems to address the problem.[4][5] Then-U.S. president Bill Clinton, who organized efforts to minimize the damage in the United States, labeled Y2K as "the first challenge of the 21st century successfully met",[6] and retrospectives on the event typically commend the programmers who worked to avert the anticipated disaster.
Critics argued that even in countries where very little had been done to fix software, problems were minimal. The same was true in sectors such as schools and small businesses where compliance with Y2K policies was patchy at best.
Y2K is a numeronym and was the common abbreviation for the year 2000 software problem. The abbreviation combines the letter Y for "year", the number 2 and a capitalized version of k for the SI unit prefix kilo meaning 1000; hence, 2K signifies 2000. It was also named the "millennium bug" because it was associated with the popular (rather than literal) rollover of the millennium, even though most of the problems could have occurred at the end of any century.
Computerworld's 1993 three-page "Doomsday 2000" article by Peter de Jager was called "the information-age equivalent of the midnight ride of Paul Revere" by The New York Times.[7][8][9]
The problem was the subject of the early book Computers in Crisis by Jerome and Marilyn Murray (Petrocelli, 1984; reissued by McGraw-Hill under the title The Year 2000 Computing Crisis in 1996). Its first recorded mention on a Usenet newsgroup is from 18 January 1985 by Spencer Bolles.[10]
The acronym Y2K has been attributed to Massachusetts programmer David Eddy[11] in an e-mail sent on 12 June 1995. He later said, "People were calling it CDC (Century Date Change), FADL (Faulty Date Logic). There were other contenders. Y2K just came off my fingertips."[12]
The problem started because, on both mainframe computers and later personal computers, memory was expensive, costing from as low as US$10 per kilobyte to more than US$100 per kilobyte in 1975.[13][14] It was therefore very important for programmers to minimize usage. Since computers only gained wide usage in the 20th century, programs could simply prefix "19" to the year of a date, allowing them to store only the last two digits of the year instead of four. As space on disk and tape storage was also expensive, these strategies saved money by reducing the size of stored data files and databases, in exchange for becoming unusable past the year 2000.[15]
This meant that programs reading two-digit years could not distinguish between dates in 1900 and 2000. Warnings at times were dire:
The Y2K problem is the electronic equivalent of the El Niño and there will be nasty surprises around the globe.
— John Hamre, United States Deputy Secretary of Defense[16]
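The core failure mode is easy to reproduce. The following minimal sketch (in Python, with an invented age calculation standing in for the many kinds of date arithmetic legacy programs performed) shows how two-digit years go wrong at the century boundary:

```python
# A minimal sketch of the underlying arithmetic bug: storing only the
# last two digits of the year makes 2000 indistinguishable from 1900.

def age_in_years(birth_yy: int, current_yy: int) -> int:
    """Compute an age from two-digit years, as many legacy programs did."""
    return current_yy - birth_yy

# A customer born in 1965, evaluated in 1999: works as expected.
print(age_in_years(65, 99))   # 34

# The same customer evaluated in 2000 ("00"): the age goes negative.
print(age_in_years(65, 0))    # -65, instead of 35
```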
Options on the De Jager Year 2000 Index, "the first index enabling investors to manage risk associated with the ... computer problem linked to the year 2000", began trading in mid-March 1997.[17]
Special committees were set up by governments to monitor remedial work and contingency planning, particularly by crucial infrastructures such as telecommunications, utilities and the like, to ensure that the most critical services had fixed their own problems and were prepared for problems with others. While some commentators and experts argued that the coverage of the problem largely amounted to scaremongering,[18] it was only the safe passing of the main event itself, 1 January 2000, that fully quelled public fears.[citation needed]
Some experts who argued that scaremongering was occurring, such as Ross Anderson, professor of security engineering at the University of Cambridge Computer Laboratory, have since claimed that despite sending out hundreds of press releases about research results suggesting that the problem was not likely to be as big as some had suggested, they were largely ignored by the media.[18] In a similar vein, the Microsoft Press book Running Office 2000 Professional, published in May 1999, accurately predicted that most personal computer hardware and software would be unaffected by the year 2000 problem.[19] Authors Michael Halvorson and Michael Young characterized most of the worries as popular hysteria, an opinion echoed by Microsoft Corp.[20]
The practice of using two-digit dates for convenience predates computers, but was never a problem until stored dates were used in calculations.
I'm one of the culprits who created this problem. I used to write those programs back in the 1960s and 1970s, and was proud of the fact that I was able to squeeze a few elements of space out of my program by not having to put a 19 before the year. Back then, it was very important. We used to spend a lot of time running through various mathematical exercises before we started to write our programs so that they could be very clearly delimited with respect to space and the use of capacity. It never entered our minds that those programs would have lasted for more than a few years. As a consequence, they are very poorly documented. If I were to go back and look at some of the programs I wrote 30 years ago, I would have one terribly difficult time working my way through step-by-step.
—Alan Greenspan, 1998[21]
Business data processing was done using unit record equipment and punched cards, most commonly the 80-column variety employed by IBM, which dominated the industry. Many tricks were used to squeeze needed data into fixed-field 80-character records. Saving two digits for every date field was significant in this effort.
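As an illustration, the sketch below parses a punched-card-style fixed-field record in Python; the field positions and contents are invented, but they show why dropping two digits from every date mattered when a whole record had to fit in 80 columns:

```python
# A hypothetical 80-column record layout (field positions invented for
# illustration): saving two digits per date field was a real saving when
# every record had to fit in 80 characters.
record = "0012345SMITH     JOHN      991231" + " " * 47  # pad to 80 cols

account = record[0:7]
surname = record[7:17].rstrip()
date_yy = record[27:29]   # two-digit year: "99"
date_mm = record[29:31]
date_dd = record[31:33]

# Reconstructing the year by prefixing "19" works only until 2000.
year = int("19" + date_yy)
print(account, surname, year)  # 0012345 SMITH 1999
```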
In the 1960s, computer memory and mass storage were scarce and expensive. Early core memory cost one dollar per bit. Popular commercial computers, such as the IBM 1401, shipped with as little as 2 kilobytes of memory. Programs often mimicked card processing techniques. Commercial programming languages of the time, such as COBOL and RPG, processed numbers in their character representations. Over time, the punched cards were converted to magnetic tape and then disk files, but the structure of the data usually changed very little.
Data was still input using punched cards until the mid-1970s. Machine architectures, programming languages and application designs were evolving rapidly. Neither managers nor programmers of that time expected their programs to remain in use for many decades, and the possibility that these programs would both remain in use and cause problems when interacting with databases – a new type of program with different characteristics – went largely uncommented upon.
The first person known to publicly address this issue was Bob Bemer, who had noticed it in 1958 as a result of work on genealogical software. He spent the next twenty years fruitlessly trying to raise awareness of the problem with programmers, IBM, the government of the United States, and the International Organization for Standardization. This included the recommendation that the COBOL picture clause should be used to specify four-digit years for dates.[23]
In the 1980s, the brokerage industry began to address this issue, mostly because of bonds with maturity dates beyond the year 2000. By 1987 the New York Stock Exchange had reportedly spent over $20 million on Y2K, including hiring 100 programmers.[24]
Despite magazine articles on the subject from 1970 onward, the majority of programmers and managers only started recognizing Y2K as a looming problem in the mid-1990s, but even then, inertia and complacency caused it to be mostly unresolved until the last few years of the decade. In 1989, Erik Naggum was instrumental in ensuring that internet mail used four digit representations of years by including a strong recommendation to this effect in the internet host requirements document RFC 1123.[25] On April Fools' Day 1998, some companies set their mainframe computer dates to 2001, so that "the wrong date will be perceived as good fun instead of bad computing" while having a full day of testing.[26]
Some programs used three-digit years and three-digit day-of-year counts, while others stored the number of days since a fixed date, such as 1 January 1900.[27] Inaction was not an option and risked major failure: embedded systems with similar date logic were expected to malfunction and cause utilities and other crucial infrastructure to fail.
Saving space on stored dates persisted into the Unix era, with most systems packing dates into a single 32-bit word, typically as seconds elapsed from some fixed date, which causes the similar Y2K38 problem.[citation needed]
Storage of a combined date and time within a fixed binary field is often considered a solution, but the possibility for software to misinterpret dates remains because such date and time representations must be relative to some known origin. Rollover of such systems is still a problem but can happen at varying dates and can fail in various ways. For example:
The date of 4 January 1975 overflowed the 12-bit field that had been used in the DECsystem-10 operating systems. There were numerous problems and crashes related to this bug while an alternative format was developed.[33]
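As a rough illustration, the commonly documented DECsystem-10 packing of a date into 12 bits can be reconstructed as follows; this is a sketch, assuming the formula ((year − 1964) × 12 + month − 1) × 31 + day − 1:

```python
# A sketch of the DECsystem-10 date word as commonly documented:
# ((year - 1964) * 12 + (month - 1)) * 31 + (day - 1), packed in 12 bits.

def dec10_date(year: int, month: int, day: int) -> int:
    return ((year - 1964) * 12 + (month - 1)) * 31 + (day - 1)

print(dec10_date(1975, 1, 4))          # 4095 -- the largest 12-bit value
print(dec10_date(1975, 1, 5))          # 4096 -- no longer fits in 12 bits
print(dec10_date(1975, 1, 5) & 0xFFF)  # 0 -- reads back as 1 January 1964
```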
Even before 1 January 2000 arrived, there were also some worries about 9 September 1999 (albeit less than those generated by Y2K). Because this date could also be written in the numeric format 9/9/99, it could have conflicted with the date value 9999, frequently used to specify an unknown date. It was thus possible that database programs might act on the records containing unknown dates on that day. Data entry operators commonly entered 9999 into required fields for an unknown future date (e.g., a termination date for cable television or telephone service) in order to process computer forms using CICS software.[34] Somewhat similar to this is the end-of-file code 9999, used in older programming languages. While fears arose that some programs might unexpectedly terminate on that date, the bug was more likely to confuse computer operators than machines.
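A minimal sketch of the feared collision, assuming a loosely formatted M/D/YY date with no zero padding:

```python
# A sketch of the 9 September 1999 worry: a date printed in a loose
# M/D/YY style collapses to the same digits as the "9999" sentinel that
# operators keyed into unknown-date fields.

SENTINEL = "9999"  # conventional "no date / end of data" marker

def loose_date(month: int, day: int, year: int) -> str:
    return f"{month}{day}{year % 100}"  # no zero padding, two-digit year

print(loose_date(9, 9, 1999))              # '9999'
print(loose_date(9, 9, 1999) == SENTINEL)  # True -- the feared collision
```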
Normally, a year is a leap year if it is evenly divisible by four. A year divisible by 100 is not a leap year in the Gregorian calendar unless it is also divisible by 400. For example, 1600 was a leap year, but 1700, 1800 and 1900 were not. Some programs may have relied on the oversimplified rule that "a year divisible by four is a leap year". This method works fine for the year 2000 (because it is a leap year), and will not become a problem until 2100, when older legacy programs will likely have long since been replaced. Other programs contained incorrect leap year logic, assuming for instance that no year divisible by 100 could be a leap year. An assessment of this leap year problem, including a number of real-life code fragments, appeared in 1998.[35] For information on why century years are treated differently, see Gregorian calendar.
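The two rules can be compared directly; the sketch below shows why the oversimplified rule survives 2000 by luck but fails in 1900 and 2100:

```python
# The full Gregorian rule next to the oversimplified one. The shortcut
# happens to be right for 2000 (divisible by 400) but wrong for 2100.

def is_leap_gregorian(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_naive(year: int) -> bool:
    return year % 4 == 0  # the oversimplified rule some programs used

for year in (1900, 2000, 2100):
    print(year, is_leap_gregorian(year), is_leap_naive(year))
# 1900 False True
# 2000 True True   -- the shortcut survives Y2K by luck
# 2100 False True  -- and fails a century later
```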
Some systems had problems once the year rolled over to 2010. This was dubbed by some in the media as the "Y2K+10" or "Y2.01K" problem.[36]
The main source of problems was confusion between hexadecimal number encoding and binary-coded decimal (BCD) encoding. Both hexadecimal and BCD encode the numbers 0–9 as 0x0–0x9, but BCD encodes the number 10 as 0x10, whereas hexadecimal encodes the number 10 as 0x0A; 0x10 interpreted as a hexadecimal value represents the number 16.
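A minimal sketch of this misreading (the decoder below is illustrative, not taken from any affected product):

```python
# A sketch of the BCD-vs-binary confusion: the byte 0x10 means "10" in
# BCD, but a decoder that treats it as a plain binary number reads 16.

def bcd_decode(byte: int) -> int:
    return (byte >> 4) * 10 + (byte & 0x0F)  # correct BCD reading

year_byte = 0x10              # BCD encoding of the two-digit year "10" (2010)
print(bcd_decode(year_byte))  # 10 -> year 2010
print(year_byte)              # 16 -> misread as year 2016, as in the SMS bug
```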
For example, because the SMS protocol uses BCD for dates, some mobile phone software incorrectly reported dates of SMS messages as 2016 instead of 2010. Windows Mobile was the first software reported to have been affected by this glitch; in some cases WM6 changed the date of any incoming SMS message sent after 1 January 2010 from the year 2010 to 2016.[37][38]
Other systems affected include EFTPOS terminals,[39] and the PlayStation 3 (except the Slim model).[40]
The most significant occurrences of such glitches were in Germany, where up to 20 million bank cards became unusable, and at Citibank Belgium, whose Digipass customer identification chips failed.[41]
When the year 2022 began, many systems using 32-bit integers encountered problems, which are now collectively known as the Y2K22 bug. The maximum value of a signed 32-bit integer, as used in many computer systems, is 2,147,483,647. Systems using such an integer to represent a 10-character date-based field, where the leftmost two characters are the two-digit year, ran into an issue on 1 January 2022, when the leftmost characters needed to be '22', i.e. values from 2,200,000,001 upward needed to be represented.
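A minimal sketch of the overflow, assuming a 10-digit stamp whose leading two digits are the two-digit year, as in the Exchange error message quoted below:

```python
# A sketch of the Y2K22 overflow: a 10-digit date-based stamp whose
# leading two characters are the two-digit year, stored as a signed
# 32-bit integer.

INT32_MAX = 2_147_483_647  # largest value a signed 32-bit integer holds

stamp_2021 = int("2112310001")  # a stamp from 31 December 2021
stamp_2022 = int("2201010001")  # a stamp from 1 January 2022

print(stamp_2021 <= INT32_MAX)  # True  -- still representable
print(stamp_2022 <= INT32_MAX)  # False -- 2,201,010,001 exceeds int32
```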
Microsoft Exchange Server was one of the more significant systems affected by the Y2K22 bug. The problem caused emails to be stuck on transport queues on Exchange Server 2016 and Exchange Server 2019, reporting the following error:
The FIP-FS "Microsoft" Scan Engine failed to load. PID: 23092, Error Code: 0x80004005. Error Description: Can't convert "2201010001" to long.[42]
Many systems use Unix time and store it in a signed 32-bit integer. This data type is only capable of representing integers between −2^31 and 2^31 − 1, treated as the number of seconds since the epoch at 1 January 1970 at 00:00:00 UTC. These systems can only represent times between 13 December 1901 at 20:45:52 UTC and 19 January 2038 at 03:14:07 UTC. If these systems are not updated and fixed, then dates all across the world that rely on Unix time will wrongfully display the year as 1901 beginning at 03:14:08 UTC on 19 January 2038.[citation needed]
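A short sketch of the arithmetic, using Python's datetime module to locate the two boundary instants and the wraparound:

```python
# A sketch of the year 2038 rollover: Unix seconds in a signed 32-bit
# integer wrap from the latest representable instant to the earliest.

from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

print(EPOCH + timedelta(seconds=INT32_MAX))  # 2038-01-19 03:14:07+00:00
print(EPOCH + timedelta(seconds=INT32_MIN))  # 1901-12-13 20:45:52+00:00

# One second past the maximum wraps to the minimum in 32-bit arithmetic:
wrapped = INT32_MAX + 1 - 2**32
print(EPOCH + timedelta(seconds=wrapped))    # back to 1901-12-13 20:45:52
```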
Several very different approaches were used to solve the year 2000 problem in legacy systems.
Problems that occurred on 1 January 2000 were generally regarded as minor.[62] Consequences did not always appear exactly at midnight; some programs were not active at that moment, and problems would only show up when they were invoked. Not all recorded problems were directly caused by Y2K date handling; minor technological glitches occur on a regular basis.
A range of mostly minor problems was nevertheless reported.
Problems were reported on 29 February 2000, the first leap day after the rollover, and on 1 March 2000. These were mostly minor.[95][96][97]
Some software did not correctly recognize 2000 as a leap year, and so worked on the basis of the year having 365 days. On the last day of 2000 (day 366) and the first day of 2001, these systems exhibited various errors. Some computers also treated the new year 2001 as 1901, causing errors. These were generally minor.
Since 2000, various issues have occurred due to errors involving overflows. An issue with time tagging caused the loss of the NASA Deep Impact spacecraft.[106]
Some software used a process called date windowing to fix the issue by interpreting years 00–19 as 2000–2019 and 20–99 as 1920–1999. As a result, a new wave of problems started appearing in 2020, including parking meters in New York City refusing to accept credit cards, issues with Novitus point of sale units, and some utility companies printing bills listing the year 1920. The video game WWE 2K20 also began crashing when the year rolled over, although a patch was distributed later that day.[107]
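A minimal sketch of date windowing with the 00–19/20–99 pivot described above, showing how the fix itself expired in 2020:

```python
# A sketch of date windowing: two-digit years below the pivot are read
# as 2000s, the rest as 1900s.

def window_year(yy: int, pivot: int = 20) -> int:
    return 2000 + yy if yy < pivot else 1900 + yy

print(window_year(5))   # 2005
print(window_year(99))  # 1999
print(window_year(20))  # 1920 -- the fix itself expires in 2020
```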
Although the Bulgarian national identification number allocates only two digits for the birth year, the year 1900 problem and subsequently the Y2K problem were addressed by the use of unused values above 12 in the month range. For all persons born before 1900, the month is stored as the calendar month plus 20, and for all persons born in or after 2000, the month is stored as the calendar month plus 40.[108]
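A sketch of decoding such a number (the function name and interface are invented for illustration):

```python
# A sketch of the Bulgarian scheme: the two-digit birth year is
# disambiguated by offsetting the month field (+20 for births before
# 1900, +40 for births in or after 2000).

def decode_bg_birth(yy: int, month_field: int) -> tuple[int, int]:
    if month_field > 40:
        return 2000 + yy, month_field - 40
    if month_field > 20:
        return 1800 + yy, month_field - 20
    return 1900 + yy, month_field

print(decode_bg_birth(85, 7))   # (1985, 7)
print(decode_bg_birth(85, 27))  # (1885, 7)
print(decode_bg_birth(5, 47))   # (2005, 7)
```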
Canadian Prime Minister Jean Chrétien's most important cabinet ministers were ordered to remain in the capital Ottawa, and gathered at 24 Sussex Drive, the prime minister's residence, to watch the clock.[7] 13,000 Canadian troops were also put on standby.[7]
The Dutch Government promoted Y2K Information Sharing and Analysis Centers (ISACs) to share readiness between industries, without threat of antitrust violations or liability based on information shared.[citation needed]
Norway and Finland changed their national identification numbers to indicate a person's century of birth. In both countries, the birth year was historically indicated by two digits only. This numbering system had already given rise to a similar problem, the "Year 1900 problem", which arose due to problems distinguishing between people born in the 19th and 20th centuries. Y2K fears drew attention to an older issue, while prompting a solution to a new problem. In Finland, the problem was solved by replacing the hyphen ("-") in the number with the letter "A" for people born in the 21st century (for people born before 1900, the sign was already "+").[109] In Norway, the range of the individual numbers following the birth date was altered from 0–499 to 500–999.[citation needed]
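A sketch of the Finnish decoding rule as described above (the function is illustrative):

```python
# A sketch of the Finnish fix: the separator character in the identity
# number carries the century ("+" for the 1800s, "-" for the 1900s,
# "A" for the 2000s).

CENTURY = {"+": 1800, "-": 1900, "A": 2000}

def decode_fi_birth_year(yy: int, separator: str) -> int:
    return CENTURY[separator] + yy

print(decode_fi_birth_year(99, "-"))  # 1999
print(decode_fi_birth_year(1, "A"))   # 2001
```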
Romania also changed its national identification number in response to the Y2K problem, due to the birth year being represented by only two digits. Before 2000, the first digit, which shows the person's sex, was 1 for males and 2 for females. Individuals born since 1 January 2000 have a number starting with 5 if male or 6 if female.[citation needed]
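A sketch of the Romanian rule as described above, covering only the digit values mentioned here:

```python
# A sketch of the Romanian scheme: the leading sex digit doubles as a
# century marker (1/2 for births in the 1900s, 5/6 for the 2000s).

def decode_ro_birth_year(first_digit: int, yy: int) -> int:
    return (1900 if first_digit in (1, 2) else 2000) + yy

print(decode_ro_birth_year(1, 85))  # 1985
print(decode_ro_birth_year(6, 4))   # 2004
```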
The Ugandan government responded to the Y2K threat by setting up a Y2K Task Force.[110] In August 1999, an independent international assessment by the World Bank International Y2K Cooperation Centre found that Uganda's website was in the top category as "highly informative". This put Uganda in the "top 20" out of 107 national governments, on a par with the United States, United Kingdom, Canada, Australia and Japan, and ahead of Germany, Italy, Austria, and Switzerland, which were rated as only "somewhat informative". The report said that "Countries which disclose more Y2K information will be more likely to maintain public confidence in their own countries and in the international markets."[111]
In 1998, the United States government responded to the Y2K threat by passing the Year 2000 Information and Readiness Disclosure Act, which set limits on certain potential liabilities of companies with respect to disclosures about their year 2000 programs; by working with private-sector counterparts to ensure readiness; and by creating internal continuity-of-operations plans in the event of problems.[112][113] The effort was coordinated by the President's Council on Year 2000 Conversion, headed by John Koskinen, in coordination with the Federal Emergency Management Agency (FEMA) and an interim Critical Infrastructure Protection Group within the Department of Justice.[114][115]
The US government followed a three-part approach to the problem: (1) outreach and advocacy, (2) monitoring and assessment, and (3) contingency planning and regulation.[116]
A feature of US government outreach was Y2K websites, including y2k.gov, many of which have become inaccessible in the years since 2000. Some of these websites have been archived by the National Archives and Records Administration or the Wayback Machine.[117][118]
Each federal agency had its own Y2K task force which worked with its private sector counterparts; for example, the FCC had the FCC Year 2000 Task Force.[116][119]
Most industries had contingency plans that relied upon the internet for backup communications. As no federal agency had clear authority with regard to the internet at this time (it had passed from the Department of Defense to the National Science Foundation and then to the Department of Commerce), no agency was assessing the readiness of the internet itself. Therefore, on 30 July 1999, the White House held the White House Internet Y2K Roundtable.[120]
The U.S. government also established the Center for Year 2000 Strategic Stability as a joint operation with the Russian Federation. It was a liaison operation designed to mitigate the possibility of false positive readings in each nation's nuclear attack early warning systems.[121]
The International Y2K Cooperation Center (IY2KCC) was established at the behest of national Y2K coordinators from over 120 countries when they met at the First Global Meeting of National Y2K Coordinators at the United Nations in December 1998.[122] IY2KCC established an office in Washington, D.C., in March 1999. Funding was provided by the World Bank, and Bruce W. McConnell was appointed as director.
IY2KCC's mission was to "promote increased strategic cooperation and action among governments, peoples, and the private sector to minimize adverse Y2K effects on the global society and economy." Activities of IY2KCC were conducted in six areas.
IY2KCC closed down in March 2000.[122]
The Y2K issue was a major topic of discussion in the late 1990s and as such showed up in most popular media. A number of "Y2K disaster" books were published such as Deadline Y2K by Mark Joseph. Movies such as Y2K: Year to Kill capitalized on the currency of Y2K, as did numerous TV shows, comic strips, and computer games.
A variety of fringe groups and individuals such as those within some fundamentalist religious organizations, survivalists, cults, anti-social movements, self-sufficiency enthusiasts and those attracted to conspiracy theories, called attention to Y2K fears and claimed that they provided evidence for their respective theories. End-of-the-world scenarios and apocalyptic themes were common in their communication.
Interest in the survivalist movement peaked in 1999 in its second wave for that decade, triggered by Y2K fears. In the time before extensive efforts were made to rewrite computer programming codes to mitigate the possible impacts, some writers such as Gary North, Ed Yourdon, James Howard Kunstler,[126] and Ed Yardeni anticipated widespread power outages, food and gasoline shortages, and other emergencies. North and others raised the alarm because they thought Y2K code fixes were not being made quickly enough. While a range of authors responded to this wave of concern, two of the most survival-focused texts to emerge were Boston on Y2K (1998) by Kenneth W. Royce and Mike Oehler's The Hippy Survival Guide to Y2K.
Y2K also appeared in the communication of some fundamentalist and charismatic Christian leaders throughout the Western world, particularly in North America and Australia. Their promotion of the perceived risks of Y2K was combined with end times thinking and apocalyptic prophecies, allegedly in an attempt to influence followers.[127] The New York Times reported in late 1999, "The Rev. Jerry Falwell suggested that Y2K would be the confirmation of Christian prophecy – God's instrument to shake this nation, to humble this nation. The Y2K crisis might incite a worldwide revival that would lead to the rapture of the church. Along with many survivalists, Mr. Falwell advised stocking up on food and guns".[128] Adherents in these movements were encouraged to engage in food hoarding, take lessons in self-sufficiency, and the more extreme elements planned for a total collapse of modern society. The Chicago Tribune reported that some large fundamentalist churches, motivated by Y2K, were the sites for flea market-like sales of paraphernalia designed to help people survive a social order crisis ranging from gold coins to wood-burning stoves.[129] Betsy Hart wrote in the Deseret News that many of the more extreme evangelicals used Y2K to promote a political agenda in which the downfall of the government was a desired outcome in order to usher in Christ's reign. She also said, "the cold truth is that preaching chaos is profitable and calm doesn't sell many tapes or books".[130] Y2K fears were described dramatically by New Zealand-based Christian prophetic author and preacher Barry Smith in his publication "I Spy with my Little Eye," where he dedicated an entire chapter to Y2K.[131] Some expected, at times through so-called prophecies, that Y2K would be the beginning of a worldwide Christian revival.[132]
In the aftermath, it became clear that leaders of these fringe groups and churches had manufactured fears of apocalyptic outcomes to manipulate their followers into dramatic scenes of mass repentance or renewed commitment to their groups, as well as urging additional giving of funds. The Baltimore Sun noted this in its article "Apocalypse Now – Y2K spurs fears", citing the increased calls for repentance in the populace in order to avoid God's wrath.[133] Christian leader Col Stringer wrote in a published commentary, "Fear-creating writers sold over 45 million books citing every conceivable catastrophe from civil war, planes dropping from the sky to the end of the civilized world as we know it. Reputable preachers were advocating food storage and a 'head for the caves' mentality. No banks failed, no planes crashed, no wars or civil war started. And yet not one of these prophets of doom has ever apologized for their scare-mongering tactics."[132] Critics, such as Christian journalist Rob Boston in his article "False Prophets, Real Profits", argue that some prominent North American Christian ministries and leaders generated huge personal and corporate profits through sales of Y2K preparation kits, generators, survival guides, published prophecies, and a wide range of other associated merchandise.[127] However, Pat Robertson, founder of the global Christian Broadcasting Network, gave equal time to pessimists and optimists alike and granted that people should at least expect "serious disruptions".[134]
The total cost of the work done in preparation for Y2K likely surpassed US$300 billion ($531 billion as of January 2018, once inflation is taken into account).[135][136] IDC calculated that the US spent an estimated $134 billion ($237 billion) preparing for Y2K, and another $13 billion ($23 billion) fixing problems in 2000 and 2001. Worldwide, $308 billion ($545 billion) was estimated to have been spent on Y2K remediation.[137]
Remedial work was driven by customer demand for solutions.[138] Software suppliers, mindful of their potential legal liability,[123] responded with remedial effort. Software subcontractors were required to certify that their software components were free of date-related problems, which drove further work down the supply chain.
By 1999, many corporations required their suppliers to certify that their software was all Y2K-compliant; some signed off after accepting nothing more than remedial updates. Many businesses, and even whole countries, suffered only minor problems despite spending little effort themselves.[citation needed]
There are two ways to view the events of 2000 from the perspective of its aftermath:
The supporting view holds that the vast majority of problems were fixed correctly and that the money spent was at least partially justified. The situation was essentially one of preemptive alarm. Those who hold this view claim that the lack of problems at the date change reflects the completeness of the project, and that many computer applications would not have continued to function into the 21st century without correction or remediation.
Problems that had been expected in small businesses and small organizations failed to materialize because Y2K fixes had been embedded in routine updates to operating system and utility software[139] applied several years before 31 December 1999.
The extent to which larger industry and government fixes averted issues that would have had more significant impacts was typically not disclosed or widely reported.[140][unreliable source?]
It has been suggested that on 11 September 2001, infrastructure in New York City (including subways, phone service, and financial transactions) was able to continue operation because of the redundant networks established in the event of Y2K bug impact[141] and the contingency plans devised by companies.[142] The terrorist attacks and the following prolonged blackout to lower Manhattan had minimal effect on global banking systems.[143] Backup systems were activated at various locations around the region, many of which had been established to deal with a possible complete failure of networks in Manhattan's Financial District on 31 December 1999.[144]
The contrary view asserts that there were no, or very few, critical problems to begin with. This view also asserts that there would have been only a few minor mistakes and that a "fix on failure" approach would have been the most efficient and cost-effective way to solve these problems as they occurred.
International Data Corporation estimated that the US might have wasted $40 billion.[145]
Skeptics of the need for a massive effort pointed to the absence of Y2K-related problems occurring before 1 January 2000, even though the 2000 financial year commenced in 1999 in many jurisdictions, and a wide range of forward-looking calculations involved dates in 2000 and later years. Estimates undertaken in the leadup to 2000 suggested that around 25% of all problems should have occurred before 2000.[146] Critics of large-scale remediation argued during 1999 that the absence of significant reported problems in non-compliant small firms was evidence that there had been, and would be, no serious problems needing to be fixed in any firm, and that the scale of the problem had therefore been severely overestimated.[147]
Countries such as South Korea, Italy, and Russia invested little to nothing in Y2K remediation,[128][145] yet had the same negligible Y2K problems as countries that spent enormous sums of money. Western countries anticipated such severe problems in Russia that many issued travel advisories and evacuated non-essential staff.[148]
Critics also cite the lack of Y2K-related problems in schools, many of which undertook little or no remediation effort. By 1 September 1999, only 28% of US schools had achieved compliance for mission critical systems, and a government report predicted that "Y2K failures could very well plague the computers used by schools to manage payrolls, student records, online curricula, and building safety systems".[149]
Similarly, there were few Y2K-related problems in an estimated 1.5 million small businesses that undertook no remediation effort. On 3 January 2000 (the first weekday of the year), the Small Business Administration received an estimated 40 calls from businesses with computer issues, similar to the average. None of the problems were critical.[150]
The 2024 CrowdStrike incident, a global IT system outage, was compared to the Y2K bug by several news outlets, recalling fears surrounding it due to its scale and impact.[151][152]