
Search engine

1990s: the birth of search engines

Internet search engines themselves predate the debut of the Web in December 1990: the WHOIS user search dates back to 1982,[5] and the Knowbot Information Service multi-network user search was first implemented in 1989.[6] The first well-documented search engine that searched content files, namely FTP files, was Archie, which debuted on September 10, 1990.[7]

Prior to September 1993, the World Wide Web was indexed entirely by hand. There was a list of web servers edited by Tim Berners-Lee and hosted on the CERN web server. A snapshot of the list from 1992 still exists,[8] but as more and more web servers came online, the central list could no longer keep up. On the NCSA site, new servers were announced under the heading "What's New!".[9]

The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie Search Engine" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in that series, thus referencing their predecessor.

In the summer of 1993, no search engine yet existed for the web, though numerous specialized catalogs were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis for W3Catalog, the web's first primitive search engine, released on September 2, 1993.[15]

In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine, Aliweb, appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format.

JumpStation (created by Jonathan Fletcher in December 1993[16]) used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing, and hence its searching, was limited to the titles and headings found on the web pages the crawler encountered.

One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word on any web page, which has become the standard for all major search engines since. It was also the first search engine to be widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and grew into a major commercial endeavor.

The first popular search engine on the web was Yahoo! Search.[17] The first product from Yahoo!, founded by Jerry Yang and David Filo in January 1994, was a web directory called Yahoo! Directory. A search function was added in 1995, allowing users to search the Yahoo! Directory.[18][19] It became one of the most popular ways for people to find web pages of interest, but its search function operated on its web directory rather than on full-text copies of web pages.

Soon after, a number of search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Information seekers could also browse the directory instead of doing a keyword-based search.

In 1996, Robin Li developed the RankDex site-scoring algorithm for ranking search engine results pages[20][21][22] and received a US patent for the technology.[23] It was the first search engine that used hyperlinks to measure the quality of the websites it was indexing,[24] predating the very similar algorithm patent filed by Google two years later, in 1998.[25] Larry Page referenced Li's work in some of his US patents for PageRank.[26] Li later used his RankDex technology for the Baidu search engine, which he founded in China and launched in 2000.

In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured search engine in the Netscape browser. There was so much interest that Netscape instead struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite.

Google adopted the idea of selling search terms in 1998 from the small search engine company goto.com. This move had a significant effect on the search engine business, which went from struggling to one of the most profitable businesses on the Internet.

Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s.[30] Several companies entered the market spectacularly, achieving record gains during their initial public offerings. Some took down their public search engines and marketed enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in March 2000.

Around 2000, Google's search engine rose to prominence. The company achieved better results for many searches with an algorithm called PageRank, as explained in the paper Anatomy of a Search Engine written by Sergey Brin and Larry Page, the later founders of Google.[4] This iterative algorithm ranks web pages based on the number and PageRank of other websites and pages that link to them, on the premise that good or desirable pages are linked to more often than others. Larry Page's patent for PageRank cites Robin Li's earlier RankDex patent as an influence.[26][22] Google also maintained a minimalist interface for its search engine. In contrast, many of its competitors embedded the search engine in a web portal. In fact, the Google search engine became so popular that spoof engines such as Mystery Seeker emerged.
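The paragraph above only describes the iterative idea informally, so here is a minimal sketch of PageRank-style power iteration in Python. The toy link graph, damping factor, and iteration count are illustrative assumptions, not Google's actual implementation.

```python
# Minimal PageRank-style sketch (illustrative only, not Google's production algorithm).
# The link graph, damping factor, and iteration count are made-up assumptions.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                       # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:                                  # otherwise split rank among linked pages
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Toy web of three pages: pages that are linked to more often end up with higher scores.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```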

In 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002 and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! then used Google's search engine until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.

Microsoft first launched MSN Search in the fall of 1998, using search results from Inktomi. In early 1999, the site began to display listings from Looksmart blended with results from Inktomi. For a short time in 1999, MSN Search used results from AltaVista instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (named msnbot).

Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology.

As of 2019, active search engine crawlers include those of Google, Petal, Sogou, Baidu, Bing, Gigablast, Mojeek, DuckDuckGo, and Yandex.

Search engines get their information by web crawling from site to site. The "spider" checks for the standard filename robots.txt addressed to it. The robots.txt file contains directives for search spiders, telling them which pages to crawl and which not to. After checking for robots.txt and either finding it or not, the spider sends certain information back to be indexed, depending on many factors such as the titles, page content, JavaScript, Cascading Style Sheets (CSS), headings, or the metadata in HTML meta tags. After a certain number of pages crawled, amount of data indexed, or time spent on the website, the spider stops crawling and moves on. "[N]o web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled only partially."
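As a concrete illustration of the robots.txt check described above, here is a minimal Python sketch using the standard library's robots.txt parser. The URLs and the user-agent name "ExampleBot" are placeholders, not any real crawler's settings.

```python
# Minimal sketch of the robots.txt check a crawler performs before fetching a page.
# The URLs and user-agent name below are placeholders for illustration.
import urllib.robotparser

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()                                  # fetch and parse the site's crawl directives

page = "https://example.com/some/page.html"
if robots.can_fetch("ExampleBot", page):
    print("allowed to crawl:", page)           # a real spider would fetch and index it here
else:
    print("disallowed by robots.txt:", page)
```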

Indexing means associating words and other definable tokens found on web pages with their domain names and HTML-based fields. The associations are stored in a public database made available for web search queries; a query from a user can be a single word, several words, or a sentence. The index helps find information relating to the query as quickly as possible.[32] Some of the techniques for indexing and caching are trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis.
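The indexing step above can be pictured with a toy inverted index, sketched below; the page URLs and text are invented for illustration and vastly simplified compared to a real engine.

```python
# Toy inverted index: maps each token to the set of documents that contain it.
# The page URLs and text are hypothetical examples.
from collections import defaultdict

pages = {
    "https://example.com/a": "web search engines crawl and index pages",
    "https://example.com/b": "an inverted index maps words to pages",
}

index = defaultdict(set)
for url, text in pages.items():
    for token in text.lower().split():
        index[token].add(url)

print(index["index"])   # -> the pages whose text contains the word "index"
```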

Typically, between visits by the spider, the cached version of the page (some or all of the content needed to render it) stored in the search engine's working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can simply act as a web proxy instead; in this case, the page may differ from the search terms indexed.[32] The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful when the actual page has been lost, but this problem is also considered a mild form of linkrot.

High-level architecture of a standard web crawler

Typically, when a user enters a query into a search engine, it is a few keywords.[34] The index already has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that make up the search results list: every page in the entire list must be weighted according to information in the indexes.[32] Then the top search result item requires the lookup, reconstruction, and markup of the snippets showing the context of the keywords matched. This is only part of the processing each search results page requires, and further pages (next to the top) require more of this post-processing.
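To make the weighting-and-snippet step concrete, here is a small, self-contained sketch of query processing: score pages by crude term frequency and cut a snippet around the first matched keyword. The pages and the scoring rule are toy assumptions, far simpler than any real ranking pipeline.

```python
# Sketch of query processing: weight each page by term frequency for the query words,
# then build a small snippet showing the context of the first matched keyword.
# The pages and scoring rule are toy assumptions.
pages = {
    "https://example.com/a": "search engines rank pages and show snippets of pages",
    "https://example.com/b": "snippets show query words in context",
}

def search(query, pages):
    terms = query.lower().split()
    results = []
    for url, text in pages.items():
        words = text.lower().split()
        score = sum(words.count(t) for t in terms)               # crude relevance weight
        if score:
            hit = next(i for i, w in enumerate(words) if w in terms)
            snippet = " ".join(words[max(0, hit - 3): hit + 4])   # keyword in context
            results.append((score, url, snippet))
    return sorted(results, reverse=True)                          # best-weighted pages first

print(search("pages snippets", pages))
```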

Beyond simple keyword lookups, search engines offer their own GUI- or command-driven operators and search parameters to refine the search results. These provide the necessary controls for the user engaged in the feedback loop created by filtering and weighting while refining the results, given the initial pages of the first search results. For example, since 2007 the Google.com search engine has allowed filtering by date by clicking "Show search tools" in the leftmost column of the initial search results page and then selecting the desired date range.[35] It is also possible to weight by date, because each page has a modification time. Most search engines support the use of the Boolean operators AND, OR, and NOT to help end users refine the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search; the engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords.[32] There is also concept-based searching, where the research involves using statistical analysis on pages containing the words or phrases searched for.
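The Boolean operators mentioned above map naturally onto set operations over an inverted index's posting lists. The sketch below uses made-up posting sets purely for illustration.

```python
# Sketch of Boolean query operators as set operations over posting lists.
# The posting sets below are made-up examples.
docs_with = {
    "search":  {"page1", "page2", "page3"},
    "engine":  {"page2", "page3"},
    "history": {"page3"},
}

# search AND engine: documents containing both terms
print(docs_with["search"] & docs_with["engine"])
# search OR history: documents containing either term
print(docs_with["search"] | docs_with["history"])
# engine NOT history: documents with "engine" but without "history"
print(docs_with["engine"] - docs_with["history"])
```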

The usefulness of a search engine depends on the relevance of the result set it returns. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results so as to provide the "best" results first. How a search engine decides which pages are the best matches, and in what order the results should be shown, varies widely from one engine to another.[32] The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing the texts it locates. The latter form relies much more heavily on the computer itself to do the bulk of the work.

Most search engines are commercial ventures supported by advertising revenue, and some of them allow advertisers to have their listings ranked higher in search results for a fee. Search engines that do not accept money for their search results make money by running search-related ads alongside the regular search results. The search engines make money every time someone clicks on one of these ads.[36]

Local search

Local search is the process that optimizes the efforts of local businesses. It focuses on change to make sure all listings are consistent. This is important because many people decide where to go and what to buy based on their searches.[37]

Market

As of January 2022, Google is by far the world's most used search engine, with a market share of 92.01 percent. The world's other most used search engines are Bing, Yahoo!, Baidu, Yandex, and DuckDuckGo.[38]

Russia and East Asia

In Russia, Yandex has a market share of 61.9 percent, compared to Google's 28.3 percent.[39] In China, Baidu is the most popular search engine.[40] South Korea's homegrown search portal, Naver, is used for 70 percent of online searches in the country.[41] Yahoo! Japan and Yahoo! Taiwan are the most popular avenues for Internet searches in Japan and Taiwan, respectively.[42] China is one of the few countries where Google is not among the top three web search engines by market share. Google was previously the top search engine in China, but withdrew after a disagreement with the government over censorship and a cyberattack.[43]

Europe

The market in most EU countries is dominated by Google, except in the Czech Republic where Seznam is a strong competitor.[44]

Search engine Qwant is headquartered in Paris, France, where it has most of its 50 million monthly registered users.

Bias

Although search engines are programmed to rank websites based on some combination of their popularity and relevance, empirical studies indicate various political, economic, and social biases in the information they provide[45][46] and in the underlying assumptions about the technology.[47] These biases can be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can also become more popular in its organic search results) and of political processes (e.g., the removal of search results to comply with local laws).[48] For example, Google will not surface certain neo-Nazi websites in France and Germany, where Holocaust denial is illegal.

Bias can also arise from social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results.[49] The indexing algorithms of major search engines skew toward coverage of U.S.-based sites rather than websites from non-U.S. countries.[46]

Google bombing is one example of an attempt to manipulate search results for political, social, or commercial reasons.

Several scholars have studied the cultural changes triggered by search engines[50] and the representation of certain controversial topics in their results, such as terrorism in Ireland,[51] climate change denial,[52] and conspiracy theories.[53]

Personalized Results and Filter Bubbles

Concerns have been raised that search engines such as Google and Bing provide customized results based on a user's activity history, leading to what Eli Pariser termed in 2011 echo chambers or filter bubbles.[54] The argument is that search engines and social media platforms use algorithms to selectively guess what information a user would like to see, based on information about the user (such as location, past click behavior, and search history). As a result, websites tend to show only information that agrees with the user's past views. According to Pariser, users thus get less exposure to conflicting viewpoints and become intellectually isolated in their own informational bubble. Since this problem was identified, competing search engines such as DuckDuckGo have emerged that try to avoid it by not tracking or "bubbling" users. However, many scholars have questioned Pariser's view and concluded that there is little evidence for the filter bubble.[55][56][57] On the contrary, a number of studies trying to verify the existence of filter bubbles have found only minor levels of personalization in search,[57] that most people encounter a range of views when browsing online, and that Google News tends to promote mainstream established news outlets.[57][58][56]

Religious search engines

The global growth of the Internet and electronic media in the Arab and Muslim world over the past decade has encouraged Islamic adherents in the Middle East and the Asian subcontinent to attempt their own search engines, their own filtered search portals that enable users to perform safe searches. More than the usual safe search filters, these Islamic web portals categorize websites as either "halal" or "haram" based on their interpretation of Islamic law. ImHalal came online in September 2011. Halalgoogling came online in July 2013. These use haram filters on the collections from Google and Bing (and others).[59]

While a lack of investment and the slow pace of technology in the Muslim world have hindered progress and thwarted the success of an Islamic search engine targeting Islamic adherents as its main consumers, projects like Muxlim, a Muslim lifestyle site, did receive millions of dollars from investors such as Rite Internet Ventures, and it also faltered. Other religion-oriented search engines include Jewogle, the Jewish version of Google,[60] and SeekFind.org, which is Christian. SeekFind filters sites that attack or degrade their faith.[61]

Submission

Search engine submission is the process by which a webmaster submits a website directly to a search engine. While submission is sometimes presented as a way to promote a website, it is generally not necessary because the major search engines use web crawlers that will eventually find most websites on the Internet without assistance. Either individual web pages or the entire site can be submitted using a sitemap, but it is normally only necessary to submit the home page, as search engines are able to crawl a well-designed website. There are two remaining reasons to submit a website or web page to a search engine: to add an entirely new website without waiting for the search engine to discover it, and to have a website's record updated after a substantial redesign.

Some search engine submission software not only submits websites to multiple search engines, but also adds links to those websites from its own pages. This could appear helpful in increasing a website's ranking, because external links are one of the most important factors determining a website's ranking. However, John Mueller of Google has stated that this "can lead to a tremendous number of unnatural links for your site," with a negative impact on the site's ranking.[62]

Comparison with Social Bookmarking

See also: Social Media Optimization

In comparison to search engines, a social bookmarking system has several advantages over traditional automated resource location and classification software, such as search engine spiders. All tag-based classification of Internet resources (such as web pages) is done by human beings, who understand the content of the resource, as opposed to software that algorithmically attempts to determine the meaning and quality of a resource. In addition, people can find and bookmark web pages that have not yet been noticed or indexed by web spiders.[63] A social bookmarking system can also rank a resource based on how many times it has been bookmarked by users, which may be a more useful metric for end users than systems that rank resources based on the number of external links pointing to them. However, both types of ranking are vulnerable to fraud (see Gaming the system), and both need technical countermeasures to deal with this.

Technology

The first Internet search engine was Archie, developed in 1990[64] by Alan Emtage, a graduate student at McGill University in Montreal. The author originally wanted to call the program "archives," but had to shorten it to comply with the Unix world standard of assigning programs and files short, cryptic names such as grep, cat, troff, sed, awk, perl, and so on.

The primary method of storing and retrieving files was the File Transfer Protocol (FTP). It was (and still is) a system that specified a common way for computers to exchange files over the Internet. It works like this: an administrator decides to make files available from their computer and sets up a program on it called an FTP server. When someone on the Internet wants to retrieve a file from this computer, they connect to it via another program called an FTP client. Any FTP client program can connect to any FTP server program, as long as both the client and server programs fully follow the specifications of the FTP protocol.
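For readers who want to see the client side of this exchange, here is a minimal sketch using Python's standard ftplib module. The host name is a placeholder; ftplib logs in anonymously when no credentials are given.

```python
# Minimal sketch of an FTP client talking to an FTP server, as described above.
# The host name is a placeholder, not a real server.
from ftplib import FTP

with FTP("ftp.example.org") as ftp:   # connect to the FTP server program
    ftp.login()                       # anonymous login (no credentials supplied)
    print(ftp.nlst())                 # list the file names the server shares
```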

Originally, anyone who wanted to share a file had to set up an FTP server in order to make the file available to others. Later, "anonymous" FTP sites became repositories for files, allowing all users to post and retrieve them.

Even with archive sites, many important files were still scattered across small FTP servers. These files could be located only by the Internet equivalent of word of mouth: somebody would post an e-mail to a message list or a discussion forum announcing the availability of a file.

Archie changed all that. It combined a script-based data gatherer, which fetched site listings of anonymous FTP files, with a regular expression matcher for retrieving file names matching a user query. In other words, Archie's gatherer scoured FTP sites across the Internet and indexed all of the files it found. Its regular expression matcher then provided users with access to its database.[65]
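The two halves of Archie's approach can be sketched in a few lines of Python: a gathered list of file names plus a regular-expression matcher applied to a user's query. The file names and query below are invented for illustration, not Archie's actual data or code.

```python
# Sketch of Archie's approach: a gathered list of file names from anonymous FTP sites,
# plus a regular-expression matcher applied to a user's query.
# The file names and query are invented for illustration.
import re

gathered_files = [
    "pub/gnu/grep-1.2.tar.Z",
    "pub/docs/rfc959.txt",
    "pub/images/logo.gif",
]

query = r"grep.*\.tar"                       # user-supplied regular expression
matches = [f for f in gathered_files if re.search(query, f)]
print(matches)                               # -> ['pub/gnu/grep-1.2.tar.Z']
```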
