HTTP

HTTP (the Hypertext Transfer Protocol) is a plain-text, client-server messaging protocol on which the World Wide Web is based. Specifically, HTTP is one of several well-known application protocols that ride on top of the Internet's Transmission Control Protocol (TCP), and it is assigned the well-known default TCP port number 80. Web browsers send HTTP requests to web servers, and web servers send HTTP responses back in reply. The request and response headers are plain text, while the payload they carry may be text or binary information such as image files; header fields declare the payload's type and any encoding so that the receiving end can interpret the data correctly. The same HTTP protocol used by web browsers is also used by search engines to index the World Wide Web, as well as by so-called spam-bots, which scrape web pages to harvest information for malicious purposes.

Its original purpose was the transfer of Hypertext Markup Language (HTML) documents and other page-description resources such as Cascading Style Sheets (CSS). HTTP is a relatively simple protocol; it relies on the Transmission Control Protocol to ensure its traffic is carried, free from errors, over Internet Protocol networks. It works in the same manner whether the users or servers are connected to the public Internet, an intranet, or an extranet. HTTP needs to be supplemented to provide security of the message transfer.[1]

The World Wide Web is more than HTML and HTTP alone. It also includes a wide range of administrative techniques and performance-enhancing methods, such as web caches and content distribution networks.

History

HTTP was created at CERN by Tim Berners-Lee in the late 1980s as a way to share hypertext documents.[2] After 1990, the protocol began to be used by other sites, primarily in the scientific world. Notable developments were the Mosaic web browser, co-written by Marc Andreessen, and the NCSA HTTPd web server, both produced at the National Center for Supercomputing Applications (NCSA).

The first (1990) version of HTTP, called HTTP/0.9, was a simple protocol for raw data transfer across the Internet. HTTP/1.0, as defined by RFC 1945 (1996), improved the protocol by allowing messages to be formatted as self-describing, MIME-like messages, containing metadata about the data being transferred and modifiers on how the request and response should be handled.

Experience with the operational Web showed, however, that HTTP/1.0 did not deal well with real-world needs such as hierarchical proxies, web caches, persistent connections for long-lived sessions, and virtual web servers. There were also enough optional features that a client and server needed to exchange information about their capabilities before the transfer of user information could begin. To meet those needs, HTTP/1.1 was developed.[3]

Technical details

The HTTP protocol follows a client-server model, where the client issues a request for a resource to the server. Requests and responses consist of several headers and, optionally, a body. Resources are identified using a URI (Uniform Resource Identifier).

Example conversation

The following is a typical client-server conversation of the kind carried out every time a page is loaded in a web browser. In this case, the user has entered the address 'http://www.lth.se/' in the browser and submitted it. The browser makes a request to port 80 on the server www.lth.se, and the server responds with the home page. The explanatory comments to the right are not part of the actual conversation.

Request:

GET / HTTP/1.1                          Please transmit your root page, using HTTP 1.1,
Host: www.lth.se                        located at www.lth.se.
User-Agent: Mozilla/5.0 (Linux i686)    I am Mozilla 5.0, running on i686 Linux.
Accept: text/html                       I understand HTML-coded documents.
Accept-Language: sv, en-gb              I prefer pages in Swedish and British English.
Accept-Encoding: gzip, deflate          You may compress your content as gzip or deflate, if you wish.
Accept-Charset: utf-8, ISO-8859-1       I understand text encoded in Unicode and Latin-1.
                                        (Empty line signifies end of request)

Response:

HTTP/1.1 200 OK                         Request is valid; I am complying according to HTTP 1.1 (code 200).
Date: Wed, 26 May 2010 10:33:59 GMT     The time (in GMT) is 10:33 [...].
Server: Apache/2.2.3 (Red Hat)          I am Apache 2.2.3, running on Red Hat Linux.
Content-Length: 54283                   The content you requested is 54,283 bytes long.
Content-Type: text/html; charset=utf-8  Prepare to receive text encoded in Unicode, to be interpreted as HTML.
<html>                                  (The webpage www.lth.se/ follows)
  <head>
    <title>Some page title</title>
...

These are a few additional example responses that could also occur:

Response:

HTTP/1.1 400 Bad Request                Request does not conform to HTTP/1.1, and I did not understand it.
                                        Please do not repeat the request in its current form.

Response:

HTTP/1.1 404 Not Found                  Request conforms with HTTP/1.1, but the resource requested was not found here.
                                        Please do not repeat the request for this resource.

Response:

HTTP/1.1 403 Forbidden                  Request is conformant and valid, but I refuse to comply under HTTP/1.1.
                                        Please do not repeat the request.
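
The same kind of exchange can be reproduced programmatically. The following is a minimal sketch in Python, using only the standard-library socket module; the host www.lth.se is taken from the example above, the headers are abbreviated, and a real server may answer with a redirect or another status rather than the page itself:

import socket

# Build the plain-text request shown above; header lines end in CRLF
# and a blank line marks the end of the headers.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.lth.se\r\n"
    "User-Agent: example-client/0.1\r\n"
    "Accept: text/html\r\n"
    "Accept-Language: sv, en-gb\r\n"
    "Accept-Charset: utf-8, ISO-8859-1\r\n"
    "Connection: close\r\n"   # ask the server to close the connection when done
    "\r\n"
)

with socket.create_connection(("www.lth.se", 80)) as conn:
    conn.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:             # the server has closed the connection
            break
        response += chunk

# The status line and headers are plain text; the body follows the blank line.
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("iso-8859-1"))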

Request methods

Clients can use one of eight request methods:

  • HEAD
  • GET
  • POST
  • PUT
  • DELETE
  • TRACE
  • OPTIONS
  • CONNECT

Typically, only GET, HEAD and POST methods are used in web applications, although protocols like WebDAV make use of others.
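
As a brief illustration of the three common methods, the sketch below uses Python's standard-library http.client against the reserved test host example.com (a placeholder, which may not accept POST meaningfully). A HEAD request returns the same headers as a GET but no body, while a POST sends a body of its own:

import http.client

conn = http.client.HTTPConnection("example.com", 80)

# GET asks for the resource itself; the response carries a body.
conn.request("GET", "/")
resp = conn.getresponse()
print("GET  ->", resp.status, len(resp.read()), "body bytes")

# HEAD returns the same headers as GET, but the server sends no body.
# (Each response must be read fully before the connection is reused.)
conn.request("HEAD", "/")
resp = conn.getresponse()
print("HEAD ->", resp.status, len(resp.read()), "body bytes")

# POST sends a body of its own (here a small form-encoded payload).
conn.request("POST", "/", body="q=test",
             headers={"Content-Type": "application/x-www-form-urlencoded"})
resp = conn.getresponse()
print("POST ->", resp.status)
resp.read()

conn.close()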

Status codes

Server responses include a status header, which informs the client whether the request succeeded. The status header is made up of a "status code" and a "reason phrase" (descriptive text).

Status code classes

Status codes are grouped into classes:

  • 1xx (informational) : Request received, continuing process
  • 2xx (success) : The action was successfully received, understood, and accepted
  • 3xx (redirect) : Further action must be taken in order to complete the request
  • 4xx (client error) : The request contains bad syntax or cannot be fulfilled
  • 5xx (server error) : The server failed to fulfill an apparently valid request.

For example, if the client requests a non-existent document, the status code will be "404 Not Found".

According to the HTTP/1.1 specification (RFC 2616):

HTTP applications are not required to understand the meaning of all registered status codes, though such understanding is obviously desirable. However, applications MUST understand the class of any status code, as indicated by the first digit, and treat any unrecognized response as being equivalent to the x00 status code of that class, with the exception that an unrecognized response MUST NOT be cached.
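
In practice this rule means that a client receiving an unrecognized code such as 499 should handle it as a generic 400. A minimal sketch of that fallback in Python (the helper function and the set of known codes are hypothetical, not taken from any particular library):

# Status codes a hypothetical client understands individually.
KNOWN_CODES = {200, 301, 302, 304, 400, 403, 404, 500, 503}

def effective_status(code: int) -> int:
    """Map an unrecognized status code to the x00 code of its class."""
    if code in KNOWN_CODES:
        return code
    return (code // 100) * 100    # e.g. 499 -> 400, 522 -> 500

print(effective_status(404))  # 404, understood as-is
print(effective_status(499))  # 400, treated as a generic client error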

All status codes

All the codes are described in RFC 2616.

HTTP header and cache management

The HTTP message header includes a number of fields used to facilitate cache management. One of these, ETag (entity tag), is a string-valued field whose value should (weak entity tag) or must (strong entity tag) change whenever the page (or other resource) is modified. This allows browsers or other clients to determine whether or not the entire resource needs to be downloaded again. The HEAD method, which returns the same message header that would be included in the response to a GET request, can be used to determine whether a cached copy of the resource is up to date without actually downloading a new copy. Other elements of the message header can be used, for example, to indicate when a copy should expire (no longer be considered valid), or that the resource should not be cached at all. The latter can be useful when data is generated dynamically (for example, the number of visits to a web site).
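
For illustration, the sketch below (Python standard library only; www.lth.se is taken from the example conversation above and the stored tag value is a placeholder) uses a HEAD request to compare the server's current ETag with a previously saved one, as described in this section:

import http.client

cached_etag = '"abc123"'    # placeholder: the ETag saved with our cached copy

conn = http.client.HTTPConnection("www.lth.se", 80)
conn.request("HEAD", "/")   # returns the headers only; no body is transferred
resp = conn.getresponse()
resp.read()                 # HEAD responses carry no body; this just finishes the exchange

current_etag = resp.getheader("ETag")
if current_etag is not None and current_etag == cached_etag:
    print("Cached copy is still current; no need to download it again.")
else:
    print("Resource changed (or no ETag sent); fetch a fresh copy with GET.")

conn.close()

In practice, clients more often send a conditional GET carrying an If-None-Match header and let the server reply 304 Not Modified, but the HEAD-based check above matches the description given here.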

HTTP server operations

Multiple virtual servers may map onto a single physical computer. For effective operation, servers must be placed on networks engineered to handle their traffic; see port scanning for how an Internet Service Provider may check for servers placed where their traffic can create problems.
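
Name-based virtual hosting of this kind relies on the Host header that HTTP/1.1 made mandatory: requests arriving at the same IP address and port are routed to different sites according to that header. For illustration (the host names below are hypothetical), these two requests could reach the same physical machine yet be answered by two different virtual servers:

GET / HTTP/1.1
Host: www.example-one.org

GET / HTTP/1.1
Host: www.example-two.org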

References

  1. Rescorla, E. (May 2000), HTTP Over TLS, Internet Engineering Task Force, RFC 2818
  2. Berners-Lee, Tim (March 1989), Information Management: A Proposal
  3. Fielding, R. et al. (June 1999), Hypertext Transfer Protocol -- HTTP/1.1, Internet Engineering Task Force, RFC 2616