Sunday, June 28, 2009

A wire is a single, usually cylindrical, flexible strand or rod of metal. Wires are used to bear mechanical loads and to carry electricity and telecommunications signals. Wire is commonly formed by drawing the metal through a hole in a die or draw plate. Standard sizes are determined by various wire gauges. The term wire is also used more loosely to refer to a bundle of such strands, as in 'multistranded wire', which is more correctly termed a wire rope in mechanics, or a cable in electricity.
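As a concrete illustration of the gauge idea, the short sketch below converts an American Wire Gauge (AWG) number into a nominal diameter using the standard AWG formula, d = 0.005 in * 92^((36 - n)/39); the function name and the example gauges are chosen only for illustration.

    # Sketch: nominal diameter of a solid round wire from its AWG number,
    # using the standard AWG formula d = 0.005 * 92 ** ((36 - n) / 39) inches.
    def awg_diameter_mm(gauge: int) -> float:
        """Return the nominal diameter in millimetres for an AWG gauge number."""
        diameter_inches = 0.005 * 92 ** ((36 - gauge) / 39)
        return diameter_inches * 25.4  # convert inches to millimetres

    if __name__ == "__main__":
        for gauge in (10, 18, 24, 30):  # a few common gauges
            print(f"AWG {gauge}: {awg_diameter_mm(gauge):.3f} mm")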

History
In antiquity, jewellery often contains, in the form of chains and applied decoration, large amounts of wire that is accurately made and which must have been produced by some efficient, if not technically advanced, means. In some cases, strips cut from metal sheet were made into wire by pulling them through perforations in stone beads. This caused the strips to fold round on themselves to form thin tubes. This strip-drawing technique was in use in Egypt by the 2nd Dynasty. From the middle of the 2nd millennium BC most of the gold wires in jewellery are characterized by seam lines that follow a spiral path along the wire. Such twisted strips can be converted into solid round wires by rolling them between flat surfaces or by the strip wire drawing method. Strip and block twist wire manufacturing methods were still in use in Europe in the 7th century AD, but by this time there seems to be some evidence of wires produced by true drawing.
Square and hexagonal wires were possibly made using a swaging technique. In this method a metal rod was struck between grooved metal blocks, or between a grooved punch and a grooved metal anvil. Swaging is of great antiquity, possibly dating to the beginning of the 2nd millennium BC in Egypt and to the Bronze and Iron Ages in Europe, where it was used for torcs and fibulae.
Twisted square section wires are a very common filigree decoration in early Etruscan jewellery.
In about the middle of the 2nd millennium BC a new category of decorative wires was introduced which imitated a line of granules. Perhaps the earliest such wire is the notched wire, which first occurs in the late 3rd to early 2nd millennium BC in Anatolia and occasionally later.
Wire was drawn in England from the medieval period. The wire was used to make wool cards and pins, manufactured goods whose import was prohibited by Edward IV in 1463. The first wire mill in Great Britain was established at Tintern in about 1568 by the founders of the Company of Mineral and Battery Works, who held a monopoly on wire drawing. Apart from their second wire mill at nearby Whitebrook, there were no other wire mills before the second half of the 17th century. Despite the existence of mills, the drawing of wire down to fine sizes continued to be done manually.
Wire is usually drawn of cylindrical form; but it may be made of any desired section by varying the outline of the holes in the draw-plate through which it is passed in the process of manufacture. The draw-plate or die is a piece of hard cast iron or hard steel, or for fine work it may be a diamond or a ruby. The object of using precious stones is to enable the dies to be used for a considerable period without wear enlarging the hole and so producing wire of incorrect diameter. Diamond dies must be rebored when the hole has worn beyond its original diameter, but metal dies are brought down to size again by hammering up the hole and then drifting it out to the correct diameter with a punch.

Uses
Wire has many uses. It forms the raw material of many important manufactures, such as the wire-net industry, wire-cloth making and wire-rope spinning, in which it occupies a place analogous to a textile fiber. Wire-cloth of all degrees of strength and fineness of mesh is used for sifting and screening machinery, for draining paper pulp, for window screens, and for many other purposes. Vast quantities of aluminum, copper, nickel and steel wire are employed for telephone and data cables, and as conductors in electric power transmission and heating. It is in no less demand for fencing, and much is consumed in the construction of suspension bridges, cages, etc. In the manufacture of stringed musical instruments and scientific instruments wire is again largely used. Among its other uses it is sufficient to mention pin and hair-pin making, the needle and fish-hook industries, nail, peg and rivet making, and carding machinery; indeed there are few industries into which it does not enter.
Not all metals and metallic alloys possess the physical properties necessary to make useful wire. The metals must in the first place be ductile and strong in tension, the qualities on which the utility of wire principally depends. The metals suitable for wire, possessing almost equal ductility, are platinum, silver, iron, copper, aluminum and gold; and it is only from these and certain of their alloys with other metals, principally brass and bronze, that wire is prepared. By careful treatment extremely thin wire can be produced. Special-purpose wire is, however, made from other metals (e.g. tungsten wire for light bulb and vacuum tube filaments, because of its high melting temperature). Copper wires can be plated with other metals, such as tin, nickel, and silver, to handle different temperatures.

Monday, June 22, 2009

In computer networks, a proxy server is a server (a computer system or an application program) that acts as a go-between for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy server evaluates the request according to its filtering rules. For example, it may filter traffic by IP address or protocol. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server. In this case, it 'caches' responses from the remote server, and returns subsequent requests for the same content directly.
A proxy server has two purposes:
To keep machines behind it anonymous (mainly for security).
To speed up access to a resource (via caching). It is commonly used to cache web pages from a web server.
A proxy server that passes requests and replies unmodified is usually called a gateway or sometimes tunneling proxy.
A proxy server can be placed on the user's local computer or at various points between the user and the destination servers on the Internet. A reverse proxy is a proxy used as a front end to accelerate and cache in-demand resources (such as a web page).
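To make the request flow described above concrete, here is a minimal sketch of a filtering forward proxy in Python, assuming the client is configured to send its HTTP requests through it; the blocked hostname, listening port and simplified error handling are illustrative assumptions, not a description of any real proxy product.

    # Minimal sketch of a filtering HTTP forward proxy (GET only).
    # The blocked hostname and listening port are illustrative placeholders.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse
    from urllib.request import urlopen

    BLOCKED_HOSTS = {"blocked.example.com"}  # hypothetical filtering rule

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # A proxy-configured client sends the full URL in the request line.
            host = urlparse(self.path).hostname or ""
            if host in BLOCKED_HOSTS:            # evaluate the request against the rules
                self.send_error(403, "Blocked by proxy policy")
                return
            try:
                # Request the resource on behalf of the client.
                with urlopen(self.path, timeout=10) as upstream:
                    body = upstream.read()
                self.send_response(200)          # simplified: real proxies relay status and headers
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            except Exception as exc:
                self.send_error(502, f"Upstream error: {exc}")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()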

Types and functions
Proxy servers implement one or more of the following functions:

Caching proxy server
A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost, while significantly increasing performance. Most ISPs and large businesses have a caching proxy. These machines are built to deliver superb file system performance (often with RAID and journaling) and often run highly tuned TCP implementations. Caching proxies were the first kind of proxy server.
Some poorly-implemented caching proxies have had downsides (e.g., an inability to use user authentication). Some problems are described in RFC 3143 (Known HTTP Proxy/Caching Problems).
Another important use of the proxy server is to reduce hardware cost. An organization may have many systems on the same network or under the control of a single server, making an individual Internet connection for each system impractical. In such a case, the individual systems can be connected to one proxy server, and the proxy server connected to the main server.
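The caching behaviour described in this subsection can be sketched in a few lines: responses are stored in memory keyed by URL, so a repeated request is answered locally instead of consuming upstream bandwidth. The time-to-live value and function name below are assumptions for illustration; real caching proxies follow the HTTP caching rules rather than a fixed TTL.

    # Sketch: an in-memory response cache keyed by URL with a simple time-to-live.
    import time
    from urllib.request import urlopen

    CACHE_TTL_SECONDS = 300   # assumed freshness window for this sketch
    _cache = {}               # url -> (timestamp, body)

    def fetch_with_cache(url: str) -> bytes:
        """Return the response body, serving it from the cache while still fresh."""
        now = time.time()
        entry = _cache.get(url)
        if entry and now - entry[0] < CACHE_TTL_SECONDS:
            return entry[1]                      # cache hit: no upstream traffic
        with urlopen(url, timeout=10) as resp:   # cache miss: fetch from the origin
            body = resp.read()
        _cache[url] = (now, body)
        return body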

Web proxy
A proxy that focuses on WWW traffic is called a "web proxy". The most common use of a web proxy is to serve as a web cache. Most proxy programs (e.g. Squid) provide a means to deny access to certain URLs in a blacklist, thus providing content filtering. This is often used in a corporate, educational or library environment, and anywhere else where content filtering is desired. Some web proxies reformat web pages for a specific purpose or audience (e.g., cell phones and PDAs).
AOL dialup customers used to have their requests routed through an extensible proxy that 'thinned' or reduced the detail in JPEG pictures. This sped up performance but caused problems, either when more resolution was needed or when the thinning program produced incorrect results. This is why in the early days of the web many web pages would contain a link saying "AOL Users Click Here" to bypass the web proxy and to avoid the bugs in the thinning software.

Content-filtering web proxy
Further information: Content-control software
A content-filtering web proxy server provides administrative control over the content that may be relayed through the proxy. It is commonly used in commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to an acceptable use policy. However, in some cases users who disagree with the policy can find ways to bypass the proxy (particularly a software-based one) by downloading and running their own proxy.
Some common methods used for content filtering include: URL or DNS blacklists, URL regex filtering, MIME filtering, or content keyword filtering. Some products have been known to employ content analysis techniques to look for traits commonly used by certain types of content providers.
A content-filtering proxy will often support user authentication to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users or to monitor bandwidth usage statistics. It may also communicate with daemon-based and/or ICAP-based antivirus software to provide protection against viruses and other malware by scanning incoming content in real time before it enters the network.
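As a rough sketch of how the filtering methods listed above might be combined, the function below decides whether a response may be relayed; the blacklist entries, regular expressions, MIME types and keywords are invented placeholders, not recommendations.

    # Sketch: combining URL blacklist, URL regex, MIME-type and keyword filtering.
    import re
    from urllib.parse import urlparse

    URL_BLACKLIST = {"ads.example.net"}                 # placeholder entries
    URL_REGEXES = [re.compile(r"\.exe($|\?)")]
    BLOCKED_MIME_TYPES = {"application/x-msdownload"}
    BLOCKED_KEYWORDS = ("casino", "phishing")

    def may_relay(url: str, mime_type: str, body_text: str) -> bool:
        """Return True if the content passes every filtering check."""
        host = urlparse(url).hostname or ""
        if host in URL_BLACKLIST:                                  # URL/DNS blacklist
            return False
        if any(rx.search(url) for rx in URL_REGEXES):              # URL regex filtering
            return False
        if mime_type.lower() in BLOCKED_MIME_TYPES:                # MIME filtering
            return False
        if any(word in body_text.lower() for word in BLOCKED_KEYWORDS):  # keyword filtering
            return False
        return True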

Anonymizing proxy server
An anonymizing proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. There are different varieties of anonymizers. One of the more common variations is the open proxy. Because they are typically difficult to track, open proxies are especially useful to those seeking online anonymity, from political dissidents to computer criminals. Some users are merely interested in anonymity on principle, for instance to exercise constitutional rights such as freedom of speech. The server receives requests from the anonymizing proxy server, and thus does not receive information about the end user's address. However, the requests are not anonymous to the anonymizing proxy server, and so a degree of trust must exist between that server and the user. Many of these proxies are funded through advertising served to the user.
Access control: Some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage to individuals.
Some anonymizing proxy servers may forward data packets with header lines such as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the IP address of the client. Other anonymizing proxy servers, known as elite or high anonymity proxies, only include the REMOTE_ADDR header with the IP address of the proxy server, making it appear that the proxy server is the client. A website could still suspect a proxy is being used if the client sends packets which include a cookie from a previous visit that did not use the high anonymity proxy server. Clearing cookies, and possibly the cache, would solve this problem.
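The header behaviour just described can be made concrete with a small sketch: depending on its anonymity level, a proxy either forwards the client address, announces only itself, or adds no proxy-revealing headers at all. The level names and header values are illustrative assumptions rather than a standard.

    # Sketch: how a proxy might set forwarding headers at different anonymity levels.
    def outgoing_headers(level: str, client_ip: str, base_headers: dict) -> dict:
        headers = dict(base_headers)
        if level == "transparent":
            headers["X-Forwarded-For"] = client_ip   # reveals the client's address
            headers["Via"] = "1.1 example-proxy"
        elif level == "anonymous":
            headers["Via"] = "1.1 example-proxy"     # admits a proxy but hides the client
        elif level == "elite":
            pass                                     # adds nothing that reveals a proxy
        return headers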

Hostile proxy
Proxies can also be installed in order to eavesdrop upon the dataflow between client machines and the web. All accessed pages, as well as all forms submitted, can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL.

Intercepting proxy server
An intercepting proxy (also known as a "transparent proxy") combines a proxy server with a gateway. Connections made by client browsers through the gateway are redirected through the proxy without client-side configuration (or often knowledge).
Intercepting proxies are commonly used in businesses to prevent avoidance of acceptable use policy, and to ease administrative burden, since no client browser configuration is required.
It is often possible to detect the use of an intercepting proxy server by comparing the external IP address to the address seen by an external web server, or by examining the HTTP headers on the server side.
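On the server side, a rough header-based check along these lines might look like the sketch below; the header names are common conventions rather than an exhaustive or authoritative list, so this is only a heuristic.

    # Sketch: a server-side heuristic that flags requests which appear to have
    # passed through a proxy, based on commonly (but not always) added headers.
    PROXY_HEADER_NAMES = ("via", "x-forwarded-for", "forwarded", "x-proxy-id")

    def looks_proxied(request_headers: dict) -> bool:
        """Return True if any typical proxy-added header is present."""
        present = {name.lower() for name in request_headers}
        return any(name in present for name in PROXY_HEADER_NAMES)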

Transparent and non-transparent proxy server
The term "transparent proxy" is most often used incorrectly to mean "intercepting proxy" (because the client does not need to configure a proxy and cannot directly detect that its requests are being proxied). Transparent proxies can be implemented using Cisco's WCCP (Web Cache Control Protocol). This proprietary protocol resides on the router and is configured from the cache, allowing the cache to determine what ports and traffic is sent to it via transparent redirection from the router. This redirection can occur in one of two ways: GRE Tunneling (OSI Layer 3) or MAC rewrites (OSI Layer 2).
However, RFC 2616 (Hypertext Transfer Protocol—HTTP/1.1) offers different definitions:
"A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification".
"A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering".

Forced proxy
The term "forced proxy" is ambiguous. It means both "intercepting proxy" (because it filters all traffic on the only available gateway to the Internet) and its exact opposite, "non-intercepting proxy" (because the user is forced to configure a proxy in order to access the Internet).
Forced proxy operation is sometimes necessary due to issues with the interception of TCP connections and HTTP. For instance, interception of HTTP requests can affect the usability of a proxy cache, and can greatly affect certain authentication mechanisms. This is primarily because the client thinks it is talking to a server, and so request headers required by a proxy cannot be distinguished from headers that may be required by an upstream server (especially authorization headers). Also, the HTTP specification prohibits caching of responses where the request contained an authorization header.

Suffix proxy
A suffix proxy server allows a user to access web content by appending the name of the proxy server to the URL of the requested content (e.g. "en.wikipedia.org.6a.nl").
Suffix proxy servers are easier to use than regular proxy servers. The concept appeared in 2003 in the form of IPv6Gate and in 2004 in the form of the Coral Content Distribution Network, but the term suffix proxy was only coined in October 2008 by "6a.nl".
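The suffix scheme is essentially a hostname rewrite: the proxy's own domain is appended to the origin hostname, and the proxy strips it off again to find the real origin. A minimal sketch, assuming "6a.nl" (from the example above) as the proxy's domain:

    # Sketch: mapping between an origin hostname and its suffix-proxied form.
    PROXY_SUFFIX = "6a.nl"   # assumed proxy domain, taken from the example above

    def to_proxied(host: str) -> str:
        """e.g. 'en.wikipedia.org' -> 'en.wikipedia.org.6a.nl'."""
        return f"{host}.{PROXY_SUFFIX}"

    def to_origin(proxied_host: str) -> str:
        """e.g. 'en.wikipedia.org.6a.nl' -> 'en.wikipedia.org'."""
        suffix = "." + PROXY_SUFFIX
        if proxied_host.endswith(suffix):
            return proxied_host[:-len(suffix)]
        return proxied_host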

Open proxy server
Main article: Open proxy
Because proxies might be used for abuse, system administrators have developed a number of ways to refuse service to open proxies. Many IRC networks automatically test client systems for known types of open proxy. Likewise, an email server may be configured to automatically test e-mail senders for open proxies.
Groups of IRC and electronic mail operators run DNSBLs publishing lists of the IP addresses of known open proxies, such as AHBL, CBL, NJABL, and SORBS.
The ethics of automatically testing clients for open proxies are controversial. Some experts, such as Vernon Schryver, consider such testing to be equivalent to an attacker portscanning the client host. Others consider the client to have solicited the scan by connecting to a server whose terms of service include testing.

Reverse proxy server
Main article: Reverse proxy
A reverse proxy is a proxy server that is installed in front of one or more web servers. All traffic coming from the Internet with a destination of one of those web servers goes through the proxy server. There are several reasons for installing reverse proxy servers:
Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware. See Secure Sockets Layer. Furthermore, a host can provide a single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts; removing the need for a separate SSL Server Certificate for each host, with the downside that all hosts behind the SSL proxy have to share a common DNS name or IP address for SSL connections.
Load balancing: the reverse proxy can distribute the load to several web servers, each serving its own application area (see the sketch after this list). In such a case, the reverse proxy may need to rewrite the URLs in each web page (translating externally known URLs to internal locations).
Serve/cache static content: A reverse proxy can offload the web servers by caching static content like pictures and other static graphical content.
Compression: the proxy server can optimize and compress the content to speed up the load time.
Spoon feeding: reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
Security: the proxy server is an additional layer of defense and can protect against some OS- and web-server-specific attacks. However, it does not provide any protection against attacks on the web application or service itself, which is generally considered the larger threat.
Extranet Publishing: a reverse proxy server facing the Internet can be used to communicate to a firewalled server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of your infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.
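To make the load-balancing role listed above concrete, the sketch below shows a reverse proxy choosing a backend web server round-robin and forwarding the request to it. The backend addresses and port are placeholders; a real deployment would also handle health checks, header rewriting and non-GET methods.

    # Sketch: the load-balancing role of a reverse proxy. Incoming requests are
    # forwarded to one of several internal web servers, chosen round-robin.
    import itertools
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    BACKENDS = itertools.cycle([
        "http://10.0.0.11:8000",   # placeholder internal web servers
        "http://10.0.0.12:8000",
    ])

    class ReverseProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = next(BACKENDS)              # round-robin choice
            try:
                with urlopen(backend + self.path, timeout=10) as upstream:
                    body = upstream.read()
                self.send_response(200)           # simplified: real proxies relay status and headers
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            except Exception as exc:
                self.send_error(502, f"Backend error: {exc}")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ReverseProxyHandler).serve_forever()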

Monday, June 15, 2009

The Washington Metro (officially Metrorail but commonly referred to as just Metro) is the rapid transit system in Washington, D.C. and its surrounding suburbs. The system is administered by the Washington Metropolitan Area Transit Authority (WMATA). In Maryland, Metro provides service to Montgomery County and Prince George's County. In Virginia, service extends to Fairfax County, Arlington County, and the City of Alexandria. Since opening in 1976, the Metrorail network has grown to include five lines, 86 stations, and 106.3 miles (171.1 km) of track.
Metrorail is the second-busiest rapid transit system in the United States, in number of passenger trips, after the New York City Subway. There were 215.3 million trips, or 727,684 trips per weekday, on Metrorail in fiscal year 2008. In June 2008, Metrorail set a new monthly ridership record with 19,729,641 trips, or 798,456 per weekday. Fares vary based on the distance traveled and the time of day. Riders enter and exit the system using a stored-value card in the form of a paper magnetic stripe farecard or a proximity card known as SmarTrip.
Metrorail stations were designed by Chicago architect Harry Weese, and are an example of late-20th century modern architecture. With their heavy use of concrete and repetitive design motifs, Metro stations also display aspects of brutalist design. In 2007, the design of the Metro's vaulted-ceiling stations was voted number 106 on the American Institute of Architects' list of America's Favorite Architecture.

History

During the 1960s, there were plans for a massive freeway system in Washington. However, opposition to this freeway system grew. Harland Bartholomew, who chaired the National Capital Planning Commission, thought that a rail transit system would never be self-sufficient because of low density land uses and general transit ridership decline. Finally, a mixed concept of a Capital Beltway system along with rail line radials was agreed upon. The Beltway received full funding; monies for the ambitious Inner Loop Freeway system were partially reallocated toward construction of the Metro system.
In 1960, the federal government created the National Capital Transportation Agency to develop a rapid rail system. In 1966, a bill creating WMATA was passed by the federal government, the District of Columbia, Virginia, and Maryland, with planning power for the system being transferred to it from the NCTA.
WMATA approved plans for a 98-mile (158 km) regional system in 1968, and construction of the Metro began in 1969, with groundbreaking on December 9. The system opened March 27, 1976, with 4.6 miles (7 kilometers) available on the Red Line with five stations from Rhode Island Avenue to Farragut North, all in the District of Columbia. Arlington County, Virginia was linked to the system on July 1, 1976; Montgomery County, Maryland on February 6, 1978; Prince George's County, Maryland on November 20, 1978; and Fairfax County, Virginia and Alexandria, Virginia on December 17, 1983.
The final 103-mile (166 km), 83 station system was completed with the opening of the Green Line segment to Branch Avenue on January 13, 2001. This did not mean the end of the growth of the system, however: a 3.22-mile (5.18 km) extension of the Blue Line to Largo Town Center and Morgan Boulevard stations opened on December 18, 2004. The first in-fill station (New York Ave-Florida Ave-Gallaudet U on the Red Line between Union Station and Rhode Island Ave-Brentwood) opened November 20, 2004, and planning is underway for an extension to Dulles Airport.
Metro system construction required billions of federal dollars, originally provided by Congress under the authority of the National Capital Transportation Act of 1969 (Public Law 91-143). This act was subsequently amended on January 3, 1980 by Public Law 96-184, "The National Capital Transportation Amendment of 1979" (also known as the Stark-Harris Act), which authorized additional funding in the amount of $1.7 billion to permit the completion of 89.5 miles (144.0 km) of the Metrorail system as provided under the terms of a full funding grant agreement executed with WMATA in July 1986. On November 15, 1990, Public Law 101-551, "The National Capital Transportation Amendments of 1990", authorized spending of an additional $1.3 billion in federal funds to finance construction of the remaining 13.5 miles (21.7 km) of the 103-mile (166 km) system, completed via the execution of full funding grant agreements.


Metrorail network


The rail network is designed according to a spoke-hub distribution paradigm, with rail lines running between downtown Washington and its nearby suburbs. The system makes extensive use of interlining (i.e., running more than one service on the same track). There are five operating lines and one line under construction.


There are currently 40 stations in the District of Columbia, 14 in Prince George's County, 12 in Montgomery County, 11 in Arlington County, 6 in Fairfax County, and 3 in the City of Alexandria. When completed, the Silver Line will add 11 new stations to the system, 8 in Fairfax County and 3 in Loudoun County, Virginia.
About 50 miles (80 km) of Metro's track is underground, as are 47 of the system's 86 stations. Track runs underground mostly within the District and high-density suburbs. Surface track accounts for about 46 miles (74 km) of the system's total, and aerial track makes up 9 miles (14 km). At 196 feet (60 m) below the surface, the Forest Glen station on the Red Line is the deepest in the system; it has no escalators, and high-speed elevators take about 20 seconds to travel from street level to the station platform. Wheaton station, next to Forest Glen on the Red Line, has the second-longest continuous escalator in the world, the longest in the Western Hemisphere, at 230 feet (70 m).[1] The Rosslyn station is the deepest station on the Orange/Blue Line, at 97 feet (30 m) below street level. The station features the third-longest continuous escalator in the world at 205 feet (62 m); an escalator ride between the street level and the mezzanine level takes nearly two minutes.
The system is not centered on any single station, but Metro Center is located at the intersection of the Red, Orange, and Blue Lines, the three busiest lines in the system. The station is also the location of WMATA's main sales office. Metro has designated five other "core stations" that each have high passenger volume, including:[15] Gallery Place–Chinatown, transfer station for the Red, Green, and Yellow Lines; L'Enfant Plaza, transfer station for the Orange, Blue, Green, and Yellow Lines; Union Station, the busiest station by passenger boardings; Farragut North; and Farragut West. In order to deal with the high number of passengers in transfer stations, Metro is studying the possibility of building pedestrian connections between nearby core transfer stations. For example, a 750-foot (230 m) passage between Metro Center and Gallery Place stations would allow passengers to transfer between the Orange/Blue and Yellow/Green Lines without going one stop on the Red Line. Another tunnel between Farragut West and Farragut North stations would allow transfers between the Red and Orange/Blue lines, decreasing transfer demand at Metro Center by an estimated 11%.
Metro runs special service patterns on holidays and when events in Washington may require additional rail service. Independence Day activities require Metro to adjust service in order to provide extra capacity to and from the National Mall. WMATA makes similar adjustments during other events, such as presidential inaugurations. Metro has altered service and used some stations as entrances or exits only to help manage congestion.

Wednesday, June 10, 2009

The World Trade Center (sometimes referred to as WTC or Twin Towers) was a complex in Lower Manhattan whose seven buildings were destroyed in 2001 in the September 11 attacks. The site is currently being rebuilt with six new skyscrapers and a memorial to the casualties of the attacks.
The original World Trade Center was designed by Minoru Yamasaki in the early 1960s using a tube-frame structural design for the twin 110-story towers. In gaining approval for the project, the Port Authority of New York and New Jersey agreed to take over the Hudson & Manhattan Railroad, which became the Port Authority Trans-Hudson (PATH). Groundbreaking for the World Trade Center took place on August 5, 1966. The North Tower (1) was completed in December 1970 and the South Tower (2) was finished in July 1971. Construction of the World Trade Center involved excavating a large amount of material, which was used in making Battery Park City on the west side of Lower Manhattan.
The complex was located in the heart of New York City's downtown financial district and contained 13.4 million square feet (1.24 million m2) of office space. The Windows on the World restaurant was located on the 106th and 107th floors of the North Tower, while the Top of the World observation deck was located on the 107th floor of the South Tower. Other World Trade Center buildings included the Marriott World Trade Center; 6 World Trade Center, which housed the United States Customs; and 7 World Trade Center, which was built in the 1980s. The World Trade Center experienced a fire on February 13, 1975 and a bombing on February 26, 1993. In 1998, the Port Authority decided to privatize the World Trade Center, leasing the buildings to a private company to manage, and awarded the lease to Silverstein Properties in July 2001.
On the morning of September 11, 2001, al-Qaeda-affiliated hijackers flew two Boeing 767 jets into the complex, one into each tower, in a coordinated suicide attack. After burning for 56 minutes, the South Tower (2) collapsed, followed a half-hour later by the North Tower (1); the attacks on the World Trade Center resulted in 2,750 deaths. 7 World Trade Center collapsed later in the day, and the other buildings, though they did not collapse, were damaged beyond repair and had to be demolished. The process of cleanup and recovery at the World Trade Center site took eight months. The first new building at the site was 7 World Trade Center, which opened in May 2006. The Lower Manhattan Development Corporation (LMDC), established in November 2001 to oversee the rebuilding process, organized competitions to select a site plan and memorial design. Memory Foundations, designed by Daniel Libeskind, was selected as the master plan; it included the 1,776-foot (541 m) 1 World Trade Center, three office towers along Church Street, and a memorial designed by Michael Arad.




Planning and construction
The idea of establishing a World Trade Center in New York City was first proposed in 1946. The New York State Legislature passed a bill authorizing New York Governor Thomas E. Dewey to begin developing plans for the project, but the plans were put on hold in 1949. During the late 1940s and 1950s, economic growth in New York City was concentrated in Midtown Manhattan, while Lower Manhattan was left out. To help stimulate urban renewal, David Rockefeller suggested that the Port Authority build a World Trade Center in Lower Manhattan.
Initial plans, made public in 1961, identified a site along the East River for the World Trade Center. As a bi-state agency, the Port Authority required approval from the governors of both New York and New Jersey in order to undertake new projects. New Jersey Governor Robert B. Meyner objected to New York getting a $335 million project. Toward the end of 1961, negotiations with outgoing New Jersey Governor Meyner reached a stalemate.
At the time, ridership on New Jersey's Hudson and Manhattan Railroad (H&M) had declined substantially, from a high of 113 million riders in 1927 to 26 million in 1958, after new automobile tunnels and bridges had opened across the Hudson River. In a December 1961 meeting between Port Authority director Austin J. Tobin and newly elected New Jersey Governor Richard J. Hughes, the Port Authority offered to take over the Hudson & Manhattan Railroad and turn it into the Port Authority Trans-Hudson (PATH). The Port Authority also decided to move the World Trade Center project to the Hudson Terminal building site on the west side of Lower Manhattan, a more convenient location for New Jersey commuters arriving via PATH. With the new location and the Port Authority's acquisition of the H&M Railroad, New Jersey agreed to support the World Trade Center project.
Approval was also needed from New York City Mayor John Lindsay and the New York City Council. Disagreements with the city centered on tax issues. On August 3, 1966, an agreement was reached under which the Port Authority would make annual payments to the City in lieu of taxes for the portion of the World Trade Center leased to private tenants. In subsequent years, the payments would rise as the real estate tax rate increased.




Structural design
The structural engineering firm Worthington, Skilling, Helle & Jackson worked to implement Yamasaki's design, developing the tube-frame structural system used in the twin towers. The Port Authority's Engineering Department served as foundation engineers, Joseph R. Loring & Associates as electrical engineers, and Jaros, Baum & Bolles as mechanical engineers. Tishman Realty & Construction Company was the general contractor on the World Trade Center project. Guy F. Tozzoli, director of the World Trade Department at the Port Authority, and Rino M. Monti, the Port Authority's Chief Engineer, oversaw the project. As an interstate agency, the Port Authority was not subject to the local laws and regulations of the City of New York, including building codes. Nonetheless, the structural engineers of the World Trade Center ended up following draft versions of the new 1968 building codes.
The tube-frame design, earlier introduced by Fazlur Khan, was a new approach that allowed open floor plans rather than columns distributed throughout the interior to support building loads, as had traditionally been done. The World Trade Center towers used high-strength, load-bearing perimeter steel columns, called Vierendeel trusses, that were spaced closely together to form a strong, rigid wall structure, supporting virtually all lateral loads such as wind loads and sharing the gravity load with the core columns. The perimeter structure, containing 59 columns per side, was constructed with extensive use of prefabricated modular pieces, each consisting of three columns, three stories tall, connected by spandrel plates. The spandrel plates were welded to the columns to create the modular pieces off-site at the fabrication shop. Adjacent modules were bolted together, with the splices occurring at mid-span of the columns and spandrels. The spandrel plates were located at each floor, transmitting shear stress between columns and allowing them to work together in resisting lateral loads. The joints between modules were staggered vertically, so the column splices between adjacent modules were not at the same floor.
The core of the towers housed the elevator and utility shafts, restrooms, three stairwells, and other support spaces. The core of each tower was a rectangular area 87 by 135 feet (27 by 41 m) and contained 47 steel columns running from the bedrock to the top of the tower. The large, column-free space between the perimeter and core was bridged by prefabricated floor trusses. The floors supported their own weight as well as live loads, providing lateral stability to the exterior walls and distributing wind loads among the exterior walls. The floors consisted of 4-inch (10 cm) thick lightweight concrete slabs laid on a fluted steel deck. A grid of lightweight bridging trusses and main trusses supported the floors. The trusses connected to the perimeter at alternate columns and were on 6-foot-8-inch (2.03 m) centers. The top chords of the trusses were bolted to seats welded to the spandrels on the exterior side and to a channel welded to the core columns on the interior side. The floors were connected to the perimeter spandrel plates with viscoelastic dampers, which helped reduce the amount of sway felt by building occupants. The trusses supported a 4-inch (100 mm) thick lightweight concrete floor slab, with shear connections for composite action.
Hat trusses (or "outrigger trusses") located from the 107th floor to the top of the buildings were designed to support a tall communication antenna on top of each building. Only 1 WTC (the north tower) actually had an antenna fitted; it was added in 1978. The truss system consisted of six trusses along the long axis of the core and four along the short axis. This truss system allowed some load redistribution between the perimeter and core columns and supported the transmission tower.
The tube-frame design, using steel core and perimeter columns protected with sprayed-on fire-resistant material, created a relatively lightweight structure that would sway more in response to the wind than traditional structures such as the Empire State Building, which have thick, heavy masonry fireproofing around their steel structural elements. During the design process, wind tunnel tests were done to establish the design wind pressures that the World Trade Center towers could be subjected to and the structural response to those forces. Experiments were also done to evaluate how much sway occupants could comfortably tolerate; however, many subjects experienced dizziness and other ill effects. One of the chief engineers, Leslie Robertson, worked with Canadian engineer Alan G. Davenport to develop viscoelastic dampers to absorb some of the sway. These viscoelastic dampers, used throughout the structures at the joints between floor trusses and perimeter columns, along with some other structural modifications, reduced the building sway to an acceptable level.


Friday, June 5, 2009

Read-only memory (usually known by its acronym, ROM) is a class of storage media used in computers and other electronic devices. Because data stored in ROM cannot be modified (at least not very quickly or easily), it is mainly used to distribute firmware (software that is very closely tied to specific hardware, and unlikely to require frequent updates).
In its strictest sense, ROM refers only to mask ROM (the oldest type of solid state ROM), which is fabricated with the desired data permanently stored in it, and thus can never be modified. However, more modern types such as EPROM and flash EEPROM can be erased and re-programmed multiple times; they are still described as "read-only memory" (ROM) because the reprogramming process is generally infrequent, comparatively slow, and often does not permit random access writes to individual memory locations. Despite the simplicity of mask ROM, economies of scale and field-programmability often make reprogrammable technologies more flexible and inexpensive, so mask ROM is rarely used in new products as of 2007.

History
The simplest type of solid state ROM is as old as semiconductor technology itself. Combinational logic gates can be joined manually to map n-bit address input onto arbitrary values of m-bit data output (a look-up table). With the invention of the integrated circuit came mask ROM. Mask ROM consists of a grid of word lines (the address input) and bit lines (the data output), selectively joined together with transistor switches, and can represent an arbitrary look-up table with a regular physical layout and predictable propagation delay.
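The look-up-table view of ROM described above can be modelled directly: an n-bit address simply indexes a fixed table of m-bit words. In the sketch below a 3-bit address selects one of eight 8-bit words; the stored values are arbitrary example data.

    # Sketch: a mask ROM modelled as a fixed look-up table. A 3-bit address
    # selects one of 8 pre-programmed 8-bit words; contents are arbitrary examples.
    ROM_CONTENTS = (0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07)

    def rom_read(address: int) -> int:
        """Return the 8-bit word at a 3-bit address; the contents cannot be changed."""
        if not 0 <= address < len(ROM_CONTENTS):
            raise ValueError("address out of range")
        return ROM_CONTENTS[address]

    print(f"ROM[5] = 0x{rom_read(5):02X}")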
In mask ROM, the data is physically encoded in the circuit, so it can only be programmed during fabrication. This leads to a number of serious disadvantages:
1. It is only economical to buy mask ROM in large quantities, since users must contract with a foundry to produce a custom design.
2. The turnaround time between completing the design for a mask ROM and receiving the finished product is long, for the same reason.
3. Mask ROM is impractical for R&D work, since designers frequently need to modify the contents of memory as they refine a design.
4. If a product is shipped with faulty mask ROM, the only way to fix it is to recall the product and physically replace the ROM.
Subsequent developments have addressed these shortcomings. PROM, invented in 1956, allowed users to program its contents exactly once by physically altering its structure with the application of high-voltage pulses. This addressed problems 1 and 2 above, since a company can simply order a large batch of fresh PROM chips and program them with the desired contents at its designers' convenience. The 1971 invention of EPROM essentially solved problem 3, since EPROM (unlike PROM) can be repeatedly reset to its unprogrammed state by exposure to strong ultraviolet light. EEPROM, invented in 1983, went a long way to solving problem 4, since an EEPROM can be programmed in-place if the containing device provides a means to receive the program contents from an external source (e.g. a personal computer via a serial cable). Flash memory, invented at Toshiba in the mid-1980s, and commercialized in the early 1990s, is a form of EEPROM that makes very efficient use of chip area and can be erased and reprogrammed thousands of times without damage.
All of these technologies improved the flexibility of ROM, but at a significant cost-per-chip, so that in large quantities mask ROM would remain an economical choice for many years. (Decreasing cost of reprogrammable devices had almost eliminated the market for mask ROM by the year 2000.) Furthermore, despite the fact that newer technologies were increasingly less "read-only," most were envisioned only as replacements for the traditional use of mask ROM.


Use of ROM for program storage
Every stored-program computer requires some form of non-volatile (or erasable) storage to store the initial program that runs when the computer is powered on or otherwise begins execution (a process known as bootstrapping, often abbreviated to "booting" or "booting up"). Likewise, every non-trivial computer requires some form of mutable memory to record changes in its state as it executes.
Forms of read-only memory were employed as non-volatile storage for programs in most early stored-program computers, such as ENIAC after 1948 (until then it was not a stored-program computer, as every program had to be manually wired into the machine, which could take days to weeks). Read-only memory was simpler to implement, since it required only a mechanism to read stored values, not to change them in place, and thus could be implemented with very crude electromechanical devices (see historical examples below). With integrated circuits, a writable memory cell needs a latch (on the order of 5 to 20 transistors) to retain its contents, while a ROM cell might consist of the absence (logical 0) or presence (logical 1) of a single transistor connecting a bit line to a word line.[2] Consequently, ROM could be implemented at a lower cost-per-bit than RAM for many years.
Most home computers of the 1980s stored a BASIC interpreter or operating system in ROM, as other forms of non-volatile storage such as magnetic disk drives were too expensive. For example, the Commodore 64 included 64 KiB of RAM and 20 KiB of ROM that contained a BASIC interpreter and the "KERNAL" (sic) of its operating system. Later home or office computers such as the IBM PC XT often included magnetic disk drives and larger amounts of RAM, allowing them to load their operating systems from disk into RAM, with only a minimal hardware initialization core and bootloader remaining in ROM (known as the BIOS in IBM-compatible computers). This arrangement allowed for a more complex and easily upgradeable operating system.
In modern PCs, "ROM" (or flash) is used to store the basic bootstrapping firmware for the main processor, as well as the various firmware needed to internally control self-contained devices such as graphics cards, hard disks, DVD drives, and TFT screens in the system. Today, many of these "read-only" memories – especially the BIOS – are often replaced with flash memory (see below), to permit in-place reprogramming should the need for a firmware upgrade arise. However, simple and mature sub-systems (such as the keyboard or some communication controllers in the ICs on the main board) may employ mask ROM or OTP (one-time programmable) memory.

Types of ROMs

Semiconductor based
Classic mask-programmed ROM chips are integrated circuits that physically encode the data to be stored, and thus it is impossible to change their contents after fabrication. Other types of non-volatile solid-state memory permit some degree of modification:
Programmable read-only memory (PROM), or one-time programmable ROM (OTP), can be written to or programmed via a special device called a PROM programmer. Typically, this device uses high voltages to permanently destroy or create internal links (fuses or antifuses) within the chip. Consequently, a PROM can only be programmed once.
Erasable programmable read-only memory (EPROM) can be erased by exposure to strong ultraviolet light (typically for 10 minutes or longer), then rewritten with a process that again requires application of higher than usual voltage. Repeated exposure to UV light will eventually wear out an EPROM, but the endurance of most EPROM chips exceeds 1000 cycles of erasing and reprogramming. EPROM chip packages can often be identified by the prominent quartz "window" which allows UV light to enter. After programming, the window is typically covered with a label to prevent accidental erasure. Some EPROM chips are factory-erased before they are packaged, and include no window; these are effectively PROM.
Electrically erasable programmable read-only memory (EEPROM) is based on a similar semiconductor structure to EPROM, but allows its entire contents (or selected banks) to be electrically erased, then rewritten electrically, so that they need not be removed from the computer (or camera, MP3 player, etc.). Writing or flashing an EEPROM is much slower (milliseconds per bit) than reading from a ROM or writing to a RAM (nanoseconds in both cases).
Electrically alterable read-only memory (EAROM) is a type of EEPROM that can be modified one bit at a time. Writing is a very slow process and again requires higher voltage (usually around 12 V) than is used for read access. EAROMs are intended for applications that require infrequent and only partial rewriting. EAROM may be used as non-volatile storage for critical system setup information; in many applications, EAROM has been supplanted by CMOS RAM supplied by mains power and backed-up with a lithium battery.
Flash memory (or simply flash) is a modern type of EEPROM invented in 1984. Flash memory can be erased and rewritten faster than ordinary EEPROM, and newer designs feature very high endurance (exceeding 1,000,000 cycles). Modern NAND flash makes efficient use of silicon chip area, resulting in individual ICs with a capacity as high as 16 GB as of 2007; this, along with its endurance and physical durability, has allowed NAND flash to replace magnetic storage in some applications (such as USB flash drives). Flash memory is sometimes called flash ROM or flash EEPROM when used as a replacement for older ROM types, but not in applications that take advantage of its ability to be modified quickly and frequently.