Server Slots: What Are They?


I feel that DAS is more of a 'fire-fighting' approach, used when a physical server needs more local storage but has no free HDD/SSD slots. Even Dell 1U servers can take up to 8x 2.5-inch HDDs/SSDs, so it may be worth planning to use six or all eight slots at purchase time; otherwise, consider Dell 2U servers with 10-16 slots.


The alternative to a bare-metal server is a hypervisor server, on which multiple users share the compute, storage, and other resources of a virtual server. Bare-metal servers are also known as single-tenant physical servers or managed dedicated servers. On bare-metal servers, the operating system is installed directly on the hardware. External DAS enclosures let a server administrator build a massive DAS storage system very inexpensively for applications like iSCSI, backup storage, media storage, and virtual machine storage. Oftentimes, the ensuing research leads IT professionals to JBOD DAS enclosures with SAS expanders built in.

Supermicro SBI-7228R-T2X blade server, containing two dual-CPU server nodes

A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space, minimize power consumption and other considerations, while still having all the functional components to be considered a computer.[1] Unlike a rack-mount server, a blade server fits inside a blade enclosure, which can hold multiple blade servers, providing services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system, which may itself be rack-mounted. Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system as a whole.

In a standard server-rack configuration, one rack unit or 1U—19 inches (480 mm) wide and 1.75 inches (44 mm) tall—defines the minimum possible size of any equipment. The principal benefit and justification of blade computing relates to lifting this restriction so as to reduce size requirements. The most common computer rack form-factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42 components. Blades do not have this limitation. As of 2014, densities of up to 180 servers per blade system (or 1440 servers per rack) are achievable with blade systems.[2]

Blade enclosure

The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them among the blade computers, the overall utilization becomes higher. Which services are provided varies by vendor.

HP BladeSystem c7000 enclosure (populated with 16 blades), with two 3U UPS units below

Power

Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages than required within computers. Converting this current requires one or more power supply units (or PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers may have redundant power supplies, again adding to the bulk and heat output of the design.

The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures.[3][4] This setup reduces the number of PSUs required to provide a resilient power supply.

The popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable uninterruptible power supply (or UPS) units, including units targeted specifically towards blade servers (such as the BladeUPS).

Cooling

During operation, electrical and mechanical components produce heat, which a system must dissipate to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans.

A frequently underestimated problem when designing high-performance computer systems involves the conflict between the amount of heat a system generates and the ability of its fans to remove the heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade-enclosures feature variable-speed fans and control logic, or even liquid cooling systems[5][6] that adjust to meet the system's cooling requirements.

At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling once racks are populated to over 50% of capacity. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers, because up to 128 blade servers can fit in the same rack that holds only 42 1U rack-mount servers.[7]
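As a rough illustration of that comparison, the short calculation below multiplies the server counts cited above by assumed per-server power draws (the wattage figures are assumptions for the sketch, not figures from this article); virtually all of that power ends up as heat that the rack's cooling must remove.

```python
# Back-of-the-envelope rack heat load comparison (illustrative only).
# BLADE_WATTS and ONE_U_WATTS are assumed values, not sourced figures.

BLADE_WATTS = 250        # assumed draw of one blade server, in watts
ONE_U_WATTS = 350        # assumed draw of one 1U rack-mount server, in watts

blades_per_rack = 128    # blade density cited above
one_u_per_rack = 42      # one server per rack unit in a 42U rack

blade_rack_kw = blades_per_rack * BLADE_WATTS / 1000
one_u_rack_kw = one_u_per_rack * ONE_U_WATTS / 1000

print(f"Blade rack: {blade_rack_kw:.1f} kW to dissipate")  # 32.0 kW
print(f"1U rack:    {one_u_rack_kw:.1f} kW to dissipate")  # 14.7 kW
```

Even with a lower assumed per-blade draw, the roughly threefold difference in server count dominates, which is why a fully populated blade rack typically needs more cooling capacity.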

Networking

Blade servers generally include integrated or optional network interface controllers for Ethernet, host adapters for Fibre Channel storage systems, or converged network adapters that combine storage and data over a single Fibre Channel over Ethernet interface. In many blades, at least one interface is embedded on the motherboard and extra interfaces can be added using mezzanine cards.

A blade enclosure can provide individual external ports to which each network interface on a blade will connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices (such as switches) built into the blade enclosure or in networking blades.[8][9]

Storage

While computers typically use hard disks to store operating systems, applications and data, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, eSATA, SCSI, SAS, DAS, FC and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.

The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade; the Intel Modular Server System is one example of such an implementation.

Other blades

Since blade enclosures provide a standard method for delivering basic services to computer devices, other types of devices can also utilize blade enclosures. Blades providing switching, routing, storage, SAN and fibre-channel access can slot into the enclosure to provide these services to all members of the enclosure.

Systems administrators can use storage blades where a requirement exists for additional local storage.[10][11][12]

Uses

Cray XC40 supercomputer cabinet with 48 blades, each containing 4 nodes with 2 CPUs each

Blade servers function well for specific purposes such as web hosting, virtualization, and cluster computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse workloads, they add more processing power, memory and I/O bandwidth to blade servers. Although blade-server technology in theory allows for open, cross-vendor systems, most users buy modules, enclosures, racks and management tools from the same vendor.

Eventual standardization of the technology might result in more choices for consumers;[13][14] as of 2009 increasing numbers of third-party software vendors have started to enter this growing field.[15]

Blade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server-farm that borrows from mainframe packaging, cooling, and power-supply technology. Very large computing tasks may still require server farms of blade servers, and because of blade servers' high power density, can suffer even more acutely from the heating, ventilation, and air conditioning problems that affect large conventional server farms.

History

Developers first placed complete microcomputers on cards and packaged them in standard 19-inch racks in the 1970s, soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process control industry as an alternative to minicomputer-based control systems. Early models stored programs in EPROM and were limited to a single function with a small real-time executive.

The VMEbus architecture (c. 1981) defined a computer interface that included implementation of a board-level computer installed in a chassis backplane with multiple slots for pluggable boards to provide I/O, memory, or additional computing.

In the 1990s, the PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure for the then-emerging Peripheral Component Interconnect (PCI) bus, called CompactPCI. CompactPCI was invented by Ziatech Corp of San Luis Obispo, CA and developed into an industry standard. Common among these chassis-based computers was the fact that the entire chassis was a single system. While a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there was always one master board in charge, or two redundant fail-over masters, coordinating the operation of the entire system. Moreover, this system architecture provided management capabilities not present in typical rack-mount computers, much more like those in ultra-high-reliability systems: managing power supplies and cooling fans and monitoring the health of other internal components.

The demands of managing hundreds and thousands of servers in the emerging Internet data centers, where the manpower simply did not exist to keep pace, meant a new server architecture was needed. In 1998 and 1999 this new blade server architecture was developed at Ziatech, based on its CompactPCI platform, to house as many as 14 'blade servers' in a standard 19-inch, 9U-high rack-mountable chassis, allowing in this configuration as many as 84 servers in a standard 84-rack-unit 19-inch rack. What this new architecture brought to the table was a set of new interfaces to the hardware, specifically providing the capability to remotely monitor the health and performance of all major replaceable modules, which could be changed or replaced while the system was in operation. The ability to change, replace or add modules within a running system is known as hot-swap. Unlike any other server system of the time, the Ketris blade servers routed Ethernet across the backplane (where the server blades plug in), eliminating more than 160 cables in a single 84-rack-unit 19-inch rack. For a large data center, tens of thousands of failure-prone Ethernet cables would be eliminated. Further, this architecture made it possible to remotely inventory the modules installed in each chassis without the blade servers operating, and to provision servers (power up, install operating systems and application software, e.g. for web servers) remotely from a network operations center (NOC). The system architecture, when announced, was called Ketris, named after the ketri sword, worn by nomads in such a way as to be drawn very quickly as needed. It was first envisioned by Dave Bottom, developed by an engineering team at Ziatech Corp in 1999, and demonstrated at the Networld+Interop show in May 2000. Patents were awarded for the Ketris blade server architecture. In October 2000 Ziatech was acquired by Intel Corp and the Ketris blade server systems became a product of the Intel Network Products Group.[citation needed]


PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification was adopted in Sept 2001.[16] This provided the first open architecture for a multi-server chassis.

The second generation of Ketris was developed at Intel as an architecture for the telecommunications industry, to support the build-out of IP-based telecom services and in particular the LTE (Long Term Evolution) cellular network build-out. PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting the telecom industry's need for a high-availability, dense computing platform with extended product life (10+ years). While AdvancedTCA systems and boards typically sell for higher prices than blade servers, their operating costs (the manpower to manage and maintain them) are dramatically lower, and operating costs often dwarf the acquisition cost of traditional servers. AdvancedTCA is promoted for telecommunications customers; however, in real-world deployments in Internet data centers, where thermal and other maintenance and operating costs had become prohibitively expensive, this blade server architecture, with its remote automated provisioning and health and performance monitoring and management, offered significantly lower operating costs.[clarification needed]

The first commercialized blade-server architecture[citation needed] was invented by Christopher Hipp and David Kirkeby, and their patent (US 6411506) was assigned to Houston-based RLX Technologies.[17] RLX, which consisted primarily of former Compaq Computer Corporation employees, including Hipp and Kirkeby, shipped its first commercial blade server in 2001.[18] RLX was acquired by Hewlett Packard in 2005.[19]

The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage (flash memory or small hard disk(s)). This allowed manufacturers to package a complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. In addition to the most obvious benefit of this packaging (less space consumption), additional efficiency benefits have become clear in power, cooling, management, and networking due to the pooling or sharing of common infrastructure to support the entire chassis, rather than providing each of these on a per server box basis.

In 2011, research firm IDC identified the major players in the blade market as HP, IBM, Cisco, and Dell.[20] Other companies selling blade servers include Supermicro and Hitachi.

Blade models

Cisco UCS blade servers in a chassis

Though independent professional computer manufacturers such as Supermicro offer blade servers, the market is dominated by large public companies such as Cisco Systems, which had a 40% share by revenue in the Americas in the first quarter of 2014.[21] The remaining prominent brands in the blade server market are HPE, Dell and IBM, though the latter sold its x86 server business to Lenovo in 2014, after selling its consumer PC line to Lenovo in 2005.[22]

In 2009, Cisco announced blades in its Unified Computing System product line, consisting of a 6U-high chassis with up to 8 blade servers per chassis, a heavily modified Nexus 5K switch rebranded as a fabric interconnect, and management software for the whole system.[23] HP's line consists of two chassis models: the c3000, which holds up to 8 half-height ProLiant line blades (and is also available in tower form), and the c7000 (10U), which holds up to 16 half-height ProLiant blades. Dell's product, the M1000e, is a 10U modular enclosure that holds up to 16 half-height PowerEdge blade servers or 32 quarter-height blades.

See also

  • Mobile PCI Express Module (MXM)

References

  1. ^ 'Data Center Networking – Connectivity and Topology Design Guide' (PDF). Enterasys Networks, Inc. 2011. Archived from the original (PDF) on 2013-10-05. Retrieved 2013-09-05.
  2. ^ 'HP updates Moonshot server platform with ARM and AMD Opteron hardware'. www.v3.co.uk. 9 Dec 2013. Retrieved 2014-04-25.
  3. ^ 'HP BladeSystem p-Class Infrastructure'. Archived from the original on 2006-05-18. Retrieved 2006-06-09.
  4. ^ Sun Blade Modular System
  5. ^ Sun Power and Cooling
  6. ^ HP Thermal Logic technology
  7. ^ 'HP BL2x220c'. Archived from the original on 2008-08-29. Retrieved 2008-08-21.
  8. ^ Sun Independent I/O
  9. ^ HP Virtual Connect
  10. ^ IBM BladeCenter HS21. Archived October 13, 2007, at the Wayback Machine.
  11. ^ 'HP storage blade'. Archived from the original on 2007-04-30. Retrieved 2007-04-18.
  12. ^ Verari Storage Blade
  13. ^ 'Intel endorses industry-standard blade design'. TechSpot. http://www.techspot.com/news/26376-intel-endorses-industrystandard-blade-design.html
  14. ^ CNET. http://news.cnet.com/2100-1010_3-5072603.html. Archived 2011-12-26 at the Wayback Machine.
  15. ^ The Register. https://www.theregister.co.uk/2009/04/07/ssi_blade_specs/
  16. ^ PICMG specifications. Archived 2007-01-09 at the Wayback Machine.
  17. ^ US patent 6411506, Christopher Hipp & David Kirkeby, 'High density web server chassis system and method', published 2002-06-25, issued 2002-06-25, assigned to RLX Technologies.
  18. ^ 'RLX helps data centres with switch to blades'. ARN. October 8, 2001. Retrieved 2011-07-30.
  19. ^ 'HP Will Acquire RLX To Bolster Blades'. www.informationweek.com. October 3, 2005. Archived from the original on January 3, 2013. Retrieved 2009-07-24.
  20. ^ 'Worldwide Server Market Revenues Increase 12.1% in First Quarter as Market Demand Continues to Improve, According to IDC' (Press release). IDC. 2011-05-24. Archived from the original on 2011-05-26. Retrieved 2015-03-20.
  21. ^ 'Cisco Q1 Blade Server Sales Top HP In NA'.
  22. ^ 'Transitioning x86 to Lenovo'. IBM.com. Retrieved 27 September 2014.
  23. ^ 'Cisco Unleashes the Power of Virtualization with Industry's First Unified Computing System'. Press release. March 16, 2009. Archived from the original on March 21, 2009. Retrieved March 27, 2017.

A computer network diagram of clients communicating with a server via the Internet

Client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.[1] Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client-server model are email, network printing, and the World Wide Web.
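These roles can be made concrete with a minimal sketch using Python's standard socket module; the loopback address, port number, and message format below are arbitrary choices for the demonstration, not part of any standard.

```python
# Minimal client-server exchange over TCP (Python standard library only).
import socket
import threading

HOST, PORT = "127.0.0.1", 5050                 # arbitrary local address for the demo

listener = socket.create_server((HOST, PORT))  # server socket, already listening

def server():
    """The server passively awaits an incoming request."""
    conn, _addr = listener.accept()        # block until a client connects
    with conn:
        request = conn.recv(1024)          # read the client's request
        conn.sendall(b"HELLO " + request)  # share a resource: return a response

threading.Thread(target=server, daemon=True).start()

# The client initiates the communication session and requests a service.
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(b"world")
    print(sock.recv(1024).decode())        # prints: HELLO world

listener.close()
```

Note the asymmetry: the server only ever waits and answers, while the client opens the session, exactly as described above.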

Client and server role

The client-server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services. Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service.


Whether a computer is a client, a server, or both is determined by the nature of the application that requires the service functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. Client software can also communicate with server software within the same computer.[2] Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.

Client and server communication

In general, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the well-known application protocol, i.e. the content and the formatting of the data for the requested service.

Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All client-server protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API).[3] The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.[4]
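As a sketch of such an API, the function below fixes a content format (JSON) and a small set of operations; the operation names and framing are invented for illustration, since every real protocol defines its own.

```python
# Request-response dispatch behind a tiny, invented JSON protocol.
import json

def handle_request(raw: bytes) -> bytes:
    """Server side: parse the agreed content format, dispatch, respond."""
    request = json.loads(raw)
    handlers = {
        "echo": lambda payload: {"ok": True, "result": payload},
        "upper": lambda payload: {"ok": True, "result": str(payload).upper()},
    }
    handler = handlers.get(request.get("op"))
    if handler is None:                    # unknown operation: structured error
        return json.dumps({"ok": False, "error": "unknown op"}).encode()
    return json.dumps(handler(request.get("payload"))).encode()

# Client side: only the protocol must be understood, not the server's internals.
reply = handle_request(json.dumps({"op": "upper", "payload": "ping"}).encode())
print(json.loads(reply))                   # {'ok': True, 'result': 'PING'}
```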

A server may receive requests from many distinct clients in a short period of time. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates. Encryption should be applied if sensitive information is to be communicated between the client and the server.
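One simple form such a limit can take is a per-client token bucket, sketched below; the rate and burst parameters are arbitrary, and production servers typically use more elaborate, often distributed, mechanisms.

```python
# Token-bucket rate limiter: a minimal sketch of request throttling.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate                   # tokens replenished per second
        self.capacity = capacity           # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token per request; refuse when the bucket is empty."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                       # client has exceeded its rate

limiter = TokenBucket(rate=5, capacity=10)    # ~5 requests/s, bursts of 10
print([limiter.allow() for _ in range(12)])   # ten True, then False, False
```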

Example

When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials may be stored in a database, and the web server accesses the database server as a client. An application server interprets the returned data by applying the bank's business logic, and provides the output to the web server. Finally, the web server returns the result to the client web browser for display.


In each step of this sequence of client-server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete and the web browser presents the data to the customer.
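The sequence can be sketched as ordinary function calls, each tier answering a request by acting, in turn, as a client of the tier behind it; all names and the in-memory stand-in for the database are invented for illustration.

```python
# Three-tier request-response chain: browser -> web -> application -> database.

ACCOUNTS = {"alice": 1250.00}              # stands in for the database server

def database_server(customer: str) -> float:
    return ACCOUNTS[customer]              # answers the application server

def application_server(customer: str) -> dict:
    balance = database_server(customer)    # the app server acts as a client here
    return {"customer": customer, "balance": f"${balance:,.2f}"}  # business logic

def web_server(customer: str) -> str:
    data = application_server(customer)    # the web server acts as a client here
    return f"<p>{data['customer']}: {data['balance']}</p>"

# The web browser (the only pure client) initiates the whole chain.
print(web_server("alice"))                 # <p>alice: $1,250.00</p>
```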

This example illustrates a design pattern applicable to the client–server model: separation of concerns.


Early history

An early form of client-server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output.

While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host), and these appear in the early documents RFC 5[5] and RFC 4.[6] This usage was continued at Xerox PARC in the mid-1970s.

One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL).[5] The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client-server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (the predecessor of the Internet).


Client-host and server-host

Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and user-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client-server model, a server is more likely to be devoted to the task of serving.

An early use of the word client occurs in 'Separating Data from Function in a Distributed File System', a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client).[7] (By 1992, the word server had entered into general parlance.)[8][9]

Centralized computing

The client–server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large amount of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be.[10] It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a fat client, such as a personal computer, has many resources, and does not rely on a server for essential functions.

As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to fat clients.[11] This afforded greater, more individualized dominion over computer resources, but complicated information technology management.[10][12][13] During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s.[14]

Comparison with peer-to-peer architecture

In addition to the client–server model, distributed computing applications often use the peer-to-peer (P2P) application architecture.


In the client–server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine.[15][16]
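A minimal round-robin dispatcher illustrates the load-balancing idea; the backend addresses are invented, and real load balancers additionally health-check their backends so that failover can route around a dead machine.

```python
# Round-robin load balancing across a pool of backend servers (sketch).
import itertools

BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]  # invented pool

class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)   # endless rotation of the pool

    def pick(self) -> str:
        """Spread successive client requests evenly across the pool."""
        return next(self._cycle)

balancer = RoundRobinBalancer(BACKENDS)
for request_id in range(5):
    print(f"request {request_id} -> {balancer.pick()}")
# request 0 -> 10.0.0.1:8080, request 1 -> 10.0.0.2:8080, ...
```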

In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client–server or client–queue–client network, peers communicate with each other directly.[citation needed] In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load.[citation needed] If a node becomes unavailable, its shared resources remain available as long as other peers offer it. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests.

Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.[17]



Notes

  1. ^ 'Distributed Application Architecture' (PDF). Sun Microsystem. Archived from the original (PDF) on 6 April 2011. Retrieved 2009-06-16.
  2. ^ The X Window System is one example.
  3. ^ Benatallah, B.; Casati, F.; Toumani, F. (2004). 'Web service conversation modeling: A cornerstone for e-business automation'. IEEE Internet Computing. 8: 46–54. doi:10.1109/MIC.2004.1260703.
  4. ^ Dustdar, S.; Schreiner, W. (2005). 'A survey on web services composition' (PDF). International Journal of Web and Grid Services. 1: 1. CiteSeerX 10.1.1.139.4827. doi:10.1504/IJWGS.2005.007545.
  5. ^ a b Rulifson, Jeff (June 1969). DEL. IETF. doi:10.17487/RFC0005. RFC 5. Retrieved 30 November 2013.
  6. ^ Shapiro, Elmer B. (March 1969). Network Timetable. IETF. doi:10.17487/RFC0004. RFC 4. Retrieved 30 November 2013.
  7. ^ Sturgis, Howard E.; Mitchell, James George; Israel, Jay E. (1978). 'Separating Data from Function in a Distributed File System'. Xerox PARC.
  8. ^ Harper, Douglas. 'server'. Online Etymology Dictionary. Retrieved 30 November 2013.
  9. ^ 'Separating data from function in a distributed file system'. GetInfo. German National Library of Science and Technology. Archived from the original on 2 December 2013. Retrieved 29 November 2013.
  10. ^ a b Nieh, Jason; Yang, S. Jae; Novik, Naomi (2000). 'A Comparison of Thin-Client Computing Architectures'. Academic Commons. doi:10.7916/D8Z329VF. Retrieved 28 November 2018.
  11. ^ d'Amore, M. J.; Oberst, D. J. (1983). 'Microcomputers and mainframes'. Proceedings of the 11th annual ACM SIGUCCS conference on User services - SIGUCCS '83. p. 7. doi:10.1145/800041.801417. ISBN 978-0897911160.
  12. ^ Tolia, Niraj; Andersen, David G.; Satyanarayanan, M. (March 2006). 'Quantifying Interactive User Experience on Thin Clients' (PDF). Computer. IEEE Computer Society. 39 (3): 46–52. doi:10.1109/mc.2006.101.
  13. ^ Otey, Michael (22 March 2011). 'Is the Cloud Really Just the Return of Mainframe Computing?'. SQL Server Pro. Penton Media. Archived from the original on 3 December 2013. Retrieved 1 December 2013.
  14. ^ Barros, A. P.; Dumas, M. (2006). 'The Rise of Web Service Ecosystems'. IT Professional. 8 (5): 31. doi:10.1109/MITP.2006.123.
  15. ^ Cardellini, V.; Colajanni, M.; Yu, P.S. (1999). 'Dynamic load balancing on Web-server systems'. IEEE Internet Computing. Institute of Electrical and Electronics Engineers (IEEE). 3 (3): 28–39. doi:10.1109/4236.769420. ISSN 1089-7801.
  16. ^ 'What Is Load Balancing? How Load Balancers Work'. NGINX. June 1, 2014. Retrieved January 21, 2020.
  17. ^ Varma, Vasudeva (2009). '1: Software Architecture Primer'. Software Architecture: A Case Based Approach. Delhi: Pearson Education India. p. 29. ISBN 9788131707494. Retrieved 2017-07-04. Distributed Peer-to-Peer Systems [...] This is a generic style of which popular styles are the client-server and master-slave styles.