Overview of data center engineering systems. The modular data center (MDC): mobile? Container? Modular!

The basis for this solution was the mobile data center ("data center in a container"), which appeared on the market about three years ago. Mobility - initially the key characteristic of such data centers - gradually faded into the background, giving way to factors such as autonomy and speed of deployment. As a result, manufacturers began to abandon the term "mobile" in favor of "modular" and to expand their solutions, scaling modular data centers up to the size of standard facilities rather than limiting them to a single container.

Modular Data Center Limitations

One of the disadvantages of this solution is the difficulty of adapting it to non-standard equipment (Hi-End arrays with non-standard form factors and atypical cooling requirements). Such equipment is hard to fit into the "building blocks" of a modular data center, whose engineering systems are tailored to the classical layout. In this case, the corresponding systems have to be designed specifically for the modular data center.

Sometimes customers take the path of building a small stationary data center for atypical server equipment, while standard servers are housed in a modular data center nearby. This distribution of equipment is more economical than building a conventional data center for all servers.

The fact that modular data centers "prefer" standard servers turns out to be an advantage for customers: such equipment is easy to manage, and the logical consequence is lower operating costs, i.e. the cost of maintenance.

It must be said that the capital investment in building a stationary data center will be lower than in buying a modular one: in the case of a modular data center, the customer purchases the walls, metal structures and supporting elements as a package. But it is necessary to think comprehensively and look ahead: by avoiding the costs of engineering "empty" areas that would be inevitable in a large-scale standard data center, the customer can reach payback on a modular data center much faster, ending up in the black within 2-3 years.

There are restrictions on where a modular data center can be built: it requires an indoor hangar or another site where it can be assembled. A "modular" scheme cannot be implemented in a building whose free rooms are scattered across different parts of it.

Taking into account the specifics of construction sites in Moscow (high cost of space, complicated technical condition of building structures), it is economically more advantageous to build a modular data center outside the capital. In addition, obtaining the necessary electrical power is more difficult in Moscow than in the regions.

As for energy efficiency, the same "green" technologies can be implemented in a modular data center as in a stationary one. The only solution that cannot be used here, owing to its bulk, is the Kyoto Cooling wheel.

The emergence, back in 2007, of technology for creating a data center (DC) based on a shipping container did not attract much expert attention: the solution seemed not so much a niche product as an experiment that one of the leaders of the global IT market could afford.

However, the novelty turned out to be in demand, since it solved the problem of rapidly deploying a small but highly reliable DC. Competitors appreciated the move, which led to the emergence of a new class of DCs, so-called mobile DCs. Unlike traditional DCs housed in conventional concrete-and-brick buildings, these solutions are delivered as finished products and can be installed and operated even outdoors.

The natural development of this direction was the emergence of a new class of data centers - modular data centers (Modular Datacentre, MDC). The technology allows a deployed facility to be built and scaled quickly, and then expanded to the level of a large DC. According to experts, a classic DC with an area of 250-300 sq. m takes from 7-8 months to 1 year from contract signing to commissioning. For a comparable modular DC, because all of its elements are standardized, the time for design, delivery, installation and commissioning is reduced to 3-5 months.

Container and modular DCs have ceased to be a niche business: the spread of modular DCs and the growing number of commercially launched (rather than pilot) facilities was one of the significant trends in the global DC market in 2012.

From container to module

Considering the common definitions of a modular data center, several interpretations can be distinguished: the term "modular" (modular data center, containerized data center, containerized modular data center) can cover systems and solutions that differ in functions and capabilities 1.

At the first stage, the concept of the modular DC was traditionally tied to sea containers (ISO sizes) and was viewed as a functionally complete DC placed in a standard metal enclosure. However, the concept of modularity gradually evolved beyond the "container DC". At the same time, the latter have not lost their relevance and remain in demand in various sectors of the economy.

Since a unified terminology has not yet taken shape, it is not reflected in the regulatory framework. The expert community considers several definitions of the "modular DC", all of which proceed from the assumption that such DCs are built on a container base. That is, according to the IMS Research definition, a pre-assembled, fully enclosed mobile shell that houses ALL of the DC's infrastructure subsystems 2.

As the concept developed, most experts began to lean towards a different view of modular DCs. This approach assumes preliminary installation and testing of the DC's engineering and IT subsystems in a container - a metal structure in the form of a rectangular parallelepiped of arbitrary dimensions standardized by the vendor. An example is the HP Flexible Data Center solution, in which the DC is formed from four functional blocks (containers) connected via a central node.

Another approach is based on the fact that any closed space, including a container (standardized or not), can be the base of a modular DC. The modularity of such a DC is due to the fact that all subsystems of the DC infrastructure are assembled from standard blocks, modules, pre-fabricated and tested at the factory. These blocks are joined both within one type and with all other types, which allows scaling both a specific subsystem and DC as a whole. This versatility applies to both hardware and software. In general, it can be assumed that not all modular DCs are located in a container, however, most container DCs belong to the class of modular DCs.

The emergence of containerized and then modular DCs gave impetus to a new paradigm for creating DCs, so-called Data Center 2.0. According to its authors, it proclaims a departure from the traditional approach, which was based on the business model of the DC as a building or leased premises. The new concept treats the DC as an element of IT, a kind of hardware-software complex through which various services are offered to customers.

The economics of the issue

The advantages of modular data centers are significant: mobility in terms of delivery of the finished product, rapid deployment outside the production site, design outsourcing and potential tax savings.

According to 451 Research, the use of modular data centers in most cases reduces construction costs, with capital cost savings of 10 to 30%. According to calculations by Technoserv specialists, a modular DC is 1.5-2 times cheaper than a leased or purchased site. However, if the customer has its own premises that do not require floor reinforcement, overhaul or other construction work, the costs are comparable, or the modular solution will be 10-15% more expensive.

According to Schneider Electric, the modular approach reduces capital and operating costs by $2-7 per watt of DC power. Forrester's estimate agrees with these conclusions: a traditional 2 MW DC will cost about $127.4 million, while a DC implemented according to the modular scheme will cost $77.1 million 3. On average, according to experts, in 2012 a modular DC cost the customer $6 million per 1 MW, excluding delivery and commissioning.
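
For readability, the figures quoted above can be restated as a rough back-of-the-envelope calculation; the sketch below only reproduces the reported numbers and makes no claims beyond them.

```python
# Back-of-the-envelope restatement of the cost figures quoted above.
# All inputs come from the article itself; the arithmetic is only illustrative.

dc_power_w = 2_000_000  # the 2 MW facility used in the Forrester comparison

# Schneider Electric: $2-7 saved per watt of DC power with the modular approach
savings_low_usd = 2 * dc_power_w    # $4 million
savings_high_usd = 7 * dc_power_w   # $14 million

# Forrester: traditional vs modular build of the same 2 MW DC
traditional_usd = 127.4e6
modular_usd = 77.1e6
forrester_savings_pct = 100 * (traditional_usd - modular_usd) / traditional_usd

# Average 2012 price point: $6 million per MW, excluding delivery and commissioning
avg_price_usd = 6e6 * dc_power_w / 1e6

print(f"Schneider estimate: ${savings_low_usd/1e6:.0f}-{savings_high_usd/1e6:.0f}M saved on 2 MW")
print(f"Forrester estimate: {forrester_savings_pct:.0f}% lower cost for the modular scheme")
print(f"2 MW at the 2012 average price: ${avg_price_usd/1e6:.0f}M")
```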

DCD Intelligence compared the costs of creating a 4 MW DC in the USA (Fig. 1). According to its calculations, a traditional DC will cost the customer 13-14% more than a modular version. The main gain comes from lower costs for installation and commissioning of equipment. It should be noted that the hardware and software for a modular DC are noticeably more expensive.

Fig. 1. Cost comparison for a 4 MW DC in the US, prices in USD as of 01/01/2013. Source: DCD Intelligence

The undoubted advantage of the modular DC is the factory assembly, which allows you to eliminate most of the technological defects before sending the module to the customer. At the same time, modular DCs have high information and physical security, and the monitoring systems supplied in the kit are already based on DCIM solutions.

Obviously, the concept of modularity implies standardization of solutions, which creates the basis for improving the quality and reliability of the units and further reducing operating costs. In addition to the short deployment time, another advantage is high scalability: a modular solution minimizes capital costs at the first stage and later makes it possible to increase DC capacity without significant alterations to the engineering infrastructure.

Energy saving

The advantages of modular DCs also include high energy efficiency, both for individual modules and for the modular DC as a whole. This is because already at the design stage of such a data center, designers optimize the power consumption and heat dissipation of all its components, taking into account the capabilities of the design.

Practical measurements have shown that the modular design provides a significant increase in energy efficiency and allows substantial cost savings. For example, at the IO Phoenix DC, at the end of 2012 the PUE of the raised-floor part was 1.73, while the average annual PUE of the modular part was 1.41. For the USA (Arizona), this translates into annual savings of $200,000 per MW of IT equipment capacity 4. Company representatives note that the modular part of the data center outperformed its raised-floor counterpart partly due to the unique design of the modules, which controls airflow much more efficiently than hot and cold aisles do. Perhaps the only energy-efficient solution that cannot yet be used in modular DCs is the Kyoto Cooling wheel: its dimensions do not fit the dimensions of known modules.
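
The arithmetic behind such an estimate is straightforward. The sketch below uses the PUE values and the per-MW basis quoted above; the electricity tariff is an assumption introduced here purely for illustration.

```python
# A minimal sketch of how savings from a lower PUE can be estimated.
# PUE values and the per-MW basis come from the IO Phoenix example above;
# the tariff of $0.07/kWh is an assumed figure for illustration only.

it_load_mw = 1.0            # per MW of IT equipment capacity
pue_raised_floor = 1.73
pue_modular = 1.41
tariff_usd_per_kwh = 0.07   # assumption
hours_per_year = 8760

# Facility power = IT load x PUE, so the avoided overhead is IT x (PUE1 - PUE2)
delta_kw = it_load_mw * 1000 * (pue_raised_floor - pue_modular)
annual_savings_usd = delta_kw * hours_per_year * tariff_usd_per_kwh

print(f"Avoided overhead: {delta_kw:.0f} kW per MW of IT load")
print(f"Estimated annual savings: ${annual_savings_usd:,.0f} per MW of IT load")
# ~ $196,000, i.e. of the same order as the ~$200,000/MW cited for Arizona
```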

Use cases

Even taking into account the limitations and doubts discussed below, modular DCs are most likely to be in demand in two cases: when an existing DC must be replaced quickly, or when a new one must be built in a confined space (a backup DC, a DC at a remote branch of the company). Another area of use is the colocation service. Building a large-scale DC without being sure that its space will be in demand is a huge risk for the provider. Modular technology is flexible and minimizes capital costs right at the start.

When siting a new DC, few designers assume that 2-3 years after it is put into commercial operation a situation may arise that requires moving it to another district or even region. For a stationary DC, such a move risks turning into a nightmare for the owners. For a modular DC, disassembly, relocation and reassembly take minimal time, and the move can be staged module by module.

Modular DCs cope almost painlessly with such a problem as increased power consumption of equipment in racks and the need to increase heat dissipation: the design itself provides for a quick change in the parameters of the equipment being placed.

Problems of implementing modular data centers

In addition to the undoubted advantages, experts note a number of limitations of modular DCs: in some cases their price negates the advantages over traditional DCs. In addition, many vendors lack a unified approach to standardizing blocks and modules, which sometimes drives potential clients back to traditional solutions. Nevertheless, IMS Research believes that the inevitable standardization of products and the growth of production volumes will significantly reduce the cost of modular data centers in the future.

The customer's conservative thinking should also be considered a barrier. Preventive and scheduled maintenance, for example, are clear to the customer. But how will the novelty behave in the event of a hardware failure? Obviously, repairs to standard infrastructure will not cause difficulties, but the problems that may arise during a repair in a modular data center can confuse the customer, especially since it is unlikely to be able to eliminate the malfunction quickly on its own. The result is the unpleasant issue of vendor dependency.

For large and very large DCs, such as those owned by giants like Microsoft, Google or Amazon, the capabilities and advantages of modular DCs make them the preferred option, both in terms of scalability and cost optimization. But for a small DC being designed, say with a power consumption of 200-300 kW, the choice in favor of the innovative solution is far from obvious.

Another criterion is the area of the DC under construction. According to experts, the profitability threshold for modular DCs is still at the level of 200-250 sq. m. Thus, both by the "capacity" criterion and by the "area" criterion we are talking about medium or large DCs, while for small DCs conventional construction is optimal.

Future owners of modular DCs should take into account the difficulty of adapting them to non-standard equipment, for example Hi-End arrays with non-standard form factors and atypical cooling requirements. In some cases it is not easy to integrate such equipment into a modular DC, and additional design changes may be required. A known workaround is to place standard servers in a modular DC and install non-standard equipment in a DC built according to the classical scheme.

In 2012, the Uptime Institute conducted a survey among IT professionals, trying to identify what they consider the significant shortcomings of modular solutions. The largest share of respondents (35%) reported that vendor offerings are not flexible enough and do not meet their requirements (Fig. 2). Another 33% believe that modular data centers are still too expensive. 32% of respondents note the short service life of such solutions, 30% point to the novelty of the technology and the lack of enough implemented projects, and 27% are dissatisfied with solutions being "locked in" by the manufacturer. In addition, 15% of respondents complain about a small selection of products, and 12% are not satisfied with the block size.

Fig. 2. Results of the Uptime Institute survey among IT professionals. Source: Uptime Institute, 2012

The key issue of any project is the payback period. Most experts believe that the determining factors for the payback period are the business model and deployment conditions. The purpose of the modular data center also matters: will it be used as a commercial or as a departmental data center?

Modular Data Center Suppliers

The range of suppliers of modular DCs and their components is extremely wide. Here there are vendors for which modular data centers are the main type of business; suppliers of elements of the engineering infrastructure of classic DCs and IT solutions; as well as companies providing DC services.

The former include: AST Modular, BladeRoom Group Ltd, Cannon Technologies, COLT Technology Services, Datapod, Flexenclosure, Elliptical Mobile Solutions, IO Datacenters, MDC Stockholm, NxGen Modular and Silver Linings Systems.

The main players in the second segment are Schneider Electric and Emerson Network Power. Among the vendors that supply IT solutions to the market, the most famous are: Dell, HP, IBM, Cisco, SGI, Huawei, Google, Toshiba, Bull.

This review focuses on companies belonging to the first two groups (see table).

Company name / country - specialization - main product / module:

  • AST Modular (Spain) - modular DCs
  • BladeRoom Group Ltd (United Kingdom) - modular DCs - BladeRoom System
  • Cannon Technologies (UK) - modular DCs - Cannon T4 Modular Data Centers
  • COLT Technology Services (United Kingdom) - DC service provider, supplier of modular DCs
  • Flexenclosure (Sweden) - modular DC supplier
  • Elliptical Mobile Solutions (USA) - micromodular DCs - Micro-Modular Data Center™
  • IO Data Centers (USA) - modular DCs
  • MDC Stockholm (Sweden) - modular DCs
  • NxGen Modular (USA) - modular DCs
  • Silver Linings Systems (USA) - modular DCs
  • Emerson Network Power (USA) - DC infrastructure vendor - SmartRow, SmartAisle, SmartMod
  • Schneider Electric (France) - DC infrastructure vendor - Data Center Module, Facility Power Module, Air Cooling Module, Water Cooling Module
  • Rittal (Germany) - DC infrastructure vendor

Table. List of the main players in the modular DC market, as of 01.10.2013. Source: company data

AST Modular

This Spanish engineering company produces a line of modular DCs in 10-, 20-, 40- and 53-foot ISO containers. Two versions are available. The first hosts the IT subsystem and all engineering infrastructure subsystems, including fire suppression. For the second, two modules have been developed: an IT Unit and a Power Unit. Racks in the IT Unit can be connected at 3-40 kW each, and up to 19 racks fit in a 40-foot container. The infrastructure allows a high level of DC reliability, up to Tier IV. This company's solution was chosen by VimpelCom for its DC in Yaroslavl.

BladeRoom Group Ltd

The company supplies a modular data center under the BladeRoom System brand. Based on the module, a DC with an area from 600 to 60,000 sq. m can be built, while the heat removal system supports loads from 1.0 to 24 kW per rack (air cooling). The BladeRoom System engineering infrastructure allows DCs to be organized with different levels of reliability, Tier II-IV. In the latter case, two independent power inputs are used, with UPSs configured up to 2N and diesel generators also up to 2N. The ventilation system maintains the temperature of the IT equipment in the module at 18-24°C. The company guarantees ordering, delivery and commissioning of a 120-rack modular DC in 12 weeks.

COLT Technology Services

The UK company is known as a DC service provider in Europe. For modular data centers it offers a solution under the Colt ftec data center brand, which includes the Colt spaceflex, Colt powerflex and Colt coolflex modules. The area of the Colt spaceflex IT module varies in the range of 125-500 sq. m. The power supply module provides up to 3 kW/sq. m, or up to 230 kW per row of racks at 25 kW/rack. The company's latest implementation was announced in July 2013 in the Netherlands: on an area of about 1,000 sq. m, additional modules were installed with an input power of 1.6 MW and up to 20 kW/rack, with a guaranteed PUE of 1.21.

Flexenclosure

Flexenclosure is a Swedish vendor that develops and manufactures pre-fabricated modular data centers, as well as elements of the electric power infrastructure (primarily for the telecommunications industry). The eCentre solution is a prefabricated modular data center for hosting and powering server and telecommunications equipment. It is optimized to improve energy efficiency and minimize total cost of ownership. The eCentre Modular DC includes power, cooling and security infrastructure.

Elliptical Mobile Solutions

Elliptical Mobile Solutions was founded in 2005 and occupies a special position among suppliers of modular data centers. The company specializes in the micromodule (rack) level and produces two main products: R.A.S.E.R. HD and R.A.S.E.R. DX. Both devices are functionally complete DCs. The R.A.S.E.R. DX is a block in which 42 IT devices can be installed with a total power consumption of no more than 12 kW. The R.A.S.E.R. HD also accommodates 42 IT devices, but their total power consumption can be in the range of 20-80 kW, which is supported by a water cooling system. The manufacturer claims a PUE of no more than 1.1 (!) for these devices.

IO Datacenters

IO Datacenters offers infrastructure modules and software for creating modular DCs. The IO.Anywhere product line has three types of modules: CORE, EDGE and ECO.

IO.Anywhere CORE includes three types of modules: up to 18 50U racks, up to 50 50U racks, and a power module that supplies an IT module consuming up to 600 kW. All modules ship with a software suite for managing power distribution and heat removal.

IO.Anywhere EDGE includes two types of modules: up to seven 50U racks for solving IT tasks and an infrastructure module for hosting uninterruptible power, cooling and fire suppression subsystems.

MDC Stockholm

The Swedish company offers a solution based on functionally independent modules (a server module, a cooling module, an uninterruptible power supply module and a control module).

NxGen Modular

NxGen Modular was founded relatively recently, in 2009, but its customers include both Microsoft and Apple (a DC in Prineville, Oregon, USA). The company supplies both components for modular DCs and complete modular DCs. Its main products include a containerized DC (up to 300 kW for IT systems), a power module, a cooling module, and a module that integrates the power supply and cooling subsystems and telecommunications cabling on a common platform.

Silver Lining Systems

The company supplies DC modules in two versions: a module standardized by SLS and a module based on an ISO container. The modules mate with power and cooling modules. In the first option, from 4 to 10 racks (45U) can be placed in one or two rows, and the module design provides heat removal from racks in the range of 7-35 kW. In the latter case, no more than four racks can be placed in the module.

The second option uses ISO containers of standard sizes: 20, 40 or 53 feet. In the first case the module consists of two containers, in the latter, five. The module can accommodate from 8 to 50 racks, and the design provides heat removal from racks with power consumption in the range of 7-28 kW. The modules are offered without an uninterruptible power supply system or a fire extinguishing system; presumably, the selection of devices for these subsystems of the engineering infrastructure is left to the project integrator.

Emerson Network Power

The company offers several products (SmartRow, SmartAisle and SmartMod) under the common Smart Solutions brand. SmartRow is a functionally complete open module of 3-6 IT racks, a cooling unit, a power supply unit and a fire extinguishing unit. SmartAisle is a complete open module with up to 40 IT racks in two rows (20 x 2) for power ratings up to 10 kW/rack, plus cooling and power systems.

The modules are supplied with the Liebert iCOM hardware and software control complex. The Avocent, Liebert Nform and Liebert SiteScan systems can be used for remote monitoring and control of SmartAisle infrastructure elements.

Unlike SmartRow and SmartAisle, SmartMod is a family of containerized modules in which the IT system and the cooling and power systems are installed inside a container; the power supply system may also be housed in a separate module.

For the implementation of projects based on these products, the company offers the SmartDesign design suite. All components on which these solutions are based are also manufactured by Emerson Network Power.

Schneider Electric

The company has developed a set of modules and DC architecture based on them. Main modules: Data Center Module, Facility Power Module (500 kW), Air Cooling Module (400 kW) and Water Cooling Module (500 kW).

The segment of modular data centers is attracting the interest of an increasing number of suppliers of infocommunication products. For example, deliveries of modular data centers in 2013 were planned by NEC. Its developments will be used in large stationary DCs. In Russia, several firms have announced their products for this segment. The pioneer was the Sitronics company, which in 2010 launched Daterium container DCs on the market. Currently, it offers Daterium 2 and Daterium 3. In 2013, Technoserv presented its modular project IT Crew.

The adoption of modular DCs has begun in the Russian market. Apart from the major VimpelCom project mentioned above, in October 2013 Aeroflot, HP and Technoserv announced the completion of a project to create a backup data center for the air carrier. The solution includes a Technoserv product: the IT Crew modular data center.

Figures and facts

How big is the potential of the modular DC segment? IMS Research experts, who use the term containerized data centers (covering modular, containerized and mobile DCs) rather than modular data centers, believe that deliveries of containerized data centers in 2013 will grow by 40% compared to 2012. According to the company's report, in 2012 this segment of the DC market almost doubled compared to 2011.

Container and modular solutions appeared in 2005-2006, but IMS Research experts believe that this market segment in essence began to form only in 2011. IMS Research names North America the largest market for modular DCs in 2012, but forecasts that shipments to China will double annually over the next five years due to the rapid growth of the region's DC industry and the need to deploy data centers quickly. A number of experts, on the contrary, believe that the greatest success awaits modular data centers in the emerging markets of Asia and Africa.

TechNavio experts are less optimistic: they expect the US market for modular DCs to grow by 11.2% over 2012-2016 5.

According to DCD Intelligence, the largest relative growth in investment in modular DC technologies in 2011-2013 was in Russia and France (Fig. 3). Apparently, this is a matter of relative rather than large absolute growth. Nevertheless, in such a situation Russian IT companies may try to create their own solutions, even if based on imported components. DCD Intelligence also sees a certain interest in these solutions in the BRIC countries.

Fig. 3. Relative growth of investment in modular DC technologies, 2011-2013

As always, we tried to make the model task as close as possible to real projects. A fictitious customer had planned to build a new corporate data center in 2015 using traditional technology (starting with capital construction or building reconstruction). However, the deteriorating economic situation forced it to reconsider: it was decided to abandon large-scale construction and, to address current IT needs, to deploy a small modular data center in the yard on its own territory.

Initially, the data center must accommodate 10 racks of IT equipment at up to 5 kW each. At the same time, the solution should be scalable in two ways: by the number of racks (in the future, given a favorable economic situation and growth of the customer's business, the data center should accommodate up to 30 racks) and by the power of a single rack (20% of all racks should support equipment with a power of up to 20 kW per rack). The preferred "quantum" of expansion is 5 racks. For more details, see the "Task" sidebar.

TASK

The fictitious customer planned for 2015 to build a new corporate data center using traditional technology (starting with a major reconstruction of the building). However, the deteriorating economic situation forced him to reconsider his plans. It was decided to abandon the large-scale construction, and to solve current IT problems - to organize a small modular data center.

Task. The customer is going to deploy a small but highly scalable data center on its industrial territory. Initially, it is necessary to ensure the installation of 10 racks with IT equipment, the power of each rack is up to 5 kW. In this case, the solution must be scalable in two ways:

  • by the number of racks: in the future, with a favorable economic situation and the development of the customer's business, the data center will have to accommodate up to 30 racks;
  • in terms of individual rack power: 20% of all racks must provide operation of equipment with a power of up to 20 kW (per rack).

It is advisable to invest in the development of the data center as needed. The preferred "quantum" of expansion is 5 racks.

Site feature. The site is provided with electricity (maximum input power - 300 kW) and communication channels. The dimensions of the site are sufficient to accommodate any external equipment, such as diesel generator sets, chillers, etc. The site is located within the city limits, which imposes additional conditions on the noise level from the equipment. In addition, the location of the data center in the city implies a high level of air pollution.

Engineering systems. Fault tolerance level - Tier II.

Cooling. The customer did not specify a specific cooling technology (freon, chiller, adiabatic, natural cooling) - its choice is left to the discretion of the designers. The main thing is to ensure efficient heat removal at the capacities of racks specified in the task. Minimization of energy consumption is welcome - it is necessary to meet the restrictions on the supplied power.

Uninterruptible power supply. The specific technology is likewise not specified. Autonomy in the event of an outage: at least 10 minutes on batteries, followed by transfer to a diesel generator set. The fuel reserve is at the discretion of the designers.

Other engineering systems. The project should include the following systems:

  • automatic gas fire extinguishing installation (AUGPT);
  • access control and management system (ACS);
  • structured cabling system (SCS);
  • raised floor and other systems - at the discretion of the designers.

The data center must have an integrated management system providing monitoring and control of all major systems: mechanical, electrical, fire and security systems, etc.

Additionally:

  • the customer asks to indicate the period for the implementation of the data center and the features of the delivery of equipment (access roads, special mechanisms for unloading / installation);
  • if the designer considers it necessary, you can immediately offer IT equipment for the data center.

We approached leading modular data center vendors with a request to develop projects for this customer and received 9 detailed solutions as a result. Before examining them, note that the "Journal of Networking Solutions / LAN" has been tracking trends in, and attempting to classify, modular data centers for many years; for example, the author's articles "" (LAN, No. 07-08, 2011) and "" (LAN, No. 07, 2014) are devoted to these issues. In this article we will not dwell on trends, but refer interested readers to those publications.

(NON) CONSTRUCTIVE APPROACH

Very often, membership in the "modular data center" category is determined by the type of construct. At the first stage of this market segment's formation, "modular" usually meant data centers based on standard ISO containers. But in the last year or two, a wave of criticism has fallen on container data centers, especially from providers of "new wave" solutions, who point out that standard containers are not optimized for housing IT equipment.

It is wrong to classify data centers as modular based on the type of construct. "Modularity" should be determined by the possibility of flexible scaling through incremental increases in the number of racks and in the capacity of the uninterruptible power supply, cooling and other systems. The structure itself may vary: our customer received proposals based on standard ISO containers, specialized containers and/or blocks, modular rooms, and structures assembled on site (see table).

For example, Huawei specialists chose a solution based on standard ISO containers for our customer, although the company's portfolio also includes modular data centers based on other types of constructs. Mikhail Salikov, director of the company's data center business, explained the choice by the fact that the traditional container solution is, firstly, cheaper, secondly, faster to implement and, thirdly, less demanding in terms of site preparation. In his opinion, under the current conditions and the terms set by the customer, this option may prove optimal.

Several of the proposals received by the customer are based on containers, but not standard (ISO) ones: they are specially designed for building modular data centers. Such containers can be docked together to form a single machine-room space (see, for example, the Schneider Electric project).

One feature of the container-based options is more stringent requirements for transport and installation equipment. For example, delivering Schneider Electric's NON-ISO25 container modules requires a low-platform trailer with a usable length of at least 7.7 m, and unloading and installation require a crane with a lifting capacity of at least 25 tons (see Fig. 1).

When using smaller modules, the requirements are softer. For example, CommScope DCoD modules are transported in a standard eurotruck, and their unloading and installation can be carried out by a conventional forklift truck with a payload capacity of at least 10 tons (see Fig. 2).

Finally, installing quickly assembled (on-site) structures, such as the NOTA MDC from Utilex, requires no heavy lifting machinery at all: a loader with a lifting capacity of 1.5 tons is sufficient.

All components of this MDC are transported disassembled in a standard ISO container, which, according to Utilex representatives, significantly reduces the cost and time of delivery to the installation site. This is especially important for remote locations without a developed transport infrastructure. As an example, they cite the implementation of a distributed data center in the Magadan region for Polyus Gold: the total delivery route exceeded 8,900 km (5,956 km by rail, 2,518 km by sea and 500 km by road).

POWER SUPPLY

For the guaranteed and uninterruptible power supply system, the choice of options for such a project is small: only the classic trio of static UPS, batteries and a diesel generator set (DGU). At these capacities it makes no sense to consider a dynamic UPS as an alternative.

Most projects use modular UPSs that are scaled up by adding power modules. Monoblock UPSs are also offered; in this case, system power is increased by installing UPSs in parallel. Some companies specified particular UPS models, others limited themselves to general recommendations.

The situation is similar with the DGUs: some companies recommended specific models, others limited themselves to calculating the required power. More detailed information on the DGUs is given in the full project descriptions available on the site.

COOLING

There are, however, many options for cooling. Most of the projects are based on freon air conditioners - not the most energy-efficient, but the cheapest solution. Two companies, Huawei and Schneider Electric, opted for chiller systems. As Denis Sharapov, business development manager at Schneider Electric, explains, a project with a freon refrigeration system is cheaper, but ensuring fault-tolerant operation requires a much larger UPS (to power the compressors), so the project costs end up roughly equal. Based on the calculations, the Schneider specialist therefore chose the chiller option as more reliable and functional. (For emergency cooling, chiller systems use a cold-water storage tank, so the need for uninterruptible power is much lower: only the coolant pumps have to be fed from the UPS.)
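
The trade-off can be illustrated with a simple comparison of the UPS capacity each option would need to ride through a mains failure. The cooling-power fractions in the sketch below are assumptions chosen for illustration only, not figures from the vendors.

```python
# Illustrative UPS sizing for the two cooling approaches discussed above.
# The cooling-power fractions are assumptions for illustration, not vendor data.

it_load_kw = 240.0                  # full IT load from the task

# DX (freon) option: compressors must stay on the UPS, or the room overheats
# before the diesel generator picks up the load.
dx_cooling_fraction = 0.35          # assumed: compressors + fans relative to IT load
ups_dx_kw = it_load_kw * (1 + dx_cooling_fraction)

# Chiller option: a chilled-water buffer tank covers the ride-through,
# so only pumps (and fans) need uninterruptible power.
chiller_pump_fraction = 0.08        # assumed: pumps + fans relative to IT load
ups_chiller_kw = it_load_kw * (1 + chiller_pump_fraction)

print(f"UPS for the freon (DX) option:  ~{ups_dx_kw:.0f} kW")
print(f"UPS for the chiller option:     ~{ups_chiller_kw:.0f} kW")
# The cheaper DX refrigeration circuit is offset by a larger, more expensive UPS,
# which is the trade-off described by Schneider Electric.
```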

The CommScope solution recommended to the customer by Croc specialists uses direct free cooling with adiabatic cooling. According to Alexander Lasy, technical director of the Smart Buildings Department at Croc, direct free cooling provides significant energy savings almost all year round, and the adiabatic cooling system proposed in the CommScope solution does not require serious water treatment, since humidification is performed with inexpensive replaceable elements whose replacement cost is not comparable to the cost of deep water treatment. For supplementary and backup cooling of the data center, the Croc specialist believes a direct-expansion freon (DX) system is most expedient. With free cooling and adiabatic cooling available, the total annual operating time of the backup/supplementary system is very short, so there is no point in striving for high energy efficiency there.

The LANIT-Integration company proposed an indirect free-cooling system with adiabatic cooling. In such a system, there is no mixing of external and internal air flows, which is especially important when placing a data center in a city due to high air pollution. The chiller system was chosen as an additional one in this project - it will be turned on only in the most unfavorable periods for the main system.

MISSION IMPOSSIBLE?

There was a hidden catch in our task, which most of the contestants did not notice (or rather preferred not to notice). The total IT equipment power specified by the customer (240 kW) and the maximum input power (300 kW) leave only 60 kW for engineering and other support systems. Meeting this condition requires very energy-efficient engineering systems: PUE = 300/240 = 1.25.
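
Restated as a calculation (using nothing beyond the figures in the task itself), the budget looks as follows.

```python
# The power budget implied by the task, restated as a calculation.
it_power_kw = 24 * 5 + 6 * 20     # full build-out: 24 racks at 5 kW + 6 racks at 20 kW = 240 kW
input_power_kw = 300.0            # maximum power available at the site

overhead_budget_kw = input_power_kw - it_power_kw   # what is left for all non-IT loads
required_pue = input_power_kw / it_power_kw

print(f"Budget for engineering systems: {overhead_budget_kw:.0f} kW")
print(f"Required PUE: {required_pue:.2f}")   # 1.25
```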

Theoretically, achieving such an indicator is quite possible with the direct free cooling and adiabatic cooling proposed by Croc and CommScope, and the latter has examples of projects abroad where an even lower PUE has been achieved. But the effectiveness of these cooling technologies depends heavily on the level of pollution and the climatic parameters of the location. The task did not provide sufficient data, so at this stage it cannot be stated unequivocally whether direct free cooling will help to meet the limitation on input power.

In the solution with indirect free cooling proposed by LANIT-Integration, the calculated PUE is 1.25-1.45. Thus, this project does not fit within the given constraints. As for solutions with chiller systems and freon air conditioners, their energy efficiency is even lower, which means the overall data center consumption is higher.

For example, according to Denis Sharapov's calculation, the peak power consumption of the proposed Schneider Electric solution is 429.58 kW - at maximum IT load (240 kW), an ambient temperature above 15°C, at the moment the UPS batteries are charging, and taking into account all consumers (including monitoring systems, the gas fire suppression unit, internal lighting and access control). This is 129 kW more than the supplied power. As a way out, he proposed either excluding the highly loaded (20 kW) racks from the configuration or ensuring a regular fuel supply for continuous operation of a diesel generator set rated at least 150 kW.

Of course, running a data center at 100% load is very unlikely - in practice this almost never happens. Therefore, to determine the total power consumption of the MDC, GreenMDC experts proposed taking a demand factor of 0.7 for the total power of the IT equipment. (According to their data, for corporate data centers this coefficient lies in the range from 0.6 to 0.9.) Under this assumption, the design load of the machine hall will be 168 kW (24 cabinets of 5 kW; 6 cabinets of 20 kW; demand factor 0.7: (24x5 + 6x20) x 0.7 = 168).
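
The same estimate, written out as a small calculation with the demand factor as an explicit parameter:

```python
# GreenMDC's demand-factor estimate, reproduced as a calculation.
racks_5kw, power_5kw = 24, 5.0
racks_20kw, power_20kw = 6, 20.0
demand_factor = 0.7      # GreenMDC's assumption; 0.6-0.9 is typical for corporate DCs

nameplate_kw = racks_5kw * power_5kw + racks_20kw * power_20kw   # 240 kW
design_it_load_kw = nameplate_kw * demand_factor                 # 168 kW

print(f"Nameplate IT load: {nameplate_kw:.0f} kW")
print(f"Design IT load at demand factor {demand_factor}: {design_it_load_kw:.0f} kW")
```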

CORRECTION OF THE TASK

But even taking into account the above remark about partial loading, in order to provide reliable power supply to the data center the customer will apparently have to opt for increasing the power supplied to the site and give up the dream of a data center with a PUE on par with the world's best figures achieved at Facebook- and Google-scale facilities. Besides, an ordinary corporate customer does not need such energy efficiency, especially given the low cost of electricity.

According to Alexander Lasy, since the capacity of the data center is very small, there is little sense in paying increased attention to energy efficiency: the cost of solutions that could significantly reduce electricity consumption may well exceed the savings over the data center's life cycle.

So, in order for the customer not to be left without a data center at all, we remove the limitation on supplied power. What other adjustments are possible? Note that many companies scrupulously fulfilled the customer's wishes regarding the number and capacity of racks and the expansion "quantum" of 5 racks, while Croc and CommScope offered an even smaller "quantum": modules for 4 racks.

However, a number of companies, proceeding from the standard designs in their portfolios, deviated somewhat from the conditions of the task. For example, as Alexander Perevedentsev, chief specialist of the sales support department at Technoserv, notes, "in our experience, what is currently in demand is development (scaling) of data centers in clusters with a step of 15-20 racks and an average power of at least 10 kW per rack." Technoserv therefore offered a solution for 36 racks at 10 kW per rack with an expansion step of 18 racks. Containers for 18 racks also appear in the Huawei project, so in the end the customer would get 6 racks more than requested.

SITE PREPARATION

The site for the installation of the data center should be prepared - at least leveled. For the data center offered to the customer by Croc, it is necessary to have a concrete foundation in the form of a bearing slab, designed for the corresponding load.

In addition, the site must be provided with drainage for rain and melt water. "Since there is sometimes quite a lot of snow in our country, it is advisable to equip the site with easily assembled canopies to avoid breakage and leaks," recommends Alexander Lasy.

Most projects provide for the construction of a concrete foundation on the site. A number of experts noted that its cost is insignificant compared to the total cost of the project, so it is not worth economizing on it.

The NOTA MDC from Utilex stands apart: it does not require a foundation. Such a data center can be installed and launched on any level surface with a slope of no more than 1-2%. Details are given below.

TIME…

As Alexander Lasy notes, the most important and time-consuming process is the design of a data center, since mistakes made at the design stage are extremely difficult to eliminate after the modules are released from production. According to him, it usually takes from 2 to 4 months to prepare and approve the TOR, design, agree and approve the project. If the customer chooses standard solutions, then the process can be reduced to 1-2 months.

Should the customer opt for the CommScope solution, production and delivery of pre-assembled modules and all associated hardware will take 10-12 weeks for starter kits and 6-8 weeks for add-on modules. Assembly of a starter kit for a line of 5 modules on a prepared site - no more than 4-5 days, commissioning - 1-2 weeks. Thus, after the agreement and approval of the project, the conclusion of the contract and the payment of the required advance payment (usually 50–70%), the data center will be ready for operation in 12–14 weeks.

The project implementation period based on the LANIT-Integration solution is about 20 weeks. At the same time, it will take about 4 weeks for design, 8 weeks for production (including cooling systems), 4 weeks for delivery, and another 4 weeks for installation and start-up.

Similar terms are indicated by other companies, and there is no big difference where the assembly facilities are located - abroad or in Russia. Let's say the average turnkey delivery time for the Schneider Electric MDC is 18–20 weeks (without design). The duration of the main stages (related to the installation of the first and subsequent modules and their delivery on a turnkey basis) during the construction of the Utilex NOTA MDC is from 12 weeks. And the duration of the stages associated with the expansion of each module (adding 5 racks and corresponding engineering systems to it) is from 8 weeks.

Obviously, these dates are approximate. Much depends on the geographical location of the customer's site, the quality of work organization and the interaction of all participants in the process (customer, integrator and manufacturer). But in any case, it takes only six months or a little more from the idea to full implementation of the project. This is 2-3 times less than when building a data center using traditional methods (with capital construction or building reconstruction).

…AND MONEY

We did not receive cost information from all the participants. Nevertheless, the data obtained give an idea of the structure and approximate amount of the costs.

Alexander Lasy described to the customer in detail the structure and sequence of costs for the Croc and CommScope solution: 10% goes to design, 40% to launching the starter kit (12 IT racks), 5% to completing the first line (expansion by 4 IT racks), 35% to launching the second-line starter kit (8 IT racks), 5% to its expansion (+4 IT racks) and another 5% to further expansion (+4 IT racks). As you can see, the costs are spread evenly, which is exactly what the customer wanted when considering a modular data center.
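
Laid out as cumulative spend against cumulative rack count (the percentages and rack increments are those quoted above; the total budget is left symbolic), the phasing looks like this:

```python
# The cost phasing described by Croc, as cumulative spend vs cumulative IT racks.
phases = [
    ("Design",                  10, 0),
    ("First-line starter kit",  40, 12),
    ("First-line expansion",     5, 4),
    ("Second-line starter kit", 35, 8),
    ("Second-line expansion",    5, 4),
    ("Further expansion",        5, 4),
]

spent_pct, racks = 0, 0
for name, pct, added_racks in phases:
    spent_pct += pct
    racks += added_racks
    print(f"{name:24s}: +{pct:2d}% (cumulative {spent_pct:3d}%), IT racks: {racks}")
# The shares sum to 100%, and the rack count reaches 32 IT racks at full build-out.
```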

Many customers believe that, after the recent fluctuations in the ruble exchange rate, localization of the proposed solutions is important for reducing and/or fixing the cost. According to Alexander Perevedentsev, Technoserv employees have spent the past six months actively searching for domestic equipment manufacturers, particularly in the field of engineering systems. At the moment, the degree of localization of engineering systems in its data center projects is 30%, and in high-availability solutions, which include the latest version of the IT Crew modular data center, at least 40%. The cost of an IT Crew data center for 36 racks of 10 kW each, with the possibility of further expansion and Tier III fault tolerance, ranges from $65,000 to $100,000 per rack. Converted at a rate of 55 rubles per dollar, this comes to 3.5-5.5 million rubles per rack. But this, we emphasize, is a Tier III solution, while most other companies offered Tier II, as requested by the customer.

While Technoserv did not fail to boast of its 40% localization, the representatives of Utilex modestly kept silent about the fact that in their solution this figure is significantly higher, since all the main engineering equipment, including the air conditioners and UPS, is domestically produced. The cost of the Utilex NOTA MDC (for 24 racks of 5 kW and 6 racks of 20 kW) is $36,400 per rack. At a rate of 55 rubles per dollar, this is about 2 million rubles per rack. The amount includes all delivery costs within the Russian Federation and all installation and launch work, including travel and overhead costs. As Utilex representatives note, the total cost of the MDC can be reduced by placing the entire infrastructure in one large NOTA module (5.4 x 16.8 x 3 m): 10 racks at the first stage, with 5 racks added at each subsequent stage. But in this case capital expenditures at the first stage will have to be increased, and if the customer decides not to scale the MDC, the funds will have been spent inefficiently.

Dmitry Stepanov, director of the Energy Center business area at President-Neva, estimated the cost of one module (container) for 10-12 racks at 15-20 million rubles. This amount includes all engineering systems in the module, including air conditioners and UPS, but not the DGU. According to him, choosing cheaper engineering components instead of UPSs and air conditioners from Emerson Network Power can bring the cost down to 11-12 million rubles.

The cost of the GreenMDC project at the first stage is approximately 790 thousand euros (site preparation: 130 thousand euros; the first stage of the MDC: 660 thousand euros). The total cost of the entire project for 32 racks, including the DGU, is 1.02 million euros. At the exchange rate at the time of writing (59 rubles per euro), this works out to 1.88 million rubles per rack, one of the most attractive options.

In general, for a solution based on freon air conditioners, the level of 2 million rubles per rack can serve as a reference point for the customer. The option based on a chiller system is about 1.5 times more expensive. Moreover, the cost of "Russian" and "imported" solutions is unlikely to differ much, since even domestic air conditioners (Utilex) use imported components. Unfortunately, modern compressors are not produced in Russia, and nothing can be done about that.
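
For convenience, the per-rack figures quoted by the participants can be brought to a common denominator at the exchange rates mentioned above (55 RUB/USD, 59 RUB/EUR). The grouping below is ours, and the quotes are not directly comparable: they cover different Tier levels and scopes of supply.

```python
# Per-rack cost estimates from the proposals, converted to rubles at the quoted rates.
usd, eur = 55.0, 59.0

quotes = {
    "Technoserv IT Crew (36 racks, Tier III)":        (65_000 * usd,      100_000 * usd),
    "Utilex NOTA (30 racks, turnkey)":                (36_400 * usd,      36_400 * usd),
    "President-Neva (per 10-12 rack module, no DGU)": (15e6 / 12,         20e6 / 10),
    "GreenMDC (32 racks, incl. DGU)":                 (1.02e6 * eur / 32, 1.02e6 * eur / 32),
}

for name, (low, high) in quotes.items():
    print(f"{name:48s}: {low/1e6:.1f}-{high/1e6:.1f} million RUB per rack")
```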

FEATURES OF THE PROPOSED SOLUTIONS

"GrandMotors"

The main (IT) module, the "Machine Hall", is based on an all-metal container block and consists of an equipment compartment and a vestibule. To provide the specified growth "quantum", the equipment compartment of each IT module houses 4 racks of up to 5 kW (cooled by in-row air conditioners) and 1 rack of up to 20 kW (cooled by a similar air conditioner attached directly to it) (see Fig. 3). The UPS and batteries are located in the same compartment. As air conditioners, the customer is offered a choice of Emerson CRV or RC Group COOLSIDE EVO CW units, and as UPSs, GMUPS Action Multi or ABB Newave UPScale DPA series devices.

The vestibule houses electrical panels and equipment for the gas fire extinguishing system. If necessary, it is possible to organize a workplace for a dispatcher or duty personnel in the vestibule.

As Daniil Kulakov, technical director of GrandMotors, notes, the use of in-row air conditioners is dictated by the presence of racks of different power ratings. With racks of equal capacity, outdoor air conditioners (for example, Stulz Wall-Air) can be used to fill the machine-room area more efficiently; this solution also reduces the energy consumption of the cooling system by using free-cooling mode.

At the first stage, it is enough to install two "Machine Hall" modules and one "Power Module" on site; the latter houses a 440 kVA GMGen Power Systems DGU from SDMO Industries. Subsequently, the available computing capacity of the data center can be increased with a granularity of one "Machine Hall" module (5 racks), while the installed DGU capacity is designed for 6 such modules. The monitoring and control system provides access to each piece of equipment and is implemented at the top level on the basis of a SCADA system.

All the engineering solutions envisaged provide N+1 redundancy (with the exception of the DGU; if necessary, a backup generator can be installed). The proposed architecture of the engineering systems makes it possible to increase the level of redundancy already during the creation of the data center.

Croc/CommScope

CommScope, a leading global provider of cabling infrastructure solutions, is a relative newcomer to the modular data center market, which has allowed it to address many of the shortcomings of first-generation products. It proposes building data centers from prefabricated high-availability modules and calls its solution Data Center On Demand (DCoD). These solutions have already been installed and are successfully used in the USA, Finland, South Africa and other countries. In Russia, three DCoD projects are currently being developed with Croc's participation.

DCoD base modules are designed for 1, 4, 10, 20 or 30 racks of IT equipment. Alexander Lasy, technical director of the Croc Smart Buildings Department, who presented the project to our customer, suggested assembling the solution from standard DCU-4 modules designed for 4 racks (non-standard modules can also be ordered, but this increases the project cost by about 20%). Such a module includes the cooling systems (with control automation), primary distribution and power switching.

The starting block will consist of 4 standard modules (see Fig. 4). Its total capacity is 16 racks, leaving 6 spare rack spaces to accommodate a UPS of the required power with batteries, as well as security and monitoring equipment. The final configuration of the machine room (line) will consist of 5 modules of 4 racks, that is, 20 racks in total, 2 of which can carry IT equipment with a capacity of up to 20 kW. One module (4 racks) will be allocated to the uninterruptible power supply system and can be separated from the main machine room by a partition to restrict access by maintenance personnel.

The second line is identical in design to the first, but its starter kit consists not of 4 but of 3 four-rack modules. Space for the UPS, security and monitoring systems will be allocated in it from the outset. However, as Alexander Lasy notes, the initial configuration plan may be adjusted as operating experience with the first line accumulates.

The IT equipment power of the first line in the initial installation (12 racks) will be about 60 kW, and after expansion (16 racks) about 90 kW, including the providers' transmission equipment and the data center's core switches. To keep the data center operable during an external power failure, the fans and controllers of the cooling system must also receive uninterruptible power, which adds about another 10 kW. In total, the UPS capacity will exceed 100 kW. A modular UPS for such a load with an autonomy time of at least 10 minutes, together with a power switching cabinet, will occupy approximately 4 rack spaces.
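
A minimal sketch of this sizing logic, using only the figures given above (the 10-minute autonomy is the requirement from the task; inverter losses and ageing margins are ignored):

```python
# UPS sizing for the first line of the Croc/CommScope proposal, per the figures above.
it_load_expanded_kw = 90.0   # 16 racks, incl. provider equipment and core switches
cooling_support_kw = 10.0    # fans and controllers of the cooling system on the UPS
autonomy_min = 10            # ride-through requirement from the task

ups_load_kw = it_load_expanded_kw + cooling_support_kw        # ~100 kW
battery_energy_kwh = ups_load_kw * autonomy_min / 60          # stored energy, losses ignored

print(f"UPS design load: ~{ups_load_kw:.0f} kW")
print(f"Battery energy for {autonomy_min} min autonomy: ~{battery_energy_kwh:.1f} kWh (plus margins)")
```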

As mentioned above, the highlight of the proposed solution is the use of direct free cooling with adiabatic humidification. A freon (DX) system is proposed for supplementary and backup cooling.

"LANIT-Integration"

The customer was offered a solution based on the SME E-Module modular physical protection room (see Fig. 5): protected rooms of any geometry, with an area from 15 to 1,000 sq. m, can be quickly erected for a data center from standard components. For our task, the room area is at least 75 sq. m. The design features include a high level of physical protection against external influences (thanks to a steel beam-and-column supporting frame and reinforced structural panels), fire resistance (60 min according to EN1047-2) and dust and moisture protection (IP65). Good thermal insulation helps optimize the cost of cooling and heating.

The SME FAC (Fresh Air Cooling) indirect free-cooling system with adiabatic humidification was chosen for cooling the IT equipment. In such a system, the external and internal air flows do not mix. SME FAC units capable of removing up to 50 kW of heat each (enough for 10 racks of 5 kW) were selected for the project. They will be added as needed, as the number of racks grows and as the load of individual racks increases to 20 kW.

Around the E-Module protected room, a zone at least 3 m wide along the perimeter must be provided for the installation and maintenance of the SME FAC cooling units. In addition, the project includes a chiller system, but it will be activated only during the periods least favorable for the main system. According to LANIT-Integration, the data center will be able to operate without switching on the chillers up to 93% of the year (at outside air temperatures of up to +22°C).
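
The 93% figure depends on the site's climate profile; the minimal sketch below shows how such a share could be estimated from an hourly outdoor-temperature series. The data here is synthetic and purely illustrative, not LANIT-Integration's calculation.

```python
# Estimate the share of hours in which free cooling alone suffices, given an hourly
# outdoor-temperature series for the site (hourly_temps_c is a hypothetical input).
def free_cooling_fraction(hourly_temps_c, threshold_c=22.0):
    """Fraction of hours with outdoor temperature at or below the free-cooling threshold."""
    return sum(1 for t in hourly_temps_c if t <= threshold_c) / len(hourly_temps_c)

# Synthetic 8760-hour "year" just to show the call; real climate data would be used in practice.
sample_year = [15.0] * 8150 + [25.0] * 610
print(f"Estimated chiller-free share: {free_cooling_fraction(sample_year):.0%}")
```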

Cold air will be distributed under the raised floor. If necessary, the MDC can be equipped with the SME Eficube cold-aisle containment system.

The uninterruptible power supply system is based on monoblock Galaxy 5500 UPSs from Schneider Electric. At the first stage, two 80 kVA Galaxy 5500 units will be installed; later, the required number of UPSs will be added to the parallel system. The parallel system can hold up to 6 such units and is accordingly able to support a load of 400 kVA in N+1 mode.
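
The N+1 figure follows directly from the unit rating and the parallel-system limit; a short check, for illustration only:

```python
# N+1 capacity check for the Galaxy 5500 parallel system described above.
UNIT_KVA = 80      # rating of one Galaxy 5500 unit
MAX_UNITS = 6      # maximum units in the parallel system

print(f"Usable N+1 capacity: {(MAX_UNITS - 1) * UNIT_KVA} kVA")  # 400 kVA (one unit in reserve)
```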

The environmental monitoring and security solution is also built on a Schneider Electric product, the InfraStruXure Central system.

The LANIT-Integration offer looks like one of the most solid, especially in terms of the physical security of the premises and the composition of the cooling system. However, a number of points - for example, the proposal to deploy a complete chiller system that will be used for only a small part of the year - suggest that this project will be expensive. Unfortunately, no cost information was provided to us.

"President-Neva"

The company offered the customer a solution based on a Sever-class block container. Thanks to this design, the solution can operate at temperatures down to -50°C, which is confirmed by the positive experience of operating three Gazpromneft-Khantos MDCs at the Priobskoye field in the Far North (in total, President-Neva has built 22 MDCs).

The container has transverse partitions dividing it into three rooms: a vestibule and two rooms for cabinets with IT equipment, plus a partition between the cold and hot aisles inside the IT rooms (see Fig. 6). One room will house 8 racks of 5 kW each, the other 2 high-load racks of 18 kW each. Roller doors are installed between the rooms; when they are opened, single cold and hot aisles are formed in the equipment compartment.

The engineering "stuffing" of the data center is Emerson Network Power equipment. Ceiling-mounted Liebert HPSC14 free-cooling units (14.6 kW of cooling capacity each) cool the room with the 5 kW racks; four such air conditioners are installed there. Another Liebert HPSC14 is installed in the room with the high-load racks and the UPS (UPS units and batteries), but it serves only as a supplementary unit: the main load is removed by the more powerful Liebert CRV CR035RA in-row air conditioners. Thanks to the emergency free-cooling function, the Liebert HPSC14 can provide cooling during the short period when mains power is lost and the genset is starting up.
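
A quick heat-balance check for the 5 kW room is straightforward; this is illustrative only, since the text does not state the intended redundancy scheme, so the "one unit down" line is just a sanity check.

```python
# Heat-balance check for the room with 5 kW racks, using the figures quoted above.
RACKS, RACK_LOAD_KW = 8, 5.0
UNITS, UNIT_CAPACITY_KW = 4, 14.6   # Liebert HPSC14 ceiling units

heat_load = RACKS * RACK_LOAD_KW                       # 40.0 kW to remove
capacity_all = UNITS * UNIT_CAPACITY_KW                # 58.4 kW with all units running
capacity_one_down = (UNITS - 1) * UNIT_CAPACITY_KW     # 43.8 kW with one unit out of service
print(heat_load, capacity_all, capacity_one_down)
```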

The uninterruptible power supply system consists of a modular Emerson APM series UPS (N+1 redundancy) and a battery pack. An external UPS bypass is provided in the input distribution board located in the vestibule. An FG Wilson unit rated at 275 kVA is proposed as the DGU.

The customer can start with a solution for 5 kW racks and then, when needed, upgrade the container with the CRV air conditioners and UPS modules required to run the high-load racks. When the first container is filled, a second one will be installed, then a third.

Although this proposal is based on Emerson Network Power engineering systems, as Dmitry Stepanov emphasizes, President-Neva also builds MDCs using equipment from other manufacturers - in particular Eaton, Legrand and Schneider Electric. The customer's corporate standards for engineering subsystems, as well as all of the equipment manufacturer's technical requirements, are taken into account.

"Technoserv"

The "IT Crew" modular data center is a complete solution based on prefabricated structures with engineering systems pre-installed at the factory. One data center module includes a server block for installing IT equipment and an engineering block that ensures its uninterrupted operation. The blocks can be installed side by side at the same level (horizontal topology) or on top of each other (vertical topology): the engineering block at the bottom, the server block on top. For our customer, the vertical topology was chosen.

Two implementation options of the "IT Crew" MDC were presented for the customer's consideration: the "Standard" and "Optimum" configurations. Their main difference is the air conditioning system: the "Standard" version uses precision air conditioners with chilled water, while the "Optimum" version uses a freon-based system.

The server block is the same in both options. Up to 18 server racks for active equipment can be installed in one block. It is proposed to form the computer room from 2 server blocks. The server blocks and a corridor block are located on the second level and are combined into a single technological space (computer room) by removing the partitions (see Fig. 7).

In the "Standard" MDC, water-cooled air conditioners are installed in the space under the raised floor of the server block. In the "Optimum" MDC, the IT equipment is cooled by precision air conditioners located in the engineering block. The air conditioning systems are redundant according to the 2N scheme.

The engineering block is divided into 3 compartments: for the UPS, for the refrigeration equipment, and (optionally) for a dry cooler. The UPS compartment contains a cabinet with the UPS, batteries and switchboard equipment. The UPS has N+1 redundancy.

Two chillers, a hydraulic module and storage tanks are installed in the refrigeration compartment of the "Standard" MDC. The cooling system achieves 2N redundancy within one data center module and 3N across two adjacent modules by interconnecting their cooling circuits. A free-cooling function is implemented in the refrigeration system, which significantly reduces operating costs and extends the service life of the refrigeration machines.

The high (40 percent) level of localization of the "IT Crew" MDC engineering systems has already been mentioned above. However, Technoserv representatives did not disclose whose air conditioners and UPSs they use. Apparently, the customer will be able to learn this in the course of a more detailed study of the project.

Utilex

The NOTA modular data center is based on a prefabricated, easily dismantled frame structure that is transported in a standard ISO container. It is built around a metal frame assembled on site from prefabricated elements. The base consists of sealed load-bearing panels mounted on a flat platform of Penoplex thermal insulation boards laid in 3 layers in a staggered pattern; no foundation is required. The walls and roof of the MDC are made of sandwich panels. A raised floor is installed for mounting the equipment.

The data center project is proposed to be implemented in 5 stages. At the first stage, the MDC infrastructure is created for 10 racks with an average consumption of 5 kW per rack and the possibility of increasing the consumption of 2 racks up to 20 kW each (see Fig. 8). The uninterruptible power supply is based on a modular UPS manufactured by Svyaz Engineering and batteries providing 15 minutes of autonomous operation (taking into account the consumption of process equipment: air conditioners, lighting, access control systems, etc.). If a 20 kW IT load in 2 racks needs to be supported, power modules and additional batteries are added to the UPS.
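
For a rough sense of what 15 minutes of autonomy implies, here is a minimal sketch; the process-equipment figure is an assumption for illustration, since the text does not give exact loads.

```python
# Rough battery-energy estimate for the stated 15-minute autonomy (idealized, no losses).
IT_LOAD_KW = 10 * 5.0        # first stage: 10 racks at 5 kW each
PROCESS_LOAD_KW = 20.0       # air conditioners, lighting, ACS, etc. -- assumed value
AUTONOMY_MIN = 15

total_kw = IT_LOAD_KW + PROCESS_LOAD_KW
print(f"Battery energy required: {total_kw * AUTONOMY_MIN / 60:.1f} kWh")  # 17.5 kWh
```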

The air conditioning system is based on 4 precision in-row Clever Breeze air conditioners manufactured by Utilex, with a cooling capacity of 30 kW each. The external units are located on the end wall of the MDC. The controllers built into the Clever Breeze air conditioners serve as the basis for a remote monitoring and control system, which monitors the temperature of the cold and hot aisles, the UPS status and the room humidity; analyzes the condition and cooling capacity of the air conditioners; and performs the functions of an access control and management system.

At the second stage, a second MDC module for 5 racks with a load of 5 kW each is installed (in this and subsequent expansion "quanta", the load per rack can be increased up to 20 kW). The module area also has a reserve for 5 more racks (see Fig. 9), which will be installed at the third stage. The fourth and fifth stages of data center development are similar to the second and third. At all stages, the engineering infrastructure elements discussed above (UPS, air conditioners) are reused and scaled up as needed. The DGU is installed at once for the entire planned capacity of the data center: Utilex specialists chose the WattStream WS560-DM unit (560 kVA / 448 kW), housed in its own climate-controlled container.

According to Utilex specialists, the NOTA MDC infrastructure allows the module area to be increased as the number of racks grows by adding the required number of load-bearing panels to the base. This makes it possible to spread the capital costs of creating the modules evenly across the stages. In this option, however, building a smaller module (5.4 x 4.2 x 3 m) is not economically justified, since the small reduction in capital costs for the module would be "compensated" by having to invest in the fire extinguishing system twice (at the second and third stages).

An important advantage of the Utilex NOTA data center from the standpoint of import substitution is that most of its subsystems are produced in Russia. The prefabricated NOTA enclosure, the racks, the air conditioning systems and the remote monitoring and control systems are produced by Utilex itself; the uninterruptible power system by Svyaz Engineering; and the automatic gas fire extinguishing system by Pozhtekhnika. The computing and network infrastructure of the data center can also be built on Russian-made solutions from ETegro Technologies.

GreenMDC

The customer was offered a Telecom Outdoor NG modular data center consisting of Telecom modules (which accommodate the racks with IT equipment), a Cooling module (cooling systems) and an Energy module (UPS, electrical panels). All modules are manufactured at the GreenMDC factory, and before shipment to the customer the MDC is assembled at a test site, where full testing is carried out. The MDC is then disassembled and the modules are transported to their destination. The individual modules are metal structures with the walls already installed, prepared for transportation (see Fig. 10).

Heat is removed from the equipment by HiRef precision cabinet air conditioners with cold-air supply under the raised floor. The uninterruptible power system is based on modular Delta UPSs. The electrical distribution network uses redundant busbars: the racks are connected by installing tap-off boxes with circuit breakers on the busbars.

The first stage of the project includes site preparation, installation of a diesel generator set and of 3 data center modules (see Fig. 10): a Telecom module for 16 racks (10 racks of 5 kW each, with 6 rack spaces left unfilled), a Cooling module with 2 air conditioners of 60 kW each, and an Energy module with 1 modular UPS. The UPS consists of a 200 kVA frame and 3 modules of 25 kW each (for the Delta DPH 200 UPS, the kilowatt and kilovolt-ampere ratings are identical). To ensure uninterrupted operation of the engineering systems, a separate modular UPS is installed: an 80 kVA frame with 2 modules of 20 kW each.
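
A plausible reading of the stage-one module count is an N+1 arrangement for the 50 kW IT load; this is an assumption for illustration, since the text only lists the configuration.

```python
# Module-count check for the Delta DPH UPS at stage one (the N+1 intent is an assumption).
import math

IT_LOAD_KW = 10 * 5.0   # 10 racks at 5 kW each
MODULE_KW = 25.0        # one DPH power module (kW == kVA for this UPS, per the text)

needed = math.ceil(IT_LOAD_KW / MODULE_KW)            # 2 modules to carry the load
print(f"Modules incl. one redundant: {needed + 1}")   # 3 -- matches the supplied configuration
```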

At the second stage, expansion takes place within the existing MDC modules: 6 more racks are added to the Telecom module. Electrical capacity is increased by adding power modules to the modular UPS, and cooling capacity by increasing the number of air conditioners. All the utilities for the air conditioners (freon lines, electrical wiring) are laid at the first stage. If a high-consumption rack needs to be installed, the same power-increase approach is applied; in addition, cold-aisle containment is used and, if necessary, active raised-floor tiles are installed.

At the third stage, a second Telecom module is installed, allowing up to 32 racks to be placed in the MDC. The capacity of the engineering systems is increased in the same way as at the second stage, up to the maximum design capacity of the MDC. The expansion work is performed by the supplier's employees without interrupting the operation of the installed modules. The lead time for production and installation of an additional Telecom module is 10 weeks; installation itself takes no more than one week.

Huawei

The Huawei IDS1000 mobile data center consists of several standard 40-foot containers: one or more containers for IT equipment, a UPS container and a container for the cooling system. If necessary, a container for duty personnel and a storage container are also supplied.

For our project, an IT container for 18 racks (6 rows of 3 racks each) was chosen - see Fig. 11. At the first stage, it is proposed to install 10 IT racks, 7 in-row air conditioners, power distribution cabinets, gas fire extinguishing cylinders and other auxiliary equipment. At the second and subsequent stages, 8 racks and 5 in-row air conditioners are added to the first container, a second container is installed, and the required number of racks with air conditioners is placed in it.

The maximum power of one IT container is 180 or 270 kW, which means that up to 10 or 15 kW, respectively, can be removed from each rack. Huawei's proposal does not specifically address the placement of 20 kW racks, but since the allowable power per rack is higher than the 5 kW requested by the customer, the load can be redistributed.
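
The per-rack figures follow from simple division of the container rating by the number of racks; a short illustrative check:

```python
# Per-rack power implied by the IT-container ratings quoted above.
RACKS_PER_CONTAINER = 18
for container_kw in (180, 270):
    print(f"{container_kw} kW container -> {container_kw / RACKS_PER_CONTAINER:.0f} kW per rack")
```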

The in-row air conditioners are fed with chilled water from a chiller system mounted on the frame of a separate container. The uninterruptible power supply (UPS units and batteries) is also housed in a separate container. The UPS can be configured in 40 kW increments (up to a maximum of 400 kVA) depending on the calculated load.

The solution includes Huawei's new-generation NetEco management system, which monitors the operation of all major systems: power supply (DGU, UPS, PDUs, batteries, switchboards and ATS), cooling (air conditioners) and security (access control, video surveillance). In addition, it can monitor environmental conditions using various sensors: smoke, temperature, humidity and water leakage.

The main feature and advantage of Huawei's offer is that all the main components, including air conditioners and UPS, are from one manufacturer - Huawei itself.

Schneider Electric

The proposed MDC is based on standard NON-ISO25 all-weather modules with high fire resistance (EI90 according to EN 1047-2). Access is controlled by an access control system with a biometric sensor. The modules are delivered to the installation site separately and then joined together to form a single structure.

The first stage is the most voluminous and costly: the base is leveled and reinforced (a screed is poured), and two chillers (Uniflair ERAF 1022A units with a free-cooling function), a diesel generator set, a power module, one IT module (10 racks) and all the pipework are installed. The power module and the IT module form three rooms: the power center, the machine room and the entrance airlock/lobby (see Fig. 12).

The power module room houses the APC Symmetra 250/500 UPS (with a set of batteries and a bypass), switchboard equipment, a gas fire extinguishing system, main and emergency lighting, and a cooling system of 2 (N+1) SE AST HCX CW 20 kW air conditioners. The UPS powers the IT equipment, the air conditioners and the pumps. The first-stage IT module comes with 10 pre-installed racks (AR3100), PDUs, a busbar, a hot-aisle containment system and a cooling system based on SE AST HCX CW 40 kW units (2+1). These air conditioners are specially designed for modular data centers: they are mounted above the racks and, unlike, say, in-row air conditioners, do not take up useful space in the IT module.

At the second stage, a second IT module is docked to the first; it is supplied with 5 racks and is otherwise fitted out in the same way as the first (only at this stage just 2 (1+1) of its 3 air conditioners are activated). In addition, another chiller is installed, and the modular UPS is fitted with the necessary set of batteries and inverters. The third stage of data center development comes down to adding 5 racks to the second IT module and activating the third air conditioner. The fourth and fifth stages are similar to the second and third.

The StruxureWare DCIM suite is proposed as the management system.

As a result, the customer will receive a solution that fully meets his requirements from one of the world leaders in the field of modular data centers. At this stage, Schneider Electric has not provided information on the cost of the solution.

CONCLUSION

The main design features of the proposed data centers, as well as their cooling and power supply systems, have been outlined above. These systems are presented in more detail in the project descriptions available on the website, which also provide information on other systems, including gas fire extinguishing equipment, the access control and management system (ACS), the integrated management system, etc.

The customer received 9 projects that largely satisfy his requirements. The descriptions and characteristics presented convinced him of the main thing: modern modular solutions make it possible to obtain a fully functional data center, with a project implementation time several times shorter than with the traditional approach (capital construction or reconstruction of a building), and with funds invested gradually, as needs grow and the data center is scaled accordingly. The latter point is extremely important for him in the current conditions.

Alexander Barskov is a leading editor of the Journal of Network Solutions / LAN. He can be contacted at:

The 4x4 Group of Companies has been present on the container data center market for more than 10 years. The company's portfolio of implemented projects includes container data centers of its own production.

In 2016, the company developed the second generation of its own solution, called Mtech (Modular Technology).

These modular data centers of the company's own design are intended for operation in any region of the Russian Federation, contain about 50% Russian components, and were developed taking into account many years of experience in this field and the feedback of the company's customers.

Features and advantages of the 4x4 MTech container and modular data centers

Designed in Russia for Russian conditions

The design of the container module was developed by a team of professional architects and is intended to accommodate engineering and IT equipment with maximum protection against external conditions (wind and snow loads, outside temperature) in any region of the Russian Federation;

Wide vestibule

Allows a staff wardrobe to be placed (included in the delivery) and an automatic shoe-cover dispenser to be installed to maintain constant cleanliness in the MDC machine room, and also makes it possible to bring equipment into and out of the machine room;

Additional pitched roof

Helps prevent the accumulation of snow in winter and of water when the snow melts;

Enough space for comfortable service inside

The internal width of the container module is 3000 mm, which allows server racks 1200 mm deep to be installed in a fixed position while leaving a service space of at least 1200 mm in front of them. This approach allows the customer to be offered a solution whose capacity can be increased during operation, reducing capital costs at the first stage. It also ensures easy maintenance and installation of IT equipment in the server racks.

Hot aisle containment

In solutions with an IT rack load of 8 kW or more, a hot-aisle containment system is installed so that the air conditioning system operates as efficiently as possible.

Factory acceptance test

Factory acceptance tests of the solution before shipment to the customer's site, as well as tests at the customer's site according to a test procedure agreed with the customer, ensure guaranteed, trouble-free operation of the solution;

Reliability

The use of the most reliable equipment on the market for the main engineering systems lets the customer sleep soundly for at least the next 10 years (the average service life of data center engineering equipment);


Disadvantages of ISO container

Initially, manufacturers offered container data centers in standard ISO containers, but this solution has a number of disadvantages. First of all, there is the width of a standard ISO container: 2.44 m.

When racks are placed in a standard ISO container, 10 cm is taken up by the container insulation, 1 m by the rack depth and 1 m in front of the rack for equipment installation. That leaves only 34 cm behind the rack. A person cannot squeeze into this space, so switching operations on the middle racks become difficult. Manufacturers therefore either made roll-out racks (which have to be de-energized to be moved) or put service doors in the back of the container (which cannot be used in winter).
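
The width budget is easy to verify with the figures from the paragraph above; the arithmetic gives 340 mm, i.e. the 34 cm quoted.

```python
# Width budget of a standard ISO container, in millimetres (figures from the text above).
CONTAINER_WIDTH = 2440
INSULATION = 100       # container insulation
RACK_DEPTH = 1000
FRONT_AISLE = 1000     # space in front of the rack for equipment installation

print(f"Clearance behind the rack: {CONTAINER_WIDTH - INSULATION - RACK_DEPTH - FRONT_AISLE} mm")
```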

Standard ISO modules also have a flat and rather weak roof that cannot withstand Russian snow loads. When an attempt is made to clear the snow, a roof without additional reinforcement will sag and deform; if the snow is not cleared, the roof is very likely to leak in spring.

Given these shortcomings, 4x4 offers solutions in non-ISO modules of non-standard size, as a rule 3.3 m wide and high, which makes it possible to provide a proper hot aisle. The non-standard container has ISO fittings for crane lifting. Transporting a non-standard container requires a permit for oversized cargo, but today this presents no difficulties. A good container data center is no different to maintain than an ordinary server room. It is highly desirable to have an entrance vestibule, which smooths out the climate difference at the entrance to the container and also provides a place to store things needed in the data center (shoe covers, a vacuum cleaner, a suction lifter for raised-floor tiles, etc.).

A modular data center differs from a container data center in that it consists of separate modules joined together. We have installed modular solutions in Russia and know that modern technologies make it possible to ensure the complete tightness of the structure. A modular data center can, in theory, be of any size: the more modules in one assembly, the lower the cost of the data center per rack.

Comparing the cost of traditional and modular data centers, a modular data center becomes more cost-effective already at 20 racks or more. But the main advantage of container and modular data centers is the speed of deployment: as a rule, the first modules are up and running 4 months after the order is placed, whereas a traditional data center in a building, by our estimates, takes about 2 years to build.