
Operational data center: “mobile” or “modular”? Modular data centers: an overview of solutions. Factory acceptance tests

The 4x4 Group of Companies has been active in the container data center market for more than 10 years. The company's portfolio of completed projects includes container data centers of its own production.

In 2016, the company developed the second generation of its own solution, named MTech (Modular Technology).

These modular data centers of our own design are intended for operation in any region of the Russian Federation, contain about 50% Russian components, and were developed on the basis of many years of experience in this field and the feedback of our valued customers.

Features and benefits of 4x4 MTech container and modular data centers

Developed in Russia for Russian conditions

The container module was designed by a team of professional architects and is intended to house engineering and IT equipment while providing maximum protection from external conditions (wind and snow loads, outdoor temperature) in any region of the Russian Federation;

Wide vestibule

Provides room for a personnel wardrobe (included in the delivery) and an automatic shoe-cover dispenser, which keeps the machine room of the MDC constantly clean, and also makes it possible to bring equipment into, or remove it from, the machine room;

Additional pitched roof

Helps prevent snow from accumulating in winter and water from pooling when the snow cover melts;

Sufficient space for comfortable servicing inside

The internal width of the container module is 3000 mm, which makes it possible to install server racks 1200 mm deep in a fixed position while leaving a service aisle of at least 1200 mm in front of them. This approach lets us offer the customer a solution whose capacity can be increased during operation, reducing capital costs at the first stage, and it also simplifies maintenance and the installation of IT equipment in the server racks.

Hot aisle containment

In solutions with an IT load of 8 kW per rack, a hot aisle containment system is planned so that the air conditioning system operates with maximum efficiency.

Factory acceptance tests

Factory acceptance tests of the solution before shipment to the customer's site, as well as tests at the customer's site according to a test program and methodology agreed with the customer, allow us to achieve guaranteed, trouble-free operation of the solution;

Reliability

The use of the most reliable equipment on the market for the core engineering systems lets the customer rest easy for at least the next 10 years (the average service life of data center engineering equipment);


Disadvantages of ISO containers

Initially, manufacturers offered data processing centers in standard ISO containers, but this solution has a number of disadvantages. The main one is the width of a standard ISO container: 2.44 m.

When racks are placed in a standard ISO container, about 10 cm goes to the container insulation, 1 m to the depth of the rack, and 1 m to the aisle in front of the rack for installing equipment. This leaves only 34 cm behind the rack. A person cannot squeeze into this space, so switching operations on the racks in the middle will be difficult. Manufacturers therefore offered either roll-out racks (which must be de-energized when moved) or service doors in the rear of the container (impossible to use in winter).
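
The depth budget above is simple arithmetic; the sketch below (Python, a minimal illustration using only the dimensions quoted in this article: the 2.44 m ISO width and 10 cm of insulation here, plus the 3000 mm internal width and 1200 mm racks of the 4x4 module described earlier) makes the comparison explicit. The 600 mm rear clearance for the wide module is a derived figure, not one stated in the text.

```python
def rear_clearance_mm(internal_width_mm, rack_depth_mm, front_aisle_mm):
    """Space left behind a rack after subtracting its depth and the front aisle."""
    return internal_width_mm - rack_depth_mm - front_aisle_mm

# Standard ISO container: 2440 mm wide, minus ~100 mm of insulation.
print(rear_clearance_mm(2440 - 100, rack_depth_mm=1000, front_aisle_mm=1000))  # 340 mm

# Non-ISO module (4x4 MTech): 3000 mm internal width, 1200 mm racks, 1200 mm front aisle.
print(rear_clearance_mm(3000, rack_depth_mm=1200, front_aisle_mm=1200))        # 600 mm
```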

Also, standard ISO modules have a flat and rather weak roof that cannot withstand Russian snow loads. When snow is cleared, a roof without additional reinforcement will sag and deform; if the snow is not cleared, the roof is very likely to leak in spring.

Considering these shortcomings, 4x4 offers solutions in non-ISO modules of non-standard size, usually 3.3 m in width and height, which allows for a normal hot aisle. The non-standard container is fitted with ISO fittings for lifting by crane. Transporting a non-standard container requires a permit for oversized cargo, but this no longer presents any difficulties. In terms of maintenance, a good container data center is no different from a regular server room. An entrance vestibule is highly desirable: it helps smooth out climate changes when entering the container and provides a place for the things needed in the data center (shoe covers, a vacuum cleaner, a suction lifter for raised floor tiles, etc.).

A modular data center differs from a container data center in that it consists of separate modules joined to each other. We have installed modular solutions in Russia and know that modern technologies make it possible to ensure a completely sealed structure. A modular data center can, in theory, be of any size, and the more modules in one assembly, the lower the cost per rack.

If we compare the cost of traditional and modular data centers, the modular data center turns out to be more cost-effective even at 20 racks or more. But the main advantage of container and modular data centers is the speed of deployment: as a rule, the first modules are up and running within 4 months of ordering. A traditional in-building data center, by our estimate, takes about 2 years to build.

As always, we tried to make the model problem as close as possible to real projects. Our fictitious customer had planned to build a new corporate data center in 2015 using the traditional approach (starting with capital construction or reconstruction of a building). However, the worsening economic situation forced it to reconsider: large-scale construction was abandoned, and, to address current IT needs, it was decided to organize a small modular data center in the yard of its own premises.

Initially, the data center must accommodate 10 racks of IT equipment with a power of up to 5 kW each. At the same time, the solution must be scalable in two respects: in the number of racks (in the future, given a favorable economic situation and the development of the customer's business, the data center should accommodate up to 30 racks) and in the power of an individual rack (20% of all racks should support equipment with a power of up to 20 kW per rack). The preferred “quantum” of expansion is 5 racks. For more details, see the “Task” sidebar.

TASK

The fictitious customer had planned to build a new corporate data center in 2015 using the traditional approach (starting with a major reconstruction of the building). However, the worsening economic situation forced it to reconsider: large-scale construction was abandoned, and, to address current IT needs, it was decided to organize a small modular data center.

Task. The customer intends to deploy a small but highly scalable data center on its industrial premises. Initially, 10 racks of IT equipment must be installed, each with a power of up to 5 kW. The solution must be scalable in two respects:

  • by the number of racks: in the future, given a favorable economic situation and the development of the customer’s business, the data center will have to accommodate up to 30 racks;
  • by the power of an individual rack: 20% of all racks should support equipment with a power of up to 20 kW per rack.

It is advisable to invest in data center development as needed. The preferred “quantum” of expansion is 5 racks.

Features of the site. The site is provided with electricity (maximum supplied power: 300 kW) and communication channels. The site is large enough to accommodate any external equipment, such as diesel generator sets, chillers, etc. It is located within the city limits, which imposes additional restrictions on equipment noise levels. In addition, locating a data center in a city implies a high level of air pollution.

Engineering systems. Fault tolerance level - Tier II.

Cooling. The customer did not specify a cooling technology (freon, chiller, adiabatic, free cooling); the choice is left to the designers. The main requirement is efficient heat removal at the rack powers specified in the task. Minimizing energy consumption is encouraged: the limit on the supplied power must be met.

Uninterruptible power supply. The specific technology is likewise not specified. In the event of a power failure, autonomous operation must last at least 10 minutes on batteries, with a transition to operation from the diesel generator set. The fuel reserve is at the discretion of the designers.

Other engineering systems. The project should include the following systems:

  • automatic gas fire extinguishing installation (AUGPT);
  • access control and management system (ACS);
  • structured cabling system (SCS);
  • raised floor and other systems are at the discretion of the designers.

The data center must have a comprehensive management system providing monitoring and control of all major systems: mechanical, electrical, fire, security, etc.

Additionally:

  • the customer asks to indicate the deadline for the implementation of the data center and the specifics of equipment delivery (access roads, special mechanisms for unloading/installation);
  • If the designer considers it necessary, IT equipment for the data center can be immediately offered.

We asked leading suppliers of modular data centers to develop projects for this customer and received 9 detailed solutions in response. Before examining them, note that the Journal of Network Solutions/LAN has been tracking trends and attempting to categorize modular data centers for many years; these issues are addressed, for example, in the author's articles “” (LAN, No. 07–08, 2011) and “” (LAN, No. 07, 2014). In this article we will not dwell on trends, but refer interested readers to those publications.

(NOT)CONSTRUCTIVE APPROACH

Very often, membership in the “modular data center” category is determined by the type of construction. At the first stage of this market segment's development, data centers based on standard ISO containers were usually called “modular”. But in the last year or two, container data centers have faced a wave of criticism, especially from “new wave” solution providers, who point out that standard containers are not optimized for hosting IT equipment.

It is incorrect to classify data centers as modular based on the type of construction. “Modularity” should be determined by the possibility of flexible scaling through a coordinated increase in the number of racks and in the capacity of the uninterruptible and guaranteed power supply, cooling and other systems. The construction itself may vary: our customer received proposals based on standard ISO containers, specialized containers and/or blocks, modular rooms, and structures assembled on site (see table).

For example, Huawei specialists chose a solution based on standard ISO containers for our customer, although the company's portfolio also includes modular data centers based on other types of structures. Mikhail Salikov, director of the company's data center department, explained this choice by the fact that the traditional container solution is, firstly, cheaper, secondly, faster to implement and, thirdly, less demanding in terms of site preparation. In his opinion, given the conditions set by the customer, this option may turn out to be optimal in the current circumstances.

Several proposals received by the customer are based on containers, but not standard ones (ISO), but specially designed for building modular data centers. Such containers can be joined together to form a single machine room space (see, for example, the project by Schneider Electric).

One of the features of container-based options is more stringent requirements for means of transportation and installation. For example, to deliver NON-ISO25 container modules from Schneider Electric, a low-platform trailer with a useful length of at least 7.7 m is required. For unloading and installation, a crane with a lifting capacity of at least 25 tons is required (see Fig. 1).

When using smaller modules, the requirements are more relaxed. For example, CommScope DCoD modules are transported in a standard Eurotruck, and their unloading and installation can be carried out by a conventional forklift with a lifting capacity of at least 10 tons (see Fig. 2).

Finally, installing quickly assembled (on-site) structures, such as the NOTA MDC from Utilex, requires no heavy lifting equipment at all: an ordinary forklift with a lifting capacity of 1.5 tons is sufficient.

All components of this MDC are packed, disassembled, into a standard ISO container, which, according to Utilex representatives, significantly reduces the cost and time of delivery to the installation site. This is especially important for remote locations without a developed transport infrastructure. As an example, they cite a project to implement a distributed data center in the Magadan region for the Polyus Gold company: the total delivery route was more than 8900 km (5956 km by rail, 2518 km by sea and 500 km by road).

POWER SUPPLY

As for the guaranteed and uninterruptible power supply systems, the choice of options for this project is small: only the classic “trio” of static UPS, batteries and a diesel generator set (DGS). It is not practical to consider dynamic UPSs as an alternative at such capacities.

Most projects use modular UPSs, whose power is increased by adding power modules. Monoblock UPSs are also offered; in this case, system power is increased by installing UPSs in parallel. Some companies specified particular UPS models, others limited themselves to general recommendations.

The situation is similar with diesel generator sets: some companies recommended specific models, others limited themselves to calculating the required power. More detailed information on the DGSs is presented in the full project descriptions available on the website.

COOLING

Cooling, by contrast, is offered in many variants. Most of the projects are based on freon air conditioners: not the most energy-efficient, but the cheapest solution. Two companies, Huawei and Schneider Electric, placed their bets on chiller systems. As Denis Sharapov, business development manager at Schneider Electric, explains, a cooling system based on freon air conditioners is cheaper, but ensuring fault-tolerant operation requires a much larger UPS (to power the compressors), so the costs of the two designs end up roughly equal. Based on the calculations carried out, the Schneider specialist therefore chose the chiller option as more reliable and functional. (For emergency cooling, chiller systems use a storage tank with cold coolant, so the need for uninterruptible power is much lower: it is enough to power only the coolant circulation pumps from the UPS.)

The CommScope solution recommended to the customer by Croc specialists uses direct free cooling with adiabatic cooling. As Alexander Lasy, technical director of the intelligent buildings department at Croc, notes, direct free cooling provides significant energy savings almost all year round, and the adiabatic cooling system proposed in the CommScope solution does not require serious water treatment, since humidification is done with inexpensive replaceable elements; the cost of replacing them is not comparable to the cost of deep water treatment. For after-cooling and backup cooling of the data center, according to the Croc specialist, it is most advisable to use a freon direct expansion (DX) system. With free cooling and adiabatic cooling available, the total time the backup/additional system is used during the year is extremely small, so there is no point in striving for its high energy efficiency.

The LANIT-Integration company proposed an indirect free-cooling system with adiabatic cooling. In such a system, there is no mixing of external and internal air flows, which is especially important when locating a data center within a city due to high air pollution. A chiller system was chosen as an additional one in this project - it will be turned on only during the most unfavorable periods for the main system.

MISSION IMPOSSIBLE?

Our task contained a hidden catch that most of the participants did not notice (or rather, preferred not to notice). The total IT equipment power (240 kW) and the maximum supplied power (300 kW) specified by the customer leave only 60 kW for engineering and other auxiliary systems. Meeting this condition requires very energy-efficient engineering systems: PUE = 1.25 (300/240).
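
A minimal back-of-the-envelope check of this constraint, using only the figures stated in the task (24 racks at 5 kW, 6 racks at 20 kW, 300 kW of supplied power):

```python
it_load_kw = 24 * 5 + 6 * 20            # 240 kW of IT equipment at full build-out
supplied_kw = 300                        # maximum power available at the site
overhead_kw = supplied_kw - it_load_kw   # 60 kW left for all engineering systems
pue_ceiling = supplied_kw / it_load_kw   # 1.25 - the PUE the facility must not exceed
print(overhead_kw, round(pue_ceiling, 2))
```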

Theoretically, such a figure is quite achievable with the direct free cooling and adiabatic cooling proposed by Croc and CommScope, and the latter has examples of projects abroad where even lower PUE values are achieved. But the effectiveness of these cooling technologies depends heavily on the level of pollution and the climatic parameters at the location. The task did not provide sufficient data, so at this stage it is impossible to say unambiguously whether direct free cooling will fit within the limit on supplied power.

The indirect free cooling solution proposed by LANIT-Integration specifies a calculated PUE of 1.25–1.45, so this project does not fit within the given constraints. As for solutions with chiller systems and freon air conditioners, their energy efficiency is even lower, which means the data center's overall consumption is higher.

For example, according to the calculations of Denis Sharapov, the peak energy consumption of the solution proposed by Schneider Electric is 429.58 kW at maximum IT load (240 kW), an ambient temperature above 15°C and while the UPS batteries are charging, taking into account all consumers (including monitoring systems, the gas fire extinguishing system, internal lighting and access control systems). This is about 129 kW more than the supplied power. As options for solving the problem, he suggested either excluding the highly loaded (20 kW) racks from the configuration or ensuring a regular fuel supply for continuous operation of a diesel generator set with a power of at least 150 kW.

Of course, operating a data center at 100% utilization is very unlikely; in practice it almost never happens. Therefore, to determine the total power consumption of the MDC, GreenMDC specialists proposed taking a demand coefficient of 0.7 for the total power of the IT equipment. (According to them, for corporate data centers this coefficient lies between 0.6 and 0.9.) With this assumption, the design power for the machine room is 168 kW (24 cabinets of 5 kW; 6 cabinets of 20 kW; demand coefficient 0.7: (24x5 + 6x20)x0.7 = 168).
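
A short sketch reproducing the GreenMDC estimate; the rack counts and the 0.7 demand coefficient are taken from the paragraph above, while the function name is our own illustration:

```python
def estimated_it_load_kw(rack_groups, demand_factor):
    """Estimated simultaneous IT load: nameplate rack power scaled by a demand factor."""
    nameplate_kw = sum(count * kw for count, kw in rack_groups)
    return nameplate_kw * demand_factor

# 24 cabinets of 5 kW and 6 cabinets of 20 kW, demand coefficient 0.7.
print(estimated_it_load_kw([(24, 5), (6, 20)], demand_factor=0.7))  # 168.0 kW
```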

ADJUSTMENT OF THE PROBLEM

But even taking into account the above comment about partial load, to ensure reliable power supply the customer will apparently have to arrange an increase in the power supplied to the site and give up the dream of a data center with a PUE at the level of the world's best facilities on the scale of Facebook and Google. An average corporate customer does not need such energy efficiency anyway, especially considering the low cost of electricity.

As Alexander Lasy believes, since the power of the data center is very small, it does not make much sense to pay increased attention to energy efficiency: the cost of solutions that can significantly reduce energy consumption may substantially exceed the savings over the data center's life cycle.

So, to keep the customer from being left without a data center at all, we remove the limitation on the supplied power. What other adjustments are possible? Note that many companies scrupulously fulfilled the customer's wishes regarding the number and power of racks and the expansion “quantum” of 5 racks, while Croc and CommScope offered an even smaller “quantum”: modules for 4 racks.

However, a number of companies, relying on the standard designs in their portfolios, deviated somewhat from the conditions of the task. For example, as Alexander Perevedentsev, chief specialist of the sales support department at Technoserv, notes, “in our experience, what is relevant at the moment is the development (scaling) of data centers in clusters with a step of 15–20 racks and an average power of at least 10 kW per rack.” Technoserv therefore proposed a solution for 36 racks of 10 kW each with an expansion increment of 18 racks. Containers for 18 racks also appear in the Huawei project, so the customer can ultimately receive 6 more racks than requested.

SITE PREPARATION

The site for installing the data center should be prepared, at the very least leveled. The data center proposed to the customer by Croc requires a concrete foundation in the form of a load-bearing slab designed for the appropriate load.

In addition, the site must be provided with drainage to carry away rain and meltwater. “Since there is sometimes quite a lot of snow in our country, to avoid breakdowns and leaks it is advisable to equip the site with easily assembled sheds,” recommends Alexander Lasy.

Construction of a concrete foundation on site is provided for in most projects. At the same time, a number of experts pointed out that the cost of constructing a foundation is not comparable with the total cost of the project, so it is not worth saving on it.

The NOTA MDC from Utilex stands apart: it requires no foundation at all. Such a data center can be installed and launched on any flat area with a slope of no more than 1–2%. Details are given below.

TIME…

As Alexander Lasy notes, the most important and time-consuming process is the design of the data center, since errors made at the design stage are extremely difficult to eliminate once the modules leave production. According to him, preparing and approving the technical specifications and designing, coordinating and approving the project usually takes 2 to 4 months. If the customer chooses standard solutions, this can be reduced to 1–2 months.

Should the customer select a CommScope solution, production and delivery of pre-assembled modules and all associated equipment will take 10-12 weeks for starter kits and 6-8 weeks for add-on modules. Assembling a starter kit for a line of 5 modules on a prepared site takes no more than 4–5 days, commissioning work takes 1–2 weeks. Thus, after agreeing and approving the project, concluding a contract and paying the necessary advance (usually 50–70%), the data center will be ready for operation in 12–14 weeks.

The project implementation period based on the LANIT-Integration solution is about 20 weeks. In this case, design will require about 4 weeks, production (including cooling systems) - 8 weeks, delivery - 4 weeks, installation and launch - another 4 weeks.

Other companies indicate similar timeframes, and it makes little difference whether the assembly facilities are located abroad or in Russia. For example, the average turnkey delivery time for a Schneider Electric MDC is 18–20 weeks (excluding design). During construction of the Utilex NOTA MDC, the main stages (installation of the first and subsequent modules and their turnkey delivery) take from 12 weeks, and the stages associated with expanding each module (adding 5 racks and the corresponding engineering systems) take from 8 weeks.

Obviously, these dates are approximate. Much depends on the geographical location of the customer's site, the quality of work organization and the interaction of all participants in the process (customer, integrator and manufacturer). But in any case, it takes only six months or a little more from the idea to full implementation of the project. This is 2–3 times less than when building a data center using traditional methods (with capital construction or reconstruction of a building).

…AND MONEY

We did not receive information about the cost from all participants. Nevertheless, the data obtained allow us to get an idea of ​​the structure and approximate amount of costs.

Alexander Lasy described in detail the structure and sequence of the customer's costs when choosing the Croc and CommScope solution: 10% goes to design, 40% to launching the starter kit of the first line (12 IT racks), 5% to completing the first line (expansion by 4 IT racks), 35% to launching the starter kit of the second line (8 IT racks), 5% to its expansion (+4 IT racks) and another 5% to further expansion (+4 IT racks). As you can see, the costs are spread out over the stages, which is exactly what the customer wanted when considering a modular data center.

Many customers believe that, after the recent fluctuations in the ruble exchange rate, localization of the proposed solutions is important for reducing and/or fixing costs. According to Alexander Perevedentsev, Technoserv employees have spent the last six months actively searching for manufacturers of domestic equipment, in particular for engineering systems. At the moment, the degree of localization of engineering systems in their data center projects is 30%, and in high-availability solutions, which include the latest version of the IT Crew modular data center, no less than 40%. The cost of the IT Crew MDC for 36 racks of 10 kW each, with the possibility of further expansion and a Tier III fault tolerance level, ranges from 65 to 100 thousand dollars per rack. Converted at a rate of 55 rubles per dollar, this comes to 3.5 to 5.5 million rubles per rack. But this, we emphasize, is a Tier III solution, whereas most other companies offered Tier II, as requested by the customer.

While Technoserv did not fail to boast of its 40% localization level, Utilex representatives modestly kept silent about the fact that in their solution this figure is significantly higher, since all the main engineering equipment, including the air conditioners and UPSs, is domestically produced. The cost of the Utilex NOTA MDC (for 24 racks of 5 kW and 6 racks of 20 kW) is $36,400 per rack. At a rate of 55 rubles per dollar, this comes to about 2 million rubles per rack. The amount includes all delivery costs to any location within the Russian Federation and the cost of all installation and commissioning work, including travel and overhead costs. As Utilex representatives note, the total cost of the MDC can be reduced by placing the entire infrastructure in one large NOTA module (5.4x16.8x3 m): 10 racks at the first stage plus 5 racks at each subsequent stage. But in that case capital costs at the first stage would have to be increased, and if the customer decides not to scale the MDC, the funds will have been spent ineffectively.

Dmitry Stepanov, director of the business direction of the President-Neva Energy Center company, estimated the cost of one module (container) for 10-12 racks at 15-20 million rubles. This amount includes the cost of all engineering systems included in the module, including air conditioners and UPS, but does not include the cost of the diesel generator set. According to him, if you choose cheaper engineering components instead of UPSs and air conditioners from Emerson Network Power, the cost can be reduced to 11–12 million rubles.

The cost of the GreenMDC project at the first stage is approximately 790 thousand euros (site preparation: 130 thousand euros; the first stage of the MDC: 660 thousand euros). The total cost of the entire project for 32 racks, including diesel generator sets, is 1.02 million euros. At the exchange rate at the time of writing (59 rubles per euro), that works out to 1.88 million rubles per rack, one of the most cost-effective options.
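
For convenience, the per-rack conversions quoted in the last few paragraphs can be collected in one place. The sketch below is a minimal illustration using only the figures and exchange rates given in the text (55 RUB/USD, 59 RUB/EUR); rounding explains the small differences from the quoted values.

```python
def rub_per_rack(cost, rate_rub, racks=1):
    """Convert a quoted cost (per rack or per project) into rubles per rack."""
    return cost * rate_rub / racks

# Technoserv "IT Crew" (Tier III): 65-100 thousand USD per rack.
print(rub_per_rack(65_000, 55), rub_per_rack(100_000, 55))   # ~3.6M and 5.5M RUB
# Utilex NOTA: 36,400 USD per rack.
print(rub_per_rack(36_400, 55))                              # ~2.0M RUB
# GreenMDC: 1.02M EUR for the whole 32-rack project.
print(rub_per_rack(1_020_000, 59, racks=32))                 # ~1.88M RUB
```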

In general, for a solution based on freon air conditioners, the customer's “starting point” is around 2 million rubles per rack. The chiller-based option is roughly 1.5 times more expensive. The difference in cost between “Russian” and “imported” solutions is unlikely to be large, since even domestic air conditioners (Utilex) use imported components: modern compressors are simply not produced in Russia, and nothing can be done about it.

FEATURES OF THE PROPOSED SOLUTIONS

"GrandMotors"

The main IT module (“Machine room”) is formed on the basis of an all-metal container block and consists of a hardware compartment and a vestibule. To ensure a given “quantum” of growth, 4 racks with a capacity of up to 5 kW (cooling by row air conditioners) and 1 rack with a power of up to 20 kW (cooling by a similar air conditioner directly docked to it) are installed in the hardware compartment of each IT module (see Fig. 3). The UPS with batteries is located in the same compartment. As air conditioners, the customer is offered (to choose from) Emerson CRV or RC Group COOLSIDE EVO CW units, and as UPS - devices from the GMUPS Action Multi or ABB Newave UPScale DPA series.

The vestibule houses electrical panels and gas fire extinguishing system equipment. If necessary, it is possible to organize a workplace for a dispatcher or duty personnel in the vestibule.

As Daniil Kulakov, technical director of GrandMotors, notes, the use of in-row air conditioners is dictated by the presence of racks of different power. If all racks have the same power, the floor area of the machine room can be used more efficiently with wall-mounted air conditioners (for example, Stulz Wall-Air); such a solution reduces the energy consumption of the cooling system through the use of free-cooling mode.

At the first stage, it is enough to install two “Machine Room” modules and one “Power Module” on the site; the latter houses a GMGen Power Systems diesel generator set from SDMO Industries with a capacity of 440 kVA. Subsequently, the available computing power of the data center can be increased in increments of one “Machine Room” module (5 racks), while the installed capacity of the diesel generator set is designed for 6 such modules. The monitoring and control system provides access to each piece of equipment, and its upper level is implemented on the basis of a SCADA system.

All the proposed engineering systems provide N+1 redundancy (with the exception of the diesel generator set; if necessary, a backup generator can be installed). The proposed architecture of the engineering systems makes it possible to increase the level of redundancy as early as the data center creation stage.

"Krok"/CommScope

CommScope, the world's leading provider of cabling infrastructure solutions, is a relative newcomer to the modular data center market, which has allowed it to address many of the shortcomings of first-generation products. It proposes building data centers from factory pre-assembled modules with a high degree of readiness, calling its solution Data Center On Demand (DCoD). These solutions have already been installed and are successfully used in the USA, Finland, South Africa and other countries. In Russia, three DCoD projects are currently being developed with Croc's participation.

DCoD base modules are designed to accommodate 1, 4, 10, 20 or 30 racks of IT equipment. Alexander Lasy, technical director of the intelligent buildings department at Croc, who presented the project to our customer, suggested assembling the solution from standard DCU-4 modules designed for 4 racks (non-standard ones can be ordered, but this increases the cost of the project by about 20%). Such a module includes the cooling systems (including the control automation) and the primary power distribution and switching.

The starting block will consist of 4 standard modules (see Fig. 4). Its total capacity is 16 racks, which makes it possible to place a UPS of the required power with batteries, as well as security and monitoring equipment, in the 6 spare rack spaces. The final configuration of the machine room (line) will consist of 5 modules of 4 racks each, that is, 20 racks in total, 2 of which can house IT equipment with a total power of up to 20 kW. One module (4 racks) will be allocated to the uninterruptible power supply system (UPS) and can be separated from the main machine room by a partition to restrict access for maintenance personnel.

The second line is structurally identical to the first, but its starter kit consists of 3 rather than 4 modules of 4 racks. Space for the UPS, security and monitoring systems will be allocated in it immediately. However, as Alexander Lasy notes, the original configuration plan may be adjusted based on the experience gained from operating the first line.

The IT equipment power of the first line in the initial installation (12 racks) will be about 60 kW, and after expansion (16 racks) about 90 kW, including the providers' transmission equipment and the main data center switches. To keep the data center running in the event of an external power failure, the fans and cooling system controllers must also receive uninterruptible power; their consumption is approximately 10 kW. In total, the UPS power will exceed 100 kW. A modular UPS for such a load with an autonomy time of at least 10 minutes, together with a power switching cabinet, will occupy approximately 4 rack spaces.
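
The UPS sizing logic of the previous paragraph can be summarized in a minimal sketch; it uses the 90 kW and 10 kW figures quoted above and deliberately ignores inverter efficiency, power factor and battery derating, which a real sizing exercise would include.

```python
it_load_kw = 90          # first line after expansion (16 racks), incl. network equipment
mechanical_kw = 10       # fans and cooling-system controllers kept on uninterruptible power
ups_load_kw = it_load_kw + mechanical_kw    # just over 100 kW of UPS capacity needed
battery_energy_kwh = ups_load_kw * 10 / 60  # ~16.7 kWh for 10 minutes of autonomy
print(ups_load_kw, round(battery_energy_kwh, 1))
```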

As mentioned above, the highlight of the proposed solution is the use of direct free cooling with adiabatic humidification. For after-cooling and backup cooling, a freon (DX) system is proposed.

"LANIT-Integration"

The customer was offered a solution based on the SME E-Module modular physical protection room (see Fig. 5): protected rooms of any geometry, with areas from 15 to 1000 m2, can be quickly erected from standard components. For our task, the room area is at least 75 m2. Design features include a high level of physical protection from external influences (thanks to a steel beam-and-column load-bearing frame and reinforced structural panels), fire resistance (60 min according to EN 1047-2) and dust and moisture protection (IP65). Good thermal insulation helps optimize cooling and heating costs.

To cool the IT equipment, the SME FAC (Fresh Air Cooling) indirect free cooling system with adiabatic humidification was selected; in such a system there is no mixing of external and internal air flows. SME FAC units, each removing up to 50 kW of heat (enough for 10 racks of 5 kW), were chosen for the project. They will be added as needed: when the number of racks increases and when the load on individual racks grows to 20 kW.

Around the E-Module, an area at least 3 m wide around the perimeter must be provided for installing and servicing the SME FAC cooling units. In addition, the project includes a chiller system, but it will be used only during the periods least favorable for the main system. According to LANIT-Integration estimates, the data center will be able to operate without the chillers up to 93% of the year (at outdoor temperatures up to +22°C).

Cold air will be distributed under the raised floor. If necessary, the MDC can be equipped with the SME Eficube cold aisle containment system.

The uninterruptible power supply is implemented with monoblock Galaxy 5500 UPSs from Schneider Electric. At the first stage, two Galaxy 5500 80 kVA units will be installed; subsequently, the required number of UPSs will be added in parallel. A parallel system can include up to 6 such units and is therefore capable of supporting a load of 400 kVA in N+1 mode.

The monitoring and environmental safety solution is also based on a Schneider Electric product: the InfraStruXure Central system.

LANIT-Integration's proposal looks like one of the most solid, especially in terms of room security and the composition of the cooling system. However, a number of points, such as the proposal to deploy a full-fledged chiller system that will be used only a small part of the year, suggest a high cost for this project. Unfortunately, no cost information was provided to us.

"President-Neva"

The company offered the customer a solution based on a Sever (“North”) class block container. Thanks to this design, the solution can operate at temperatures down to –50°C, which is confirmed by the positive experience of operating three Gazpromneft-Khantos MDCs at the Priobskoye field in the Far North (in total, President-Neva has built 22 MDCs).

Transverse partitions divide the container into three rooms: a vestibule and two rooms for cabinets with IT equipment; there is also a partition between the cold and hot aisles inside the IT rooms (see Fig. 6). One room will hold 8 racks of 5 kW, the other 2 high-load racks of 18 kW. Roller doors are installed between the rooms; when they are open, the equipment compartment has continuous cold and hot aisles.

The engineering core of the data center is Emerson Network Power equipment. The room with 5 kW racks is cooled by Liebert HPSC14 ceiling air conditioners (cooling capacity 14.6 kW) with free cooling support; four such units are installed there. Another Liebert HPSC14 is installed in the room with the heavily loaded racks and the UPS (UPS and batteries), but it serves only as a supplementary unit; the main load is handled by more powerful in-row Liebert CRV CR035RA air conditioners. Thanks to the emergency free cooling function, the Liebert HPSC14 can provide cooling during the short period when the main power supply is off while the diesel generator set starts up.

The uninterruptible power supply consists of a modular Emerson APM series UPS (N+1 redundancy) and a battery pack. An external UPS bypass is provided in the input distribution unit located in the vestibule. An FG Wilson unit with a capacity of 275 kVA is proposed as the diesel generator set.

The customer can start the development of his data center with a solution for 5 kW racks, and then, when the need arises, the container will be retrofitted with CRV air conditioners and UPS modules necessary to operate high-load racks. When the first container is filled, a second one will be installed, then a third one.

Although in this proposal the customer was offered Emerson Network Power engineering systems, Dmitry Stepanov emphasizes that President-Neva also builds MDCs using equipment from other manufacturers, in particular Eaton, Legrand and Schneider Electric. In doing so, the customer's corporate standards for engineering subsystems and all of the manufacturer's technical requirements are taken into account.

"Technoserv"

The IT Crew modular data center is a comprehensive solution based on prefabricated structures with engineering systems pre-installed at the factory. One data center module includes a server block for installing IT equipment and an engineering block to ensure its uninterrupted operation. The blocks can be installed side by side on the same level (horizontal topology) or on top of each other (vertical topology), with the engineering block at the bottom and the server block on top. A vertical topology was chosen for our customer.

Two options for implementing the IT Crew MDC were presented for the customer's consideration: the “Standard” and “Optimum” configurations. Their main difference is in the air conditioning system: the “Standard” version uses a precision air conditioning system with chilled water, the “Optimum” version a system with freon coolant.

The server block is the same in both options. Up to 18 server racks for active equipment can be installed in one block. The machine room is formed from 2 server blocks: the server blocks and a corridor block are located on the second level and are combined into a single technological space (machine room) by removing the partitions (see Fig. 7).

In the “Standard” MDC, air conditioners (water-cooled) are installed in the space under the raised floor of the server block. In the “Optimum” MDC, IT equipment is cooled by precision air conditioners located in the engineering block. The air conditioning systems are redundant according to the 2N scheme.

The engineering block is divided into 3 compartments: for the UPS, for the refrigeration equipment and (optionally) for a dry cooler. The UPS compartment contains a cabinet with the UPS, batteries and switchboard equipment. The UPS has N+1 redundancy.

In the refrigeration compartment of the “Standard” MDC, two chillers, a hydraulic module and storage tanks are installed. The cooling system makes it possible to achieve a redundancy level of 2N within one data center module and 3N within two adjacent data center modules by combining the cooling circuits. The refrigeration system has a free-cooling function, which significantly reduces operating costs and extends the service life of the refrigeration machines.

The high (40%) level of localization of the IT Crew engineering systems was already mentioned above. However, Technoserv representatives did not disclose whose air conditioners and UPSs they use; apparently, the customer will be able to learn this during a more detailed study of the project.

"Utilex"

The NOTA modular data center is built on a prefabricated, easily dismantled frame structure that is transported in a standard ISO container. Its basis is a metal frame assembled at the installation site from ready-made elements. The base consists of sealed load-bearing panels mounted on a flat platform of Penoplex thermal insulation boards laid in 3 layers in a checkerboard pattern; no foundation is required. The walls and roof of the MDC are made of sandwich panels, and a raised floor is installed for the equipment.

The data center project is proposed to be implemented in 5 stages. At the first stage, the MDC infrastructure is created for 10 racks with an average consumption of 5 kW per rack and the possibility of raising the consumption of 2 racks to 20 kW each (see Fig. 8). The uninterruptible power supply is based on a modular UPS made by Svyaz Engineering and a battery providing 15 minutes of autonomous operation (including the consumption of auxiliary equipment: air conditioners, lighting, access control, etc.). If an IT load of 20 kW in 2 racks needs to be supported, power modules and additional batteries are added to the UPS.

The air conditioning system is based on 4 precision in-row Clever Breeze air conditioners produced by Utilex, with a cooling capacity of 30 kW each; the external units are located on the end wall of the MDC. The controllers built into the Clever Breeze air conditioners serve as the basis for the remote monitoring and control system, which monitors the temperature of the cold and hot aisles, the state of the UPS and the room humidity, analyzes the condition and cooling capacity of the air conditioners, and performs access control functions for the premises.

At the second stage, a second MDC module is installed for 5 racks with a load of 5 kW (in this and subsequent expansion “quanta” the load per rack can be raised to 20 kW). The module area has a reserve for 5 more racks (see Fig. 9), which will be installed at the third stage. The fourth and fifth stages of data center development are similar to the second and third. At all stages the same elements of the engineering infrastructure (UPS, air conditioners) are used, expanded as needed. The diesel generator set is sized for the entire planned capacity of the data center at once: Utilex specialists chose the WattStream WS560-DM unit (560 kVA/448 kW), housed in its own climate-controlled container.

As Utilex specialists note, the NOTA MDC infrastructure allows the module area to be increased as the number of racks grows, by adding the required number of load-bearing panels to the base. This makes it possible to spread the capital costs of the modules evenly across the stages. Creating a smaller module (5.4x4.2x3 m), however, is not economically feasible: the small reduction in capital costs would be offset by the need to invest twice in the fire extinguishing system (at the second and third stages).

An important advantage of the Utilex NOTA MDC, in the context of the focus on import substitution, is that most of the data center subsystems are produced in Russia. The self-contained infrastructure of the NOTA MDC, the racks, the air conditioning systems and the remote monitoring and control system are produced by Utilex itself; the uninterruptible power supply system by Svyaz Engineering; the automatic gas fire extinguishing system by Pozhtekhnika. The computing and network infrastructure of the data center can also be implemented with Russian-made solutions from ETegro Technologies.

GreenMDC

The customer was offered the Telecom Outdoor NG modular data center, consisting of Telecom modules (housing the racks with IT equipment), a Cooling module (cooling systems) and an Energy module (power supply system, electrical panels). All modules are manufactured at the GreenMDC factory; before delivery to the customer, the MDC is assembled at a test site and fully tested, then disassembled, and the modules are transported to their destination. The individual modules are metal structures with walls installed, prepared for transportation (see Fig. 10).

Heat removal is based on HiRef precision cabinet air conditioners with cold air supplied under the raised floor; the uninterruptible power supply is based on Delta modular UPSs. The power distribution network is built with redundant busbars: the racks are connected by installing tap-off boxes with circuit breakers on the busbars.

The first stage of the project includes site preparation, installation of a diesel generator set, and 3 data center modules (see Fig. 10): a Telecom module with 16 rack positions (10 racks of 5 kW each, 6 positions left unfilled); a Cooling module with two 60 kW air conditioners; and an Energy module with one modular UPS. This UPS consists of a 200 kVA chassis and three 25 kW modules (for the Delta DPH 200 UPS, the kilowatt and kilovolt-ampere values are identical). To ensure uninterrupted operation of the engineering systems, another modular UPS is installed: an 80 kVA chassis with two 20 kW modules.

At the second stage, the expansion is carried out within the existing MDC modules: 6 more racks are added to the Telecom module. Increasing electrical power is achieved by adding power modules to modular UPSs, and cooling capacity is achieved by increasing the number of air conditioners. All communications (freon pipes, electrical) for air conditioners are laid at the first stage. If it is necessary to install a rack with high consumption, a similar algorithm for increasing power is implemented. In addition, it is practiced to isolate the cold aisle and, if necessary, install active raised floor tiles.

At the third stage, a second Telecom module is installed, which will allow placing up to 32 racks in the MSDC. The increase in the capacity of engineering systems occurs in the same way as it was done at the second stage - up to the maximum design capacity of the MSDC. The operation of expanding the data center is carried out by the supplier’s employees without interrupting the functioning of the installed modules. The production and installation time for the additional Telecom module is 10 weeks. Installation of the module takes no more than one week.

Huawei

The Huawei IDS1000 mobile data center consists of several standard 40-foot containers: one or more containers for IT equipment, a container for the UPS, and a container for the refrigeration system. If necessary, a container for duty personnel and storage is also supplied.

For our project, we chose an IT container with 18 racks (6 rows, each with 3 racks) - see fig. 11. At the first stage, it is proposed to place 10 racks for IT equipment, 7 row air conditioners, power distribution cabinets, AGPT cylinders and other auxiliary equipment. At the second and subsequent stages - add 8 racks and 5 row air conditioners to the first container, install the second container and place the required number of racks with air conditioners in it.

The maximum power of one IT container is 180 or 270 kW, which means up to 10 or 15 kW can be allocated to each rack, respectively. Huawei's proposal does not specifically address the placement of 20 kW racks, but since the allowable power per rack is higher than the 5 kW requested by the customer, the load can be redistributed.
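
The per-rack figures follow directly from dividing the container power by its 18 rack positions; a trivial sketch (our illustration, not Huawei's calculation):

```python
racks_per_container = 18
for container_power_kw in (180, 270):
    print(container_power_kw / racks_per_container, "kW per rack")  # 10.0 and 15.0
```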

Chilled water for the in-row air conditioners is supplied by a system of chillers mounted on the frame of a separate container. The uninterruptible power supply (UPS and batteries) is also placed in a separate container; the UPS is configured for the calculated load in 40 kW increments (up to a maximum of 400 kVA).

The solution includes Huawei's new-generation NetEco management system, which controls all major systems: power supply (DGS, UPS, PDUs, batteries, distribution boards and automatic transfer switches), cooling (air conditioners) and security (access control, video surveillance equipment). In addition, it can monitor environmental conditions using various sensors: smoke, temperature, humidity and water leakage.

The main feature and advantage of Huawei's offer is that all the main components, including air conditioners and UPSs, are from one manufacturer - Huawei itself.

Schneider Electric

The proposed MDC is based on standard NON-ISO25 all-weather modules with high fire resistance (EI90 according to EN 1047-2). Access is controlled by an access control system with a biometric sensor. The modules are delivered to the installation site separately and then joined to form a single structure.

The first stage is the most extensive and costly: the base is leveled and reinforced (a screed is poured), two chillers (Uniflair ERAF 1022A units with a free cooling function), a diesel generator set, an energy module and one IT module (10 racks) are installed, and all the piping is laid. The energy module and the IT module form three rooms: the energy center, the machine room and the entrance vestibule (see Fig. 12).

The power module room houses an APC Symmetra 250/500 UPS (with a set of batteries and a bypass), switchboard equipment, a gas fire extinguishing system, main and emergency lighting, and a cooling system (2 (N+1) SE AST HCX CW 20 kW air conditioners). The UPS powers the IT equipment, air conditioners and pumps. The Stage 1 IT module comes pre-installed with 10 racks (AR3100), PDUs, a busbar, hot aisle containment and a cooling system based on SE AST HCX CW 40 kW units (2+1). These air conditioners are specially designed for modular data centers: they are mounted above the racks and, unlike in-row air conditioners, do not take up useful space in the IT module.

At the second stage, a second IT module is docked to the first; it comes with 5 racks and is otherwise equipped the same way (at this stage only 2 of its 3 air conditioners (1+1) are activated). In addition, another chiller is installed, and the modular UPS is fitted with the necessary set of batteries and inverters. The third stage of data center development comes down to adding 5 racks to the second IT module and activating the third air conditioner. The fourth and fifth stages are similar to the second and third.

The StruxureWare DCIM class complex is proposed as a control system.

As a result, the customer will receive a solution that fully meets his requirements from one of the world leaders in the field of modular data centers. At this stage, Schneider Electric has not provided information on the cost of the solution.

CONCLUSION

The main design features of the proposed data centers, as well as their cooling and power supply systems, were briefly discussed above. These systems are presented in more detail in the project descriptions available on the website, which also provides information about the other systems, including gas fire extinguishing equipment, the access control and management system (ACS), the integrated management system, etc.

The customer received 9 projects that largely met its requirements. The descriptions and characteristics of the products convinced it of the main thing: modern modular solutions make it possible to obtain a fully functional data center with an implementation time several times shorter than with the traditional approach (capital construction or reconstruction of a building), and funds can be invested gradually as needs grow and the data center is scaled accordingly. The latter circumstance is extremely important for the customer in the current conditions.

Alexander Barskov is a leading editor of the Journal of Network Solutions/LAN. He can be contacted at:

A data processing center (DPC) is a building, part of it or a mobile module. The main function of a data center is to host the equipment necessary for storing and processing information. The data center also houses supporting facilities to ensure its operation in accordance with the EIA/TIA-942 standard.

Mobile data centers

Container (modular) data centers are complexes of telecommunications, information and supporting infrastructure mounted in one or more mobile modules. The individual elements of such data centers are built as specialized containers, which allow rapid deployment and easy transportation. The modules also provide important anti-vandal protection.

Typical MDC configuration

Modular data centers: main advantages

  • Rapid deployment at any location: production, delivery and connection take no more than 3–5 months.
  • Low cost (compared to conventional stationary data centers).
  • Scalability: computing power can be increased quickly when needed.
  • Self-containment: the premises do not need to be shared with other facilities.
  • Flexibility: the solution can be tailored to fully meet the specific requirements of the client.
  • High fault tolerance: critical systems are redundant.
  • Maximum readiness for use: no capital construction is required on the site to install the MDC.
  • High reliability of all systems: only equipment from the world's leading manufacturers is used.

Data center reliability: levels

When building a data center, strict space requirements and high energy costs must be taken into account. There are 4 levels (tiers) of data center reliability; a short sketch after the list below converts the availability figures into annual downtime.

  • Tier I. This is the basic level. The engineering infrastructure is designed only to meet current needs; the data center may lack elements such as backup power supplies and raised floors. Downtime can reach 28.8 hours per year. A data center of this level is typically used as a server room. Availability: 99.671%.
  • Tier II. Creating a data center of this level requires backup cooling and power sources and raised floors. Equipment must be shut down to carry out repair or maintenance work. Downtime can reach 22 hours per year. Typically used for web servers and back-office systems. Availability: 99.749%.
  • Tier III. This level requires several sources of backup cooling and power and a raised floor. Preventive maintenance, equipment upgrades, installation and removal can be performed without stopping the data center. Used for business-critical applications. Downtime does not exceed 1.6 hours per year. Availability: 99.982%.
  • Tier IV. This level guarantees almost complete fault tolerance (availability of 99.995%). Any work, including unscheduled work, is carried out without interrupting operation; even the structured cabling system can be made redundant. Downtime does not exceed 0.4 hours per year. Used for business continuity and hosting critical services.
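As a quick cross-check of the figures above, the downtime values follow directly from the availability percentages. A minimal sketch in Python (the tier percentages are the commonly cited values used in the list; the calculation itself is simple arithmetic):

    # Convert an availability percentage into the implied downtime per year.
    HOURS_PER_YEAR = 8760  # 365 days

    def downtime_hours(availability_percent: float) -> float:
        """Maximum downtime (hours/year) implied by an availability percentage."""
        return HOURS_PER_YEAR * (1 - availability_percent / 100)

    tiers = {"Tier I": 99.671, "Tier II": 99.749, "Tier III": 99.982, "Tier IV": 99.995}
    for tier, availability in tiers.items():
        # Prints roughly 28.8, 22.0, 1.6 and 0.4 hours per year respectively.
        print(f"{tier}: {availability}% -> {downtime_hours(availability):.1f} h/year")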

The company "Iso-energo" LLC is ready to offer its experience in the implementation and implementation of projects for the production and supply of mobile communication systems and data centers. One of the company's activities is the production of communication containers and mobile centers data processing for such Customers as: MTS PJSC, Pension Fund Russia, EuroChem Group, JSC IDGC Holding, Armed Forces of the Russian Federation and others.

The company has all the necessary resources:

  • Highly qualified specialists.
  • Workshop premises of 1,400 sq. m.
  • A full complement of production tools and specialized machines.
  • Appropriate technical base.
  • Partnerships with leading equipment manufacturers and suppliers.
  • Experience in implementing projects of varying complexity.
  • Availability of all necessary licenses and certificates.

The company's specialists are ready:

  • Develop and implement modern concepts for creating Data Processing Centers (DPCs) and Communication Centers.
  • Select modern equipment that meets your requirements.
  • Carry out the entire range of work to build guaranteed uninterruptible power supply (GUSU) systems, from design through to turnkey handover of the facility to state supervisory authorities.
  • Justify the use of energy-saving technologies at complex industrial facilities.
  • Provide quality service for SSBE and MCDC systems.
  • Conduct training for the Customer's service personnel to improve their qualifications.

Specialists "Aiso-energo" We are ready to create a data center of any reliability level, taking into account your individual wishes. We offer data centers based on containers and modules of various types.

A modular data center is a data processing center assembled from standard system elements - modules. A single module is a mobile data center; when several such mobile cells are combined, a modular system is obtained. Modules may contain racks, a UPS, or various kinds of auxiliary equipment, and they can also be installed indoors. New units are added as needs grow.

A modular, containerized data center includes IT equipment, power and cooling systems. Once it became possible to combine all of this into one container, proprietary modules from different vendors became increasingly common.

Modular data centers are a technology whose main goal is to reduce and optimize the costs of building IT infrastructure. The way infrastructure resources are distributed in this model is in many ways similar to the distribution of computing power in the cloud. The client pays exactly for the amount of capacity that he uses, and if necessary, he can relatively easily increase or reduce resources. The same is true with the modular construction of data centers: according to the customer’s decision, resources are added or subtracted quite quickly and relatively inexpensively.

Benefits of Mobile Data Centers

Mobile data centers have a number of advantages over stationary data centers. First of all, their advantages were appreciated by those who need disaster-resistant solutions capable of working in any conditions: mobile operators and the raw materials sector.

Secondly, these are companies for which business continuity is important and the cost of IT equipment downtime is extremely high, for example, financial and insurance companies and banks.

And thirdly, these are IT service providers who are looking for ways to reduce capital costs for construction.

Everything is learned by comparison, and the low cost of modular data centers becomes obvious when they are weighed against traditional ones. Firstly, modular data centers do not require capital construction or expensive refurbishment of existing premises. This advantage is so powerful that it easily outweighs any significant drawbacks: the builder and the customer do not need to invest, as they say, "in concrete." Capital construction easily eats up half the budget for deploying a data center.

In addition, constructed buildings, or a data center located in rented premises, are not mobile. If the client's lease expires or a move to another office becomes necessary, then in the case of a capital building this turns into an investment disaster: funds comparable in size to the construction of another data center have to be invested. A modular data center, by contrast, can simply be disassembled and redeployed in any other space. Moreover, it is not tied to the dimensions of a room, whereas it is often the limited space of commercial real estate that becomes an obstacle to expanding IT infrastructure.

The similarity with the cloud model mentioned just above is that a modular data center is easily scalable. For example, suppose you have an open area of one thousand square meters. You can build a modular data center of sufficient capacity on 300 sq. m, and the rest of the area can be filled with new modules as business needs grow. This incremental scaling allows for significant reductions in both capital and operating expenses.

MDCs are more efficient: they do not require special infrastructure, the construction period is about 2-3 months, they are mobile and can be located near sources of cheap energy. In addition, they can be used to expand the capacity of stationary data centers.

A comparative analysis of mobile and stationary data centers looks like this:

Stationary data center:

1. Deployment requires a dedicated room.
2. Limited scalability.
3. Difficult to relocate.
4. Project implementation period: about 1 year.
5. Installation and commissioning are considerably more expensive.
6. Operating costs are higher than for a mobile center.

Mobile data center:

1. No dedicated building is required to deploy an MSDC, so start-up and operating costs are significantly reduced.
2. IT infrastructure can be deployed at a remote site.
3. Suitable for mobile offices or mobile facilities.
4. Can be used to expand IT infrastructure (together with a stationary center).
5. Can be located close to cheap energy sources, without paying rent for premises, preserving the investment when moving or relocating an office.
6. Project implementation period: 2-3 months.

Wherever your business is located, you can always install a mobile center there. The availability of ready-made block-container solutions, the elimination or significant reduction of such stages as infrastructure design (required for stationary data centers), coordination of design decisions with supervisory authorities and construction, together with minimal site preparation requirements, significantly reduce the delivery and installation time of MSDCs compared to stationary centers.

The topic of mobile data centers (Mobile Data Center, MDC) is often covered in the press, both foreign and Russian, and is of interest to domestic entrepreneurs and IT industry specialists.

Mobile data centers are capable of solving a number of problems that Russian enterprises and organizations working in information technology often face: the need to deploy new racks in the shortest possible time, on the basis of existing main and backup capacities and engineering infrastructure components, when no premises are available or the existing premises do not meet data center requirements.

In addition, MDCs allow you to create temporary server premises at a time when the permanent one has not yet been completed or put into operation. Equipment that is necessary for the functioning of the business during this period can be installed in the temporary premises.

Also, when there is a lack of specialized premises to accommodate a data center, of specialists with the necessary qualifications, or of stationary data processing centers providing the required level of reliability, a mobile data center may be the optimal solution. A mobile data center can also be used to expand IT infrastructure: during active expansion into the regions or the acquisition of other companies, branches can be opened without delay and acquired assets promptly integrated into a single information infrastructure.

Factors driving demand for mobile data centers include growing Internet traffic, the importance of uptime for Internet applications, and integration with the global economy. Thus, mobile data centers have ceased to be a niche offering and are now in demand not only at remote sites but also in megacities, where free space is scarce.

"Mobile data center": do you need it?

Industry confusion over the term "mobile data center" can not only confuse CIOs but also lead them to choose an ineffective solution.

Origin of the term

The term "mobile" originates from solutions used by the American military. Those data centers really could be moved together with the equipment installed inside and quickly deployed at a new location. They were usually housed in containers, which subsequently led to a confusion of definitions: container data centers began to be identified with mobile and modular ones. However, existing "container solutions" have little in common with their military predecessors - most of them cannot be transported with not only the IT equipment but even the engineering equipment installed inside (especially on Russian roads).

Who needs it?

Data center relocation is a very specific need, and the likelihood that a given company will face it is low. Even if such a move did occur, it would hardly take the form of a military special operation.

Among the characteristics of “mobile” data centers, most clients consider the following to be key:

  • possibility of delivery to remote places;
  • the ability to choose a location (closer to users, sources of electricity, near the building);
  • reduction of implementation time;
  • possibility of gradual capacity increase (on demand);
  • the possibility of moving in the future to a new place of operation.

It is precisely these capabilities that manufacturers of "mobile" data centers based on ISO containers point out as their advantages.

Pre-fabricated data center or mobile?

It is important to understand that, in addition to container (or mobile) solutions, the market also offers pre-fabricated data centers (assembled from prefabricated structures) that not only give customers the same capabilities but also surpass "container data centers" in a number of key characteristics.

Pre-fabricated data centers allow more flexible scaling (in increments of individual racks rather than whole modules). In a pre-fabricated data center there is no shortage of service space, a problem typical of "container" data centers. At the same time, some prefabricated modular data centers offer lower transportation costs along with greater flexibility and even faster delivery than "mobile" ones.

History

2005-2006: Google and Sun create the first container data centers

Previously, it was believed that Sun Microsystems pioneered mobile data centers by releasing the first Blackbox product in 2006. However, as it later became known, the pioneer was Google, which in the fall of 2005 built its first data center based on 45 containers, each of which held 1,160 servers and consumed 250 kW of electricity. The power density was 780 watts per square foot, and the cold aisle temperature did not exceed 27.2°C.
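A quick back-of-the-envelope check of these figures; the conversion factor and the derived per-server and per-container values below are illustrative only and are not stated in the source:

    # Sanity check of the Google container figures cited above:
    # 1,160 servers and 250 kW per container, 780 W per square foot.
    SERVERS_PER_CONTAINER = 1160
    POWER_PER_CONTAINER_W = 250_000
    POWER_DENSITY_W_PER_SQFT = 780
    SQFT_TO_SQM = 0.0929  # square feet to square meters

    power_per_server_w = POWER_PER_CONTAINER_W / SERVERS_PER_CONTAINER    # ~215 W per server
    implied_area_sqft = POWER_PER_CONTAINER_W / POWER_DENSITY_W_PER_SQFT  # ~320 sq ft
    implied_area_sqm = implied_area_sqft * SQFT_TO_SQM                    # ~30 sq m

    print(f"Per-server power: {power_per_server_w:.0f} W")
    print(f"Implied per-container footprint: {implied_area_sqft:.0f} sq ft (~{implied_area_sqm:.0f} sq m)")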

2010: Only 80 containers sold worldwide

Despite the rapid growth of MSDC offerings from leading manufacturers, this market segment, according to IDC estimates, remains small, although fast-growing. In 2010, just over 80 containers were sold worldwide, which is, however, twice as many as in 2009.

Analysts from J’son & Partners (J&P) call integration the main factors in the development of the commercial data center market in Russia Russian business V world economy, growth of Internet traffic, national projects and government programs, as well as continuity of work as competitive advantage companies. Despite the fact that the demand for MDCs is not yet great (no more than a dozen projects have been implemented so far), more than seven leading Russian IT companies have already developed and offered their own options for building mobile data centers.

At this time, MSDCs are in demand around the world by companies in the communications industry, oil and gas sector, large industrial and manufacturing holdings, metallurgical and energy enterprises.

Some integrator companies in Russia continue to work on technological solutions in this domain:

  • Stack Group, offering the mobile solution Stack.Cube,

Companies such as Cherus and NVision Group also had a number of projects in this direction.

One of the reasons for choosing mobile data centers is the ability to connect them in a short time. On average, the production of an MSDC requires from 3 to 6 months, and the installation and connection time is limited to two weeks.

2011: Study of the efficiency of modular container data centers

The Green Grid consortium published in November 2011 the results of a study on the efficiency of data centers assembled from prefabricated modular blocks that already contain the necessary equipment and software. Such modules are produced by a number of companies, including Dell, SGI, Capgemini and Microsoft.

Modular centers cost less to build because of reduced design costs. Modules tend to be denser than conventional centers, take up less space, and are often more productive because design flaws are identified at the module design stage. In addition, modular data centers are typically optimized for energy consumption and cooling efficiency and are designed to operate over wide temperature and humidity ranges. A common challenge for modular centers is reliability, the Green Grid report notes: the level of redundancy, robustness and guaranteed runtime of a module's different components can vary greatly, so the reliability of the entire module ends up depending on a single element. Many are also concerned about the physical security of modules.

At this time, no more than ten mobile data centers (MDCs) operate in Russia. A pick-up in this area was expected: more than seven companies are already offering projects to create container data centers, and all of them note growing demand for mobile data centers - in 2011, the number of orders for mobile center projects exceeded the combined total for 2009-2010. Large companies such as VimpelCom and the aircraft manufacturer Irkut have already appreciated the advantages of mobile data centers, which clearly indicates the growing popularity of MSDCs on the Russian market.

Customers of these projects understand that by packaging a data center in a container, they will receive a completely finished product on a turnkey basis with minimal financial costs and high return on investment. The MSDC can become both the main center for storing and processing data, and a backup one.

2012: APC study by Schneider Electric

According to research from APC by Schneider Electric, modular containerized Data Centers are faster to deploy, more reliable to operate, and have a lower total cost of ownership than data centers built independently from various components.

In 2012, APC by Schneider Electric specialists conducted a detailed study of the problem of power and cooling of modular containerized data centers and a comparative analysis with self-built data centers.

Key issues addressed

Modular data centers can significantly reduce investments in the construction of data centers, primarily due to the absence of the need to build a permanent building and the ability to expand the site as needed. Savings on capital costs can reach 50%. In the context of an economic recession, the demand for such solutions is growing.

The concept of a modular data center (MDC) is built on the idea of a standard solution: a module with a defined set of power supply, cooling and physical security systems. Most components of such a block are pre-assembled and pre-installed by the manufacturer, which significantly reduces the installation work for the ready-made solution on the customer's side. In addition, individual modules are independent in terms of life support, which, combined with short deployment times, allows data center capacity to be increased as needed.

“Thanks to scalability, the possibility of phased commissioning, as well as the independence of each module for critical life support systems, the customer has the opportunity to gradually invest only in the volume of the data center required at the moment. This fully corresponds to the current economic situation of many companies, in addition, in this way the problem of underutilization and idle capacity is solved, which ultimately significantly reduces the payback period of the data center,” says Stanislav Tereshkin, head of the data center department of the Asteros group.

What are the benefits of the MDC?

The economic feasibility of modular technologies covers four aspects: reduced capital investment, reduced operating costs, no equipment downtime thanks to capacity being expanded as needed, and a shorter return on investment due to faster commissioning of the site.

To launch a modular data center, it takes 4–6 months from the start of design, which is 2–3 times less than for a conventional data center. “The accelerated commissioning of modular data centers is also due to the fact that the modules, along with elements of the engineering infrastructure, are assembled directly at the vendor’s plant, where all engineering and installation resources are concentrated. This is much faster than performing this work at the customer’s site,” explains Vsevolod Vorobiev, head of the data center department of the Jet Infosystems network solutions center.

An important factor in reducing time is that modular solutions do not require the construction of a permanent concrete building with thick walls and ceilings. As a rule, modular technologies require a "prefabricated" room that can be placed either inside another building or in an "open field". "When asking what a data center should be like, we often come across the stereotype of a concrete building with strong walls and ceilings," shares Alexander Perevedentsev, Deputy Head of the Engineering Systems Sales Support Department at Technoserv. "However, in reality such a data center has no advantages over a modular one. Usually the desire for monumentality fades when the customer compares the construction time and budget with the cost of erecting a modular structure."

“With proper planning, the design and commissioning period can be cut in half or more due to the virtual absence of capital construction (including design, project examination, obtaining construction permits and the construction itself),” comments NVision Group expert Arseny Fomin.

In terms of capital costs, certain savings can be achieved through the use of ready-made standard products, which are cheaper than solutions developed to order. However, an even greater role is played by the ability to commission individual units in stages, thereby reducing the size of the initial investment. “Constructing a data center based on the principle of modularity makes it possible to invest money in stages, creating operational fault-tolerant clusters that allow you to simultaneously solve the business tasks assigned to them (make a profit) and return the investment. The initial investment of the project is 20%–30%, the rest is distributed over time in accordance with the stages of putting the data center into operation,” explains Pavel Dmitriev, Deputy Director of the Department of Intelligent Buildings at Croc.
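As a rough illustration of such a phased scheme, here is a minimal sketch; the total budget and the number of later stages are hypothetical assumptions, and only the 20%–30% initial share comes from the quote above:

    # Illustrative phased-investment schedule for a modular data center build-out.
    # The total budget and the three later stages are hypothetical assumptions;
    # the 20-30% initial share follows the figure quoted in the text.
    TOTAL_CAPEX = 100.0      # total project budget, arbitrary units
    INITIAL_SHARE = 0.25     # initial investment, taken as the midpoint of 20-30%
    LATER_STAGES = 3         # remaining spend spread over three commissioning stages

    spent = TOTAL_CAPEX * INITIAL_SHARE
    print(f"Initial stage: {spent:.0f} of {TOTAL_CAPEX:.0f}")
    per_stage = (TOTAL_CAPEX - spent) / LATER_STAGES
    for stage in range(1, LATER_STAGES + 1):
        spent += per_stage
        print(f"After stage {stage}: cumulative spend {spent:.0f} of {TOTAL_CAPEX:.0f}")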

The overall CAPEX savings from modular solutions reach up to 30%, according to the research firm 451 Research. System integrators surveyed by CNews cite savings on capital costs ranging from 15% to 50%, depending on the design features of the facility.

In the area of operating costs, savings come from the unification of engineering systems, which reduces spending on maintenance personnel, spare parts and repairs. In addition, the energy bill can be reduced. According to Schneider Electric, the modular approach can lower the total cost of ownership (TCO) by $2–7 per 1 W of data center power. “Electricity savings come from the fact that modular data centers are usually more compact than “classic” ones. The volume of air that needs to be cooled is smaller, so air conditioning systems consume less electricity,” explains Valentin Foss, a representative of the Utilex company.
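To put the per-watt figure into perspective, a minimal sketch; the 500 kW facility size is a hypothetical example, and only the $2–7 per watt range comes from the Schneider Electric figure above:

    # Rough scale of the quoted TCO saving of $2-7 per watt of data center power.
    FACILITY_POWER_W = 500_000                    # hypothetical 500 kW data center
    SAVING_PER_W_LOW, SAVING_PER_W_HIGH = 2, 7    # USD per watt (quoted range)

    low = FACILITY_POWER_W * SAVING_PER_W_LOW
    high = FACILITY_POWER_W * SAVING_PER_W_HIGH
    print(f"Estimated TCO saving: ${low:,.0f} - ${high:,.0f}")  # $1,000,000 - $3,500,000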

However, not all market players agree that the savings on OPEX are significant: “Operating costs will be approximately the same compared to a non-modular data center,” believes Arseny Fomin. However, he stipulates that “savings can be achieved due to the absence of the need to maintain and repair a capital building.”

Examples of modular data center projects in 2015–2016.

  • Cherkizovo Group (integrator: Jet Infosystems; solution: Contain-RZ). A modular data center built to the requirements of the international TIA-942 standard; the reliability of the engineering systems corresponds to Tier II, with a fault tolerance coefficient of 99.749%. Area: about 200 sq. m. The data center houses 32 heavily loaded racks carrying the corporate network core and the server platforms for all major business applications (SAP, 1C, CSB, corporate mail, etc.). The MSDC consists of three modules matching the dimensions of a sea container, which avoided additional costs for transporting the modules from the manufacturer to the site. In addition, the modular solution eliminates risks such as design and installation errors.
  • LinxDataCenter (integrator: Insystems; solution: n/a). In 2016, the LinxDataCenter data center was expanded by 265 rack spaces. The project was implemented in an operating data center, in which three additional modules had to be built. A characteristic feature of the project was the large volume of civil works (excavating soil, pouring reinforced concrete floor slabs, installing columns and beams of metal structures) carried out while the existing data center modules continued to function without interruption and customers retained unhindered access to them. The newly created modules were put into operation as soon as they were ready; accordingly, the entire complex of engineering systems was created and commissioned in stages, with the full set of intermediate tests and commissioning work.
  • Technopark "Zhigulevskaya Valley" (integrator: Lanit-Integration; solution: Smart Shelter / AST Modular). The data center will have six computer rooms with a total area of 843 sq. m and will accommodate 326 racks with power from 7 to 20 kW each. A cooling system from AST Modular, including 44 Natural Free Cooling external air cooling modules, is deployed in four machine rooms on the second floor. The uninterruptible and guaranteed power supply system is built on Piller diesel-dynamic uninterruptible power supplies. The building is controlled using solutions based on Schneider Electric equipment. IT equipment is assembled in a fault-tolerant configuration and placed in Smart Shelter modular rooms, which reliably protect the data center from fire, moisture, vibration and other external influences.
  • Aeroflot (integrator: Technoserv; solution: IT Crew). The computer room houses 78 server racks with an electrical power of 10 kW each; the total area of the two machine rooms is 175 sq. m, which ensures ease of transportation and maintenance of the equipment. The server and engineering blocks are arranged vertically, in two tiers: the main engineering equipment occupies the first tier and the computing infrastructure the second. When the server blocks are joined, a single technological room is formed for installing active equipment. The engineering systems of Aeroflot's new modular data center are designed for round-the-clock uninterrupted operation; they are pre-installed, adjusted and tested.
  • Krasnoyarsk hydroelectric power station (integrator: Utilex / Lanit-Sibir; solution: Nota). A modular data center with internal dimensions of 10.5 x 3 x 3.2 m was installed on a site located at a safe elevation outside the body of the hydroelectric dam. The data center is fully equipped with all the necessary engineering systems: distributed power supply, air conditioning and humidity control, video surveillance, uninterruptible power supplies, monitoring and access control systems, automatic fire extinguishing, fire and security alarms, and a structured cabling system. The technical difficulty of the project was that the work was carried out under a live 500 kV power transmission line, and construction of the data center itself was subject to tight time constraints: during erection of the frame structure, the work had to fit into a strict, regulated schedule of power line outages.
Krasnoyarsk hydroelectric power station Utilex/Lanit-Sibir Nota As part of the project, a modular data center with internal dimensions of 10.5 * 3 * 3.2 m was installed on a site located at a safe level outside the body of the hydroelectric dam. The data center is fully equipped with all the necessary engineering systems: distributed power supply, air conditioning and humidity maintenance, video surveillance, uninterruptible power supplies, control systems, monitoring and access management, automatic fire extinguishing, fire and security alarms, as well as a structured cabling system. The technical difficulty of the project was that the work was carried out under an existing 500 kV power transmission line. The construction of the data center facility itself was limited by time constraints. During the construction of the frame structure of the center, the work fit into the tightest and most regulated schedule for disconnecting power lines.