Understanding The Cloud

In recent years, the IT industry has been abuzz about the cloud. Major IT companies and consulting firms have invested billions of dollars, pounds and yen in cloud technology. So what is all the fuss about?

Although the cloud generates far more heat than light, it still gives us reason to think about what we offer our customers. In some respects the cloud is nothing new; in others it is revolutionary and will undoubtedly change the way applications and services are delivered to users.

In addition, and this is already happening, users will finally be able to provide their own processing, memory, storage and network (PMSN) resources at one level, and receive applications and services at other levels, anywhere, anytime, from (almost) any device. In short, the cloud can free users, make remote work more feasible, simplify IT management, and shift business spending from capital expenditure to operational expenditure. When a business receives its applications and services from the cloud, depending on the type of cloud, it may no longer need a data center or server room; all it has to pay for is the applications and services it uses. Some in IT may view this as a threat, others as a liberation.

So what is a cloud?

To understand the cloud, you need to understand the basic technologies, principles and drivers that underpin it and have propelled its development.

Virtualization

Over the past decade, the industry has been busy consolidating data centers and server rooms, replacing racks full of underused "tin" with fewer racks holding fewer, more capable machines. At the same time, the number of applications that can live in this new, smaller space keeps increasing.

Virtualization: Why would you do that?

A typical server hosting a single application runs at around 15% utilization. In other words, the server is powered on but doing very little real work. Running a data center full of servers at 15% utilization is a financial nightmare: at that level of use it can take years to recoup the initial investment. Servers have a useful life of about three years and lose roughly 50% of their value in the first year; after three years they are worth very little.
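As a rough illustration of that arithmetic, the sketch below works through what low utilization does to the effective cost of a server. The purchase price and the exact figures are assumptions chosen only to make the point, not numbers from this article.

```python
# Rough sketch: what ~15% utilization does to the effective cost of a server.
# All figures below are illustrative assumptions, not real pricing data.

PURCHASE_PRICE = 5000.0        # assumed up-front cost of one server (USD)
USEFUL_LIFE_YEARS = 3          # the ~3-year lifespan mentioned above
FIRST_YEAR_DEPRECIATION = 0.5  # ~50% of value gone in year one
UTILIZATION = 0.15             # ~15% average utilization

# Value left after year one of depreciation.
value_after_year_one = PURCHASE_PRICE * (1 - FIRST_YEAR_DEPRECIATION)

# The whole purchase price buys three years of capacity,
# of which only 15% is actually used.
useful_compute_years = USEFUL_LIFE_YEARS * UTILIZATION
cost_per_useful_year = PURCHASE_PRICE / useful_compute_years

print(f"Value left after year one: ${value_after_year_one:,.0f}")
print(f"Useful compute delivered: {useful_compute_years:.2f} server-years")
print(f"Effective cost per useful server-year: ${cost_per_useful_year:,.0f}")
```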

Today, we have sophisticated toolkits that let you virtualize almost any server, creating clusters of virtualized hosts that can accommodate multiple applications and services. This has brought many advantages: a higher density of application servers hosted on fewer physical machines allows the data center to deliver more applications and services.

It’s cooler, greener

In addition to reducing the number of individual hardware systems through the rapid adoption of virtualization, data center designers and hardware manufacturers have introduced other methods and technologies to reduce the amount of energy needed to cool systems and facilities. Today’s servers and other hardware are designed with airflow in mind. A server can be fitted with front- or rear-mounted fans that push heated air in the direction that suits the data center’s airflow design. Airflow has become a science in its own right in the computer industry, and the "hot aisle"/"cold aisle" layout is increasingly common on the data center floor. Having systems that can respond to and participate in this design can yield significant energy savings. Where a data center is built is also becoming increasingly important.

There is also the green agenda. Businesses want to be seen as participants in this new and popular movement. The energy needed to run a large data center is in the megawatt range and hardly meets environmental aspirations; large data centers will always require a lot of energy. Equipment manufacturers are trying to reduce the power requirements of their products, and data center designers are doing their best to make better use of (natural) airflow. On the whole these efforts are beneficial, and if going green also saves money, so much the better.

The negatives

Working hardware harder leads to more failures, mainly due to heat. In the old one-to-one model (one application per server), a server sits idle, cool and underutilized; it costs more than it should in ROI terms, but it tends to enjoy a long life. With virtualization, the higher load on each host generates considerably more heat.

Another drawback of virtualization is virtual machine density. Imagine 500 hardware hosts, each running 192 virtual machines: that is 96,000 virtual machines. The average number of virtual machines per host is governed by the number of virtual machines per processor core recommended by the vendor. If a server has 16 processor cores, you can create roughly 12 virtual machines per core (it depends entirely on what each virtual machine is used for), giving about 192 virtual machines per host. The rest is simple arithmetic: 500 hosts × 192 VMs = 96,000 virtual machines. Architects take this into account when designing large virtualization infrastructures and keep strict control over growth, but the danger of sprawl is real.
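That back-of-the-envelope arithmetic is easy to script. The sketch below simply reproduces the numbers above (16 cores, roughly 12 VMs per core, 500 hosts); the headroom factor is an added assumption to show how quickly the totals move.

```python
# Sketch of the VM-density arithmetic from the text.
# The 16 cores, 12 VMs per core and 500 hosts come from the article;
# the headroom factor is an illustrative assumption.

CORES_PER_HOST = 16
VMS_PER_CORE = 12        # vendor guidance; depends entirely on workload
HOSTS = 500
HEADROOM = 0.75          # assumed: only plan to 75% of the theoretical maximum

vms_per_host = CORES_PER_HOST * VMS_PER_CORE      # 192
theoretical_total = vms_per_host * HOSTS          # 96,000
planned_total = int(theoretical_total * HEADROOM)

print(f"VMs per host (theoretical): {vms_per_host}")
print(f"Estate total (theoretical): {theoretical_total:,}")
print(f"Estate total with {HEADROOM:.0%} planning headroom: {planned_total:,}")
```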

Virtualization: The basics of how to do it

Take a single computer, a server, and install software that abstracts the underlying hardware resources: processing, memory, storage and network. Once you have installed this virtualization software, you can use it to convince different operating systems that they are running in a familiar environment they recognize. This works because the virtualization software (must) contain all the drivers the operating system uses to communicate with the hardware.
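For any of this to work well, the host CPU needs hardware virtualization support. As a minimal sketch, assuming a Linux host (the article names no operating system), the snippet below checks /proc/cpuinfo for the Intel VT-x ("vmx") or AMD-V ("svm") flags that hypervisors rely on.

```python
# Minimal sketch, assuming a Linux host: check whether the CPU advertises
# hardware virtualization extensions (Intel VT-x = "vmx", AMD-V = "svm").

def has_virtualization_support(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    try:
        with open(cpuinfo_path) as f:
            tokens = f.read().lower().split()
    except OSError:
        return False  # not Linux, or /proc not readable
    return "vmx" in tokens or "svm" in tokens

if __name__ == "__main__":
    if has_virtualization_support():
        print("CPU reports hardware virtualization support (vmx/svm).")
    else:
        print("No vmx/svm flag found; hardware virtualization may be unavailable.")
```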

At the bottom of the virtualization stack is the hardware host. Install a hypervisor on this machine. The hypervisor abstracts the hardware resources and presents them to the virtual machines (VMs). Install the appropriate operating system in a virtual machine, then install the application(s). A single hardware host can support many guest operating systems, or virtual machines, depending on the purpose of each virtual machine and the number of processor cores in the host. Each hypervisor vendor has its own recommended ratio of virtual machines to cores, but you also need to understand exactly what each virtual machine will be doing in order to size it correctly. Sizing and provisioning virtual infrastructure is the new dark art of IT, and there are plenty of tools and toolkits for this important and critical task. Despite all the clever tooling, the art of sizing still comes down to educated guesswork and experience. That means the machines haven’t taken over just yet!
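As an illustration of that sizing exercise, here is a minimal sketch that fits VMs onto a host by resource demand rather than by core ratio alone. Every figure in it (host capacity, per-VM demand, overcommit ratio, reserve) is an assumed example, not vendor guidance.

```python
# Minimal sizing sketch: how many VMs of a given profile fit on one host?
# All capacities and demands below are assumed example values.

from dataclasses import dataclass

@dataclass
class VMProfile:
    vcpus: int
    ram_gb: int

@dataclass
class Host:
    cores: int
    ram_gb: int
    vcpu_ratio: float  # assumed vCPU-to-core overcommit ratio
    reserve: float     # fraction held back for the hypervisor and failover

    def fits(self, vm: VMProfile) -> int:
        usable_vcpus = self.cores * self.vcpu_ratio * (1 - self.reserve)
        usable_ram = self.ram_gb * (1 - self.reserve)
        return int(min(usable_vcpus // vm.vcpus, usable_ram // vm.ram_gb))

host = Host(cores=16, ram_gb=256, vcpu_ratio=4.0, reserve=0.2)
vm = VMProfile(vcpus=2, ram_gb=8)
print(f"Estimated VMs of this profile per host: {host.fits(vm)}")
```

Even a toy model like this shows why sizing stays an educated guess: change the overcommit ratio or the reserve and the answer moves sharply, and real workloads rarely match a single neat profile.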
