
The History of the Development of Computer Technology

PC BASICS

People have always felt the need to count. To do this, they used their fingers, pebbles, which they put in piles or placed in a row. The number of objects was recorded using lines that were drawn along the ground, using notches on sticks and knots that were tied on a rope.

With the increase in the number of objects to be counted and the development of the sciences and crafts, the need arose to perform simple calculations. The oldest counting instrument, known in many countries, is the abacus (in Ancient Rome the counting pebbles used with it were called calculi). The abacus allows simple calculations to be performed on large numbers, and it proved such a successful tool that it survived from ancient times almost to the present day.

No one can name the exact time and place of the appearance of the abacus. Historians agree that it is several thousand years old, and that its homeland may have been Ancient China, Ancient Egypt or Ancient Greece.

1.1. A BRIEF HISTORY OF THE DEVELOPMENT OF COMPUTING TECHNOLOGY

With the development of the exact sciences, an urgent need arose to carry out large numbers of precise calculations. In 1642, the French mathematician Blaise Pascal constructed the first mechanical adding machine, known as Pascal's adding machine (Fig. 1.1). The machine was a combination of interlocking wheels and drive gears. The wheels were marked with the digits 0 to 9. When the first wheel (units) made a full revolution, the second wheel (tens) automatically advanced; when it in turn reached 9, the third wheel began to turn, and so on. Pascal's machine could only add and subtract.
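The carry principle in Pascal's machine is easy to model in a few lines of code. Below is a minimal sketch in Python (an illustration of the idea, not a description of the actual mechanism), where each "wheel" holds a digit from 0 to 9 and a full revolution advances the next wheel:

def add_on_wheels(wheels, amount, position=0):
    """Add `amount` to the wheel at `position`, propagating carries leftward."""
    while amount and position < len(wheels):
        total = wheels[position] + amount
        wheels[position] = total % 10   # a wheel can only show 0..9
        amount = total // 10            # a full revolution carries to the next wheel
        position += 1
    return wheels

wheels = [9, 9, 0, 0]      # units, tens, hundreds, thousands: the number 99
add_on_wheels(wheels, 1)
print(wheels)              # [0, 0, 1, 0], i.e. 100: the carry rippled through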

In 1694, the German mathematician Gottfried Wilhelm von Leibniz designed a more advanced calculating machine (Fig. 1.2). He was convinced that his invention would find wide application not only in science but also in everyday life. Unlike Pascal's machine, Leibniz's used stepped cylinders rather than wheels and gears. Each cylinder was marked with numbers and carried nine rows of teeth: the first row contained one tooth, the second two, and so on up to the ninth row with its nine teeth. The cylinders were movable and were brought into a definite position by the operator. The design of Leibniz's machine was more advanced: it could perform not only addition and subtraction but also multiplication, division and even the extraction of square roots.
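The key advance here is multiplication on a machine that fundamentally only adds. A toy sketch of the principle (assumed purely for illustration; the real stepped-drum mechanism is, of course, mechanical):

def multiply_by_repeated_addition(a, b):
    result = 0
    for _ in range(b):        # one crank turn per unit of the multiplier
        result = result + a   # the machine only ever adds
    return result

print(multiply_by_repeated_addition(27, 14))   # 378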

Interestingly, the descendants of this design survived until the 1970s in the form of mechanical calculators (the Felix adding machine) and were widely used for various calculations (Fig. 1.3). However, already at the end of the 19th century, with the invention of the electromagnetic relay, the first electromechanical counting devices appeared. In 1887, Herman Hollerith (USA) invented an electromechanical tabulator into which numbers were entered using punched cards. The idea of using punched cards was inspired by railway tickets punched by conductors. The 80-column punched card he developed did not undergo significant changes and served as an information carrier in the first three generations of computers. Hollerith tabulators were used during the first population census in Russia in 1897, and the inventor himself made a special visit to St. Petersburg on that occasion. From that time on, electromechanical tabulators and similar devices came into wide use in accounting.

At the beginning of the 19th century, Charles Babbage formulated the basic principles that should underlie the design of a fundamentally new type of computing machine.

In such a machine, in his opinion, there should be a “warehouse” (store) for holding digital information, and a special device, which Babbage called a “mill,” for carrying out operations on numbers taken from the store. Another device would control the sequence of operations and the transfer of numbers between the store and the mill; finally, the machine needed devices for entering the initial data and outputting the results. The machine was never built, and only models of it existed (Fig. 1.4), but the principles underlying it were later implemented in digital computers.

Babbage's scientific ideas captivated the daughter of the famous English poet Lord Byron, Countess Ada Augusta Lovelace. She laid down the first fundamental ideas about the interaction of various blocks of a computer and the sequence of solving problems on it. Therefore, Ada Lovelace is rightfully considered the world's first programmer. Many of the concepts introduced by Ada Lovelace in the descriptions of the world's first programs are widely used by modern programmers.

Fig. 1.1. Pascal's adding machine

Fig. 1.2. Leibniz's calculating machine

Fig. 1.3. The Felix adding machine

Fig. 1.4. Babbage's machine

The year 1934 marked the beginning of a new era of computing technology based on electromechanical relays: the American company IBM (International Business Machines) began producing alphanumeric tabulators capable of performing multiplication. In the mid-1930s, a prototype of the first local computer network was created on the basis of tabulators: in Pittsburgh (USA), a department store installed a system of 250 terminals connected by telephone lines to 20 tabulators and 15 typewriters for processing customer payments. In 1934-1936 the German engineer Konrad Zuse came up with the idea of creating a universal computer with program control and storage of information in a memory device. His Z-3 machine, completed in 1941, was the first program-controlled computer and the prototype of modern computers (Fig. 1.5).


Fig. 1.5. Zuse's computer

It was a relay machine using the binary number system, with a memory for 64 floating-point numbers. The arithmetic unit used parallel arithmetic. Each instruction contained an operation part and an address part. Data entry was carried out on a decimal keyboard; digital output was provided, as was automatic conversion of decimal numbers to binary and back. The machine performed about three additions per second.
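The automatic conversion between decimal and binary that the Z-3 performed can be illustrated schematically (integers only here; the Z-3 itself worked with binary floating-point numbers):

def to_binary(n):
    bits = []
    while n:
        bits.append(n % 2)    # peel off the lowest binary digit
        n //= 2
    return bits[::-1] or [0]

def to_decimal(bits):
    value = 0
    for bit in bits:
        value = value * 2 + bit   # shift left and add the next bit
    return value

print(to_binary(13))              # [1, 1, 0, 1]
print(to_decimal([1, 1, 0, 1]))   # 13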

In the early 1940s, IBM's laboratories, together with scientists from Harvard University, began developing one of the most powerful electromechanical computers of its day. It was called MARK-1, contained 760 thousand components and weighed 5 tons (Fig. 1.6).

Fig. 1.6. The MARK-1 calculating machine

The last major project in the field of relay computing technology was the RVM-1, built in 1957 in the USSR, which for a number of tasks was quite competitive with the electronic computers of the time. However, with the advent of the vacuum tube the days of electromechanical devices were numbered. Electronic components were far superior in speed and reliability, and this decided the fate of electromechanical computers. The era of electronic computers had arrived.

The transition to the next stage in the development of computing and programming technology would have been impossible without fundamental scientific research in the transmission and processing of information. The development of information theory is associated primarily with the name of Claude Shannon. Norbert Wiener is rightfully considered the father of cybernetics, and John von Neumann the creator of the theory of automata.

The concept of cybernetics was born from the synthesis of many scientific directions: first, as a general approach to describing and analyzing the actions of living organisms and of computers and other automata; second, from the analogies between the behavior of communities of living organisms and human society, and the possibility of describing them with a general theory of control; and finally, from the synthesis of the theory of information transmission and statistical physics, a synthesis that led to the important discovery connecting the amount of information with negative entropy in a system. The term “cybernetics” itself comes from the Greek word for “helmsman”; it was first used in its modern sense by N. Wiener in 1947. The book in which Wiener formulated the basic principles of the field is called “Cybernetics: Or Control and Communication in the Animal and the Machine.”

Claude Shannon was an American engineer and mathematician, the man who is called the father of modern information theory. He showed that the operation of switches and relays in electrical circuits can be described by the algebra invented in the mid-19th century by the English mathematician George Boole. Since then, Boolean algebra has been the basis for analyzing the logical structure of systems of any level of complexity.
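Shannon's observation can be stated very compactly: a relay contact is a Boolean variable, contacts in series implement AND, and contacts in parallel implement OR. A small illustrative sketch:

def circuit(a, b, c):
    # contact a in series with the parallel pair (b, c)
    return a and (b or c)

# Enumerate the full truth table of the circuit.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            print(a, b, c, "->", circuit(a, b, c))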

Shannon proved that every noisy communication channel is characterized by a maximum rate of information transmission, now called the Shannon limit. At transmission rates above this limit, errors in the transmitted information are inevitable; below it, however, suitable coding methods can make the probability of error arbitrarily small for any noisy channel. His research formed the basis for the development of systems for transmitting information over communication lines.
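Stated concretely (a standard result, though the formula itself is not given in the text above): for a channel of bandwidth B and signal-to-noise power ratio S/N, this limit, the channel capacity C in bits per second, is

C = B · log2(1 + S/N)

For example, a telephone channel of bandwidth 3 kHz with S/N = 1000 (30 dB) can carry at most about 3000 · log2(1001) ≈ 30,000 bits per second, no matter how clever the encoding.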

In 1945, the brilliant American mathematician of Hungarian origin John von Neumann formulated the basic concept of storing a computer's instructions in its own internal memory, which gave a huge impetus to the development of electronic computing technology.

During World War II he served as a consultant at the Los Alamos atomic center, where he worked on calculations for the implosion detonation of the atomic bomb and participated in the development of the hydrogen bomb.

Von Neumann also wrote works on the logical organization of computers, the functioning of computer memory, self-reproducing systems and other topics. He took part in the creation of the first electronic computer, ENIAC, and the computer architecture he proposed became the basis for all subsequent models; it is still called the “von Neumann architecture.”

I generation of computers. In 1946, work was completed in the USA to create ENIAC, the first computer using electronic components (Fig. 1.7).

Fig. 1.7. ENIAC, the first electronic computer

The new machine had impressive parameters: it used 18 thousand vacuum tubes, occupied a room of 300 m², had a mass of 30 tons and consumed 150 kW of power. It operated at a clock frequency of 100 kHz and performed an addition in 0.2 ms and a multiplication in 2.8 ms, three orders of magnitude faster than relay machines. The shortcomings of the new machine soon became apparent. In its structure, ENIAC resembled the mechanical computers: it used the decimal system; programs were set manually on 40 plugboards; and reconfiguring the switching fields took weeks. During trial operation it turned out that the machine's reliability was very low: troubleshooting could take several days. Punched tapes and punched cards, magnetic tapes and printing devices were used for data input and output. First-generation computers implemented the concept of the stored program. They were used for weather forecasting, for solving energy and military problems, and in other important areas.
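A back-of-the-envelope check of the "three orders of magnitude" claim, using the figures above and the Z-3's roughly three additions per second quoted earlier:

eniac_adds_per_second = 1 / 0.0002      # addition in 0.2 ms -> 5000 per second
relay_adds_per_second = 3               # the Z-3's figure quoted earlier
print(eniac_adds_per_second / relay_adds_per_second)   # about 1700, i.e. ~10**3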

II generation of computers. One of the most important advances that led to the revolution in computer design, and ultimately to the creation of personal computers, was the invention of the transistor in 1948. The transistor, a solid-state electronic switching element (gate), takes up far less space and consumes far less power while doing the same job as a vacuum tube. Computing systems built on transistors were much more compact, more economical and far more efficient than tube-based ones. The transition to transistors began the miniaturization that made modern personal computers possible (as well as other radio devices: radios, tape recorders, televisions and so on). For second-generation machines the task of automating programming arose, since the gap between the time needed to develop programs and the computing time itself was growing. The second stage in the development of computing technology, in the late 1950s and early 1960s, was characterized by the creation of developed programming languages (Algol, Fortran, Cobol) and by mastering the automation of the flow of tasks by the computer itself, that is, by the development of operating systems.

The first device designed to make counting easier was the abacus. With the help of the abacus beads it was possible to perform addition and subtraction and simple multiplication.

1642 - French mathematician Blaise Pascal designed the first mechanical adding machine, the Pascalina, which could mechanically perform the addition of numbers.

1673 - Gottfried Wilhelm Leibniz designed an adding machine that could mechanically perform the four arithmetic operations.

First half of the 19th century - the English mathematician Charles Babbage tried to build a universal computing device, that is, a computer. Babbage called it the Analytical Engine. He determined that a computer must contain memory and be controlled by a program. According to Babbage, a computer is a mechanical device whose programs are set using punched cards - cards of thick paper with information represented by punched holes (at that time they were already widely used in looms).

1941 - German engineer Konrad Zuse built a small computer based on several electromechanical relays.

1943 - in the USA, at one of the IBM enterprises, Howard Aiken created a computer called “Mark-1”. It allowed calculations to be carried out hundreds of times faster than by hand (with an adding machine) and was used for military calculations. It used a combination of electrical signals and mechanical drives. “Mark-1” was about 15 m long and 2.5 m high and contained 750,000 parts. The machine could multiply two 32-digit numbers in 4 seconds.

1943 - in the USA, a group of specialists led by John Mauchly and Presper Eckert began to build the ENIAC computer, based on vacuum tubes.

1945 - the mathematician John von Neumann was brought in to work on ENIAC and prepared a report on the computer. In this report von Neumann formulated the general principles of the functioning of computers, that is, of universal computing devices. To this day the vast majority of computers are made in accordance with the principles laid down by John von Neumann.

1947 - Eckert and Mauchly began development of the first electronic production machine, UNIVAC (Universal Automatic Computer). The first model (UNIVAC-1) was built for the US Census Bureau and put into operation in the spring of 1951. The synchronous, sequential UNIVAC-1 was created on the basis of the ENIAC and EDVAC computers. It operated at a clock frequency of 2.25 MHz and contained about 5,000 vacuum tubes. An internal storage with a capacity of 1,000 12-digit decimal numbers was implemented on 100 mercury delay lines.

1949 - the English researcher Maurice Wilkes built the EDSAC, the first computer to embody von Neumann's principles.

1951 - J. Forrester published an article on the use of magnetic cores for storing digital information. The Whirlwind-1 machine was the first to use magnetic-core memory, consisting of two cubes of 32 × 32 × 17 cores that provided storage of 2,048 16-bit binary words, each with one parity bit.

1952 - IBM released its first industrial electronic computer, the IBM 701, a synchronous parallel computer containing 4,000 vacuum tubes and 12,000 diodes. An improved version, the IBM 704, was distinguished by high speed; it used index registers and represented data in floating-point form.

After the IBM 704, the IBM 709 was released, which in architectural terms was close to the machines of the second and third generations. In this machine indirect addressing and input-output channels appeared for the first time.

1952 - Remington Rand released the UNIVAC 1103 computer, the first to use software interrupts. Remington Rand employees used an algebraic form of writing algorithms called “Short Code” (the first interpreter, created in 1949 by John Mauchly).

1956 - IBM developed floating magnetic heads on an air cushion. Their invention made it possible to create a new type of memory, disk storage devices, whose importance was fully appreciated in the following decades of the development of computing technology. The first disk storage devices appeared in the IBM 305 RAMAC machine. Its disk pack consisted of 50 metal disks with a magnetic coating that rotated at 12,000 rpm. The surface of each disk contained 100 tracks for recording data, each holding 10,000 characters.

1956 - Ferranti released the Pegasus computer, in which the concept of general-purpose registers (GPRs) was first implemented. With their advent the distinction between index registers and accumulators was eliminated, and the programmer had at his disposal not one but several accumulator registers.

1957 - a group led by John Backus completed work on FORTRAN, the first high-level programming language. The language, first implemented on the IBM 704, helped to expand the range of computer applications.

1960s - 2nd generation of computers: computer logic elements are implemented on the basis of semiconductor transistor devices, and algorithmic programming languages such as Algol, Fortran and Cobol are developed.

1970s - 3rd generation of computers: integrated circuits containing thousands of transistors on a single semiconductor chip. Operating systems and structured programming languages begin to be created.

1974 - several companies announced the creation of personal computers based on the Intel-8008 microprocessor: devices that perform the same functions as a large computer but are designed for a single user.

1975 - the first commercially distributed personal computer, the Altair-8800, based on the Intel-8080 microprocessor, appeared. It had only 256 bytes of RAM, and there was no keyboard or screen.

Late 1975 - Paul Allen and Bill Gates (the future founders of Microsoft) created a Basic language interpreter for the Altair, which allowed users to communicate with the computer simply and to write programs for it easily.

August 1981 - IBM introduced the IBM PC personal computer. Its main microprocessor was the 16-bit Intel-8088, which made it possible to work with 1 megabyte of memory.

1980s - 4th generation of computers, built on large-scale integrated circuits. Microprocessors are implemented as a single chip, and mass production of personal computers begins.

1990s - 5th generation of computers: ultra-large-scale integrated circuits. Processors contain millions of transistors. Global computer networks come into mass use.

2000s - 6th generation of computers: integration of computers and household appliances, embedded computers, the development of network computing.

The ENIAC machine created by Mauchly and Eckert worked a thousand times faster than the Mark-1. But it turned out that this computer stood idle most of the time, since setting the method of calculation (the program) required connecting the wires in the required way, which took hours or even days, while the calculation itself might then take only a few minutes or even seconds.

To simplify and speed up the process of setting up programs, Mauchly and Eckert began to design a new computer that could store the program in its memory. In 1945 the famous mathematician John von Neumann was brought into the work, and he prepared a report on the new computer. The report was sent to many scientists and became widely known, because in it von Neumann clearly and simply formulated the general principles of the functioning of computers, that is, of universal computing devices. To this day the vast majority of computers are made in accordance with the principles that John von Neumann set out in his 1945 report. The first computer to embody von Neumann's principles was built in 1949 by the English researcher Maurice Wilkes.
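The essence of those principles, a single memory holding both instructions and data, with a processor running a fetch-decode-execute cycle, can be shown with a deliberately tiny model (the instruction set here is invented purely for illustration):

def run(memory):
    acc, pc = 0, 0                         # accumulator and program counter
    while True:
        op, arg = memory[pc]               # fetch and decode the next instruction
        pc += 1
        if op == "LOAD":    acc = memory[arg]
        elif op == "ADD":   acc += memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "HALT":  return memory

# Cells 0-3 hold the program, cells 4-5 hold data: compute memory[4] * 2.
memory = {0: ("LOAD", 4), 1: ("ADD", 4), 2: ("STORE", 5), 3: ("HALT", 0),
          4: 21, 5: 0}
print(run(memory)[5])   # 42

Because the program itself sits in memory, it can be loaded, replaced or even modified like any other data, which is exactly what freed machines from days of manual rewiring.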

The development of the first electronic production machine, UNIVAC (Universal Automatic Computer), was begun around 1947 by Eckert and Mauchly, who founded the Eckert-Mauchly company in December of that year. The first model (UNIVAC-1) was built for the US Census Bureau and put into operation in the spring of 1951. The synchronous, sequential UNIVAC-1 was created on the basis of the ENIAC and EDVAC computers. It operated at a clock frequency of 2.25 MHz and contained about 5,000 vacuum tubes. An internal storage device with a capacity of 1,000 12-digit decimal numbers was implemented on 100 mercury delay lines.

Soon after the UNIVAC-1 machine was put into operation, its developers came up with the idea of ​​automatic programming. It boiled down to ensuring that the machine itself could prepare the sequence of commands needed to solve a given problem.

A strong factor limiting the work of computer designers in the early 1950s was the lack of high-speed memory. In the words of one of the pioneers of computing, J. P. Eckert, “the architecture of a machine is determined by its memory.” Researchers therefore concentrated their efforts on the memory properties of ferrite rings strung on wire matrices.

In 1951, J. Forrester published an article on the use of magnetic cores for storing digital information. The Whirlwind-1 machine was the first to use magnetic-core memory, consisting of two cubes of 32 × 32 × 17 cores that provided storage of 2,048 16-bit binary words, each with one parity bit.
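The role of the 17th bit is simple error detection. A sketch assuming even parity (the convention actually used is not specified above):

def with_parity(word):
    parity = bin(word).count("1") % 2     # make the total count of 1s even
    return word, parity

def is_intact(word, parity):
    return (bin(word).count("1") + parity) % 2 == 0

w, p = with_parity(0b1011001110001111)
print(is_intact(w, p))            # True
print(is_intact(w ^ 0b1000, p))   # False: a single flipped bit is detected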

Soon IBM, too, became involved in the development of electronic computers. In 1952 it released its first industrial electronic computer, the IBM 701, a synchronous parallel computer containing 4,000 vacuum tubes and 12,000 germanium diodes. An improved version, the IBM 704, was distinguished by high speed; it used index registers and represented data in floating-point form.

IBM 704
After the IBM 704, the IBM 709 was released, which in architectural terms was close to the machines of the second and third generations. In this machine indirect addressing and I/O channels appeared for the first time.

In 1956 IBM developed floating magnetic heads on an air cushion. Their invention made it possible to create a new type of memory, disk storage devices, whose importance was fully appreciated in the subsequent decades of the development of computing technology. The first disk storage devices appeared in the IBM 305 RAMAC machine. Its disk pack consisted of 50 magnetically coated metal disks that rotated at 12,000 rpm. The surface of each disk contained 100 tracks for recording data, each holding 10,000 characters.

Following the first production computer, UNIVAC-1, Remington Rand released the UNIVAC-1103 in 1952, which worked 50 times faster. Later, software interrupts were used for the first time in the UNIVAC-1103.

Remington Rand employees used an algebraic form of writing algorithms called “Short Code” (the first interpreter, created in 1949 by John Mauchly). In addition, mention must be made of Grace Hopper, a US Navy officer and leader of a programming team, then a captain (and later a rear admiral), who developed the first compiler program. Incidentally, the term “compiler” was first introduced by G. Hopper in 1951. This compiling program translated into machine language an entire program written in an algebraic form convenient for processing. G. Hopper is also the author of the term “bug” as applied to computers. Once, a beetle (in English, a bug) flew into the laboratory through an open window and, landing on the contacts, short-circuited them, causing a serious malfunction of the machine. The burnt beetle was glued into the logbook where malfunctions were recorded. This is how the first computer bug was documented.

IBM took the first steps in the field of programming automation by creating the “Fast Coding System” for the IBM 701 machine in 1953. In the USSR, A. A. Lyapunov proposed one of the first programming languages. In 1957, a group led by D. Backus completed work on the first high-level programming language, which later became popular, called FORTRAN. The language, implemented for the first time on the IBM 704 computer, contributed to expanding the scope of computers.

Alexey Andreevich Lyapunov
In Great Britain in July 1951, at a conference at the University of Manchester, M. Wilkes presented the paper “The Best Way to Design an Automatic Calculating Machine”, which became a pioneering work on the fundamentals of microprogramming. The method of designing control devices that he proposed found wide application.

M. Wilkes realized his idea of microprogramming in 1957 when creating the EDSAC-2 machine. In 1951, Wilkes, together with D. Wheeler and S. Gill, also wrote the first programming textbook, “The Preparation of Programs for an Electronic Digital Computer”.

In 1956, Ferranti released the Pegasus computer, which for the first time implemented the concept of general-purpose registers (GPRs). With their advent the distinction between index registers and accumulators was eliminated, and the programmer had at his disposal not one but several accumulator registers.

The advent of personal computers

Microprocessors were first used in various specialized devices, such as calculators. But in 1974 several companies announced the creation of personal computers based on the Intel-8008 microprocessor, that is, devices performing the same functions as a large computer but designed for a single user. At the beginning of 1975 the first commercially distributed personal computer, the Altair-8800, based on the Intel-8080 microprocessor, appeared. It sold for about $500, and although its capabilities were very limited (the RAM was only 256 bytes, and there was no keyboard or screen), its appearance was greeted with great enthusiasm: several thousand sets of the machine were sold in the first months. Buyers equipped this computer with additional devices: a monitor for displaying information, a keyboard, memory expansion units and so on. Soon these devices began to be produced by other companies. At the end of 1975, Paul Allen and Bill Gates (the future founders of Microsoft) created a Basic language interpreter for the Altair, which allowed users to communicate with the computer simply and to write programs for it easily. This also contributed to the rise in popularity of personal computers.

The success of the Altair-8800 forced many companies to begin producing personal computers as well. Personal computers began to be sold fully equipped, with keyboard and monitor, and demand for them reached tens and then hundreds of thousands of units per year. Several magazines devoted to personal computers appeared. Numerous useful programs of practical significance contributed greatly to the growth of sales. Commercially distributed programs also appeared, for example the text-editing program WordStar and the spreadsheet processor VisiCalc (1978 and 1979, respectively). These and many other programs made the purchase of personal computers very profitable for business: with their help it became possible to perform accounting calculations, prepare documents and so on. Using large computers for these purposes was too expensive.

In the late 1970s the spread of personal computers even led to a slight decline in the demand for large computers and minicomputers. This became a matter of serious concern for IBM, the leading producer of large computers, and in 1979 IBM decided to try its hand in the personal computer market. However, the company's management underestimated the future importance of this market and viewed the creation of a personal computer as just a minor experiment, something like one of dozens of projects under way at the company to create new equipment. In order not to spend too much money on this experiment, management gave the unit responsible for the project a freedom unprecedented in the company. In particular, it was allowed not to design the personal computer from scratch but to use components made by other companies. And the unit took full advantage of this chance.

The then latest 16-bit microprocessor Intel-8088 was chosen as the main microprocessor of the computer. Its use made it possible to significantly increase the potential capabilities of the computer, since the new microprocessor allowed working with 1 megabyte of memory, and all computers available at that time were limited to 64 kilobytes.
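The figures follow from address width, a detail not spelled out above: the 8088 brought 20 address lines where typical 8-bit processors had 16.

print(2 ** 20)   # 1 048 576 bytes = 1 MB addressable with 20 address lines
print(2 ** 16)   # 65 536 bytes = 64 KB addressable with 16 address lines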

In August 1981, a new computer called the IBM PC was officially introduced to the public, and soon after it gained great popularity among users. A couple of years later, the IBM PC took a leading position in the market, displacing 8-bit computer models.

IBM PC
The secret of the popularity of the IBM PC is that IBM did not make its computer a single one-piece device and did not protect its design with patents. Instead, it assembled the computer from independently manufactured parts and did not keep the specifications of those parts, or the way they were connected, secret. On the contrary, the design principles of the IBM PC were available to everyone. This approach, called the open architecture principle, made the IBM PC a stunning success, although it prevented IBM from keeping the benefits of that success to itself. Here is how the openness of the IBM PC architecture influenced the development of personal computers.

The promise and popularity of the IBM PC made the production of various components and additional devices for it very attractive. Competition between manufacturers led to cheaper components and devices. Very soon many companies ceased to be content with the role of component manufacturers and began to assemble their own computers compatible with the IBM PC. Since these companies did not need to bear IBM's huge costs for research and for maintaining the structure of a huge company, they were able to sell their computers much cheaper (sometimes 2-3 times) than comparable IBM machines.

Computers compatible with the IBM PC were initially contemptuously called “clones,” but this nickname did not catch on, as many manufacturers of IBM PC-compatible computers began to implement technical advances faster than IBM itself. Users were able to independently upgrade their computers and equip them with additional devices from hundreds of different manufacturers.

Personal computers of the future

The basis of computers of the future will not be silicon transistors, where information is transmitted by electrons, but optical systems. The information carrier will be photons, since they are lighter and faster than electrons. As a result, the computer will become cheaper and more compact. But the most important thing is that optoelectronic computing is much faster than what is used today, so the computer will be much more powerful.

The PC will be small in size and have the power of today's supercomputers. It will become a repository of information covering all aspects of our everyday life, and it will not be tied to electrical networks. It will be protected from thieves by a biometric scanner that recognizes its owner by fingerprint.

The main way to communicate with the computer will be voice. The desktop computer will turn into a “candy bar”, or rather, into a giant computer screen - an interactive photonic display. There is no need for a keyboard, since all actions can be performed with the touch of a finger. But for those who prefer a keyboard, a virtual keyboard can be created on the screen at any time and removed when it is no longer needed.

The computer will become the operating system of the home, and the house will respond to the needs of its owner and know his preferences (make coffee at 7 o'clock, play his favorite music, record the desired TV show, adjust the temperature and humidity, and so on).

Screen size will not play any role in the computers of the future: it can be as big as your desk or small. Larger versions of computer screens will be based on photonically excited liquid crystals with much lower power consumption than today's LCD monitors. Colors will be vibrant and images accurate (plasma displays are possible). In fact, today's concept of “resolution” will largely lose its meaning.


The need for devices to speed up counting appeared thousands of years ago. Back then, simple means were used, such as counting sticks. Later the abacus appeared, better known to us as the counting frame. It allowed only the simplest arithmetic operations to be performed. Much has changed since then: almost every home has a computer, and a smartphone sits in almost every pocket. All of this can be grouped under the general name “computer technology” or “computing technology”. In this article you will learn a little more about the history of its development.

1623 Wilhelm Schickard thinks: “Why don't I invent the first adding machine?” And he invents it: a mechanical device capable of performing the basic arithmetic operations (addition, multiplication, division and subtraction) with the help of gears and cylinders.

1703 Gottfried Wilhelm Leibniz describes the binary number system in his treatise “Explication de l'Arithmétique Binaire” (“Explanation of Binary Arithmetic”). Computers are much simpler to implement in binary, and Leibniz himself knew this: back in 1679 he made a drawing of a binary computing machine. In practice, however, the first such device appeared only in the middle of the 20th century.

1804 Punched cards appear for the first time; their use continued into the 1970s. They are sheets of thin cardboard with holes in certain places, and information was recorded by different sequences of these holes.

1820 Charles Xavier Thomas (yes, almost like Professor X) releases the Thomas Adding Machine, better known as the arithmometer, which went down in history as the first mass-produced counting device.

1835 Charles Babbage sets out to invent his own Analytical Engine and describes it. Initially the purpose of the device was to calculate logarithmic tables with high accuracy, but Babbage later changed his mind: now his dream was a general-purpose machine. At that time the creation of such a device was quite possible, but working with Babbage proved difficult because of his character, and as a result of disagreements the project was closed.

1845 Israel Staffel creates the first ever device capable of extracting square roots from numbers.

1905 Percy Ludgate publishes a design for a programmable mechanical computer.

1936 Konrad Zuse decides to create his own computer. He calls it Z1.

1941 Konrad Zuse releases the Z3, the world's first software-controlled computer. Subsequently, several dozen more Z series devices were released.

1961 Launch of ANITA Mark VII, the world's first fully electronic calculator.

A few words about computer generations.

1st generation. These are so-called tube computers. They work using vacuum tubes. The first such device was created in the middle of the 20th century.

2nd generation. Everyone used first-generation computers until, in 1947, Walter Brattain and John Bardeen invented a very important thing: the transistor. This is how the second generation of computers appeared. They consumed much less energy and were more productive. These devices were common in the 1950s and 1960s, until the integrated circuit was invented in 1958.

3rd generation. The operation of these computers was based on integrated circuits, each of which contains dozens or hundreds of transistors. However, the creation of the third generation did not stop the production of second-generation computers.

4th generation. In 1969, Ted Hoff came up with the idea of replacing many integrated circuits with one small device, later called a microprocessor. This made it possible to create very small microcomputers. The first such device was released by Intel. And in the 1980s microprocessors and microcomputers became the most common kind; we still use them now.

That was a short history of the development of computing and computer technology. I hope I managed to interest you. Goodbye!
