

Presentation on the topic: Artificial intelligence

slide number 1

Description of the slide:

Artificial intelligence. Intellect (from the Latin intellectus: knowledge, understanding, reason) is the capacity for thinking and rational cognition. The subject of study of the science of "artificial intelligence" is human thinking. Scientists seek an answer to the question: how does a person think? The purpose of this research is to create a model of human intelligence and implement it on a computer (in other words, to teach a machine to think).

slide number 3

Description of the slide:

Artificial intelligence: the main function. The 1950s witnessed the appearance of a supernova on the horizon of post-war science: cybernetics, its rapid rise and its equally rapid disintegration into parts, one of which is associated with the birth of artificial intelligence (AI). And although a variety of hopes were associated (and continue to be associated) with the newborn's catchy name, it soon became clear that however broadly the field is interpreted, its core must be the apparatus for representing and processing knowledge.

slide number 4

Description of the slide:

The most ambitious apologists believe that the goal of artificial intelligence is to form an apparatus of metaknowledge capable of uniting philosophy, psychology and mathematics, and of spreading a "new order" of human-computer symbiosis across all sciences, activities and even art. Thus the main task of AI, the development of formal means of representing and processing knowledge, turns out to be very close to the function of mathematics itself.

slide number 5

Description of the slide:

However, there is a rather significant difference in their methodological positions: while mathematics deals with the theory and development of formal apparatuses, it pays only peripheral attention to applying these apparatuses to the problems of other disciplines; the methodology of artificial intelligence is characterized by the opposite direction, from the study of various forms of knowledge to the development of a set of formal tools that ideally covers the entire spectrum of areas of activity.

slide number 7

Description of the slide:

There are many types of human activity that cannot be planned in advance: composing music and poetry, proving theorems, literary translation from a foreign language, diagnosing and treating disease, and much more. For example, when playing chess, a chess player knows the rules of the game and has the goal of winning. His actions are not pre-programmed: they depend on the opponent's moves, on the position emerging on the board, and on the player's quick wits and personal experience.

slide number 11

Description of the slide:

Any artificial intelligence system works within a specific subject area (medical diagnostics, legislation, mathematics, economics, etc.). Like a human specialist, the computer must have knowledge of this area. Knowledge of a particular subject area, formalized in a certain way and stored in the computer's memory, is called a computer knowledge base.

slide number 12

Description of the slide:

For example, suppose you want to use a computer to solve geometry problems, and the problem book contains 500 problems of varied content. An artificial intelligence specialist loads the knowledge of geometry into the computer (much as a teacher's knowledge is passed on to you). Based on this knowledge, and with the help of a special algorithm of logical reasoning, the computer can solve any of the 500 problems; it is enough to give it only the statement of the problem. Artificial intelligence systems work on the basis of the knowledge bases embedded in them.

slide number 13

Description of the slide:

How is an intelligent system created on a computer? Human thinking rests on two components: a stock of knowledge and the ability to reason logically. Hence there are two main tasks in creating intelligent systems on a computer: knowledge modeling (developing methods of formalizing knowledge so that it can be entered into computer memory as a knowledge base) and reasoning modeling (creating computer programs that imitate the logic of human thinking when solving various problems).
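
The presentation contains no code, but a minimal, purely illustrative sketch can make these two tasks concrete. The hypothetical Python snippet below (all facts, rules and names are invented for the example) stores a tiny formalized knowledge base and imitates elementary logical reasoning by naive forward chaining: any rule whose premises are already known adds its conclusion to the set of known facts.

```python
# Illustrative sketch: a tiny knowledge base (facts plus if-then rules)
# and a naive forward-chaining reasoner. All statements are invented.

facts = {"ABC is a triangle", "ABC has a right angle"}

# Each rule is (set of premises, conclusion).
rules = [
    ({"ABC is a triangle", "ABC has a right angle"}, "ABC is a right triangle"),
    ({"ABC is a right triangle"}, "the Pythagorean theorem applies to ABC"),
]

def forward_chain(facts, rules):
    """Fire rules whose premises are already known until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# The derived set now also contains the two conclusions inferred from the rules.
```

Real expert systems use far richer representations, but the division of labor is the same: the rules are the modeled knowledge, and the chaining loop is the modeled reasoning.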

slide number 14

Description of the slide:

One type of artificial intelligence system is the expert system. The purpose of expert systems is to advise the user and help in decision making. Such assistance becomes especially important in extreme situations, for example during a technical accident, an emergency operation or while driving a vehicle. The computer is not subject to stress: it will quickly find an optimal, safe solution and offer it to the person.

slide number 15

Description of the slide:

For those who are interested: Artificial intelligence - the main function; Knowledge modeling; Fuzzy mathematics; Information technology - a change of epochs; "Non-algorithmic" control; Tasks for specialists of the highest class; A computer of non-von Neumann architecture.

slide number 17

Description of the slide:

The central task of AI, the creation of a knowledge apparatus, almost immediately required clarification: what knowledge, in fact, is in question? If exact, formal knowledge is meant, those territories already have a mistress, Mathematics, with a professional army that the conquistadors of the new lands had no desire to confront. If informal knowledge is meant, it can be divided into: knowledge that is well studied and specific but (so far) poorly formalized, such as natural language syntax or medical diagnostics; and knowledge that is poorly formalizable in principle, that is, the bulk of the concepts of all areas of activity, from the humanities to art and everyday life.

Description of the slide:

This almost hopeless situation was rescued by L. Zadeh, who in the mid-1960s proposed the concept of a linguistic variable and the apparatus of fuzzy mathematics. Artificial intelligence received a real magic wand as a gift: it quickly became clear that the desert of solid white spots on the map of knowledge could easily be turned into fuzzy (and, alas, only virtual) flowering fields.
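
As a purely illustrative sketch (not part of the presentation), the snippet below shows the flavor of Zadeh's idea: a linguistic variable "temperature" whose terms are fuzzy sets defined by triangular membership functions. All numeric ranges here are arbitrary assumptions.

```python
# Illustrative sketch of a Zadeh-style linguistic variable.
# Every number below is an invented assumption, not a standard definition.

def triangular(a, b, c):
    """Membership function that rises linearly from a to b and falls to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

temperature_terms = {
    "cold": triangular(-10.0, 0.0, 15.0),
    "warm": triangular(10.0, 20.0, 30.0),
    "hot":  triangular(25.0, 35.0, 45.0),
}

x = 18.0  # degrees Celsius
for term, mu in temperature_terms.items():
    print(f"{x} C is '{term}' to degree {mu(x):.2f}")
```

Instead of the crisp statement "18 degrees is warm", the variable assigns each term a degree of membership between 0 and 1, which is exactly the kind of blurring the next slide criticizes.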

slide number 20

Description of the slide:

This fuzzy Morgana quickly seized the masses: by the beginning of the 1980s the fuzzy bibliography included about twenty thousand titles, a number that has since grown at least two- or three-fold. In the whirlpool of enthusiasm an innate defect of the new universal tool went unnoticed: the semantics and pragmatics of the fuzzy apparatus were themselves quite fuzzy from the very beginning. What remained blurred was WHAT fuzziness actually represents, WHAT it operates on, and WHY in precisely that way and not otherwise. The vagueness of the apparatus inevitably led to complete ambiguity in the results of its application, which went unnoticed simply because it remained unclear how, in fact, to check those results.

slide number 22

Description of the slide:

Although imperative (algorithmic) control was from the very beginning the basis of programming for computers of the von Neumann architecture, in the late 1960s and early 1970s there were attempts to develop alternative ways of organizing the computing process, driven above all by research on AI and on parallel programming for multiprocessor systems. Qualitative progress on this problem, however, came from the apparatus of underdetermined models and from recent work on constraint programming, since they are built on a decentralized, asynchronous, maximally parallel, data-driven computing process. As the next step in this revolution, a transition to event-driven control is possible, which significantly raises the level of the associative apparatus that organizes the data-driven process.

slide number 23

Description of the slide:

Parallelism. The intractability of parallelizing imperative software technologies has formed an insurmountable barrier to the widespread use of multiprocessor systems. Over the past 15 years software and hardware have changed places: the level of automation of hardware design and the cost of the component base have long allowed mass production of computers with any number of processors; however, adapting existing software for them and developing new software products remains a problem solved only by specialists of the highest class, and then only in special cases. In the new IT paradigm, concurrency is no longer a problem but a natural feature of any software system.

slide number 24

Description of the slide:

A computer of non-von Neumann architecture. Data-driven control (and, in the future, event-driven control) radically changes the very organization of the computing process, making it asynchronous, decentralized and independent of the number of processors. A fundamental restructuring of the familiar von Neumann architecture of modern machines will be required. Thus there is the prospect of not just a change of generations but a change of eras, leading to a real revolution, a shock to the "unshakable foundations" of IT: the algorithm, the von Neumann architecture, and the deterministic sequential process pass into history forever, giving way to the model, multi-agency, and an associatively self-organizing, non-deterministic parallel process.

Slide 2: What is artificial intelligence?

Since the invention of computers, their ability to perform various tasks has continued to grow exponentially. People develop ever more powerful computer systems, increasing the range of tasks they perform while reducing their size. The main goal of researchers in the field of artificial intelligence is to create computers or machines as intelligent as a person.

slide 3

The author of the term "artificial intelligence" is John McCarthy, the inventor of the Lisp language, a founder of functional programming and a winner of the Turing Award for his great contribution to artificial intelligence research. Artificial intelligence is a way of making a computer, a computer-controlled robot or a program capable of thinking as intelligently as a human. Research in the field of AI proceeds by studying the mental abilities of a person; the results of this research are then used as the basis for developing intelligent programs and systems.

Slide 4: The philosophy of artificial intelligence

While operating powerful computer systems, people have asked the question: "Can a machine think and behave in the same way as a person?" Thus the development of artificial intelligence began with the intention of creating in machines an intelligence similar to that of humans.

Slide 5: The main goals of AI

Creation of expert systems: systems that demonstrate intelligent behavior, that is, they learn, demonstrate, explain and give advice. Realization of human intelligence in machines: the creation of a machine capable of understanding, thinking, learning and behaving like a human.

Slide 6: What contributes to the development of AI?

Artificial intelligence is a science and technology based on disciplines such as computer science, biology, psychology, linguistics, mathematics and engineering. One of the main directions of artificial intelligence is the development of computer functions related to human intelligence, such as reasoning, learning and problem solving.

Slide 7: Program with and without AI

Programs with and without AI differ in the following properties.

Without AI: the program can only answer the specific questions it is programmed to answer; making changes to the program changes its structure; modification is neither quick nor easy.

With AI: the program can answer the universal class of questions it is programmed for; it can absorb new modifications by composing highly independent pieces of information, so individual pieces of information can be changed without affecting the overall structure of the program; modification is quick and easy.

Slide 8: Applications with AI

AI has become dominant in various areas, such as:
Games: AI plays a crucial role in strategy games such as chess, poker and tic-tac-toe, where the computer is able to evaluate a large number of possible solutions based on heuristic knowledge.
Natural language processing: the ability to communicate with a computer that understands the natural language spoken by humans.
Speech recognition: some intelligent systems are able to hear and understand the language in which a person addresses them, handling various accents, slang and so on.
Handwriting recognition: software reads text written on paper with a pen or on a screen with a stylus; it can recognize the shapes of letters and convert them into editable text.
Smart robots: robots capable of performing tasks assigned by humans. They have sensors to detect physical data from the real world, such as light, heat, motion, sound, shock and pressure, along with high-performance processors and large memories. In addition, they are able to learn from their own mistakes and adapt to a new environment.
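
As one hedged illustration of the "large number of possible solutions" a game-playing computer evaluates, the sketch below runs plain minimax over a tic-tac-toe game tree. It is not the presentation's own example and is far simpler than a real chess engine, which would add heuristic evaluation and pruning; the starting position is invented.

```python
# Illustrative sketch: exhaustive minimax search for tic-tac-toe.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for 'X' (+1 win, -1 loss, 0 draw) and pick a move."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        better = (best is None
                  or (player == "X" and score > best[0])
                  or (player == "O" and score < best[0]))
        if better:
            best = (score, m)
    return best

board = list("X O  O X ")   # an invented mid-game position, X to move
print(minimax(board, "X"))  # best achievable score and the move that gets it
```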

Slide 9: The history of AI development

Year and event:
1923: Karel Capek's play "R.U.R." ("Rossum's Universal Robots") is staged in London, the first use of the word "robot" in English.
1943: The foundations of neural networks are laid.
1945: Isaac Asimov, a Columbia University graduate, coins the term "robotics".
1950: Alan Turing develops the Turing test for intelligence. Claude Shannon publishes a detailed analysis of chess playing by computer.
1956: John McCarthy coins the term "artificial intelligence". Demonstration of the first running AI program at Carnegie Mellon University.
1958: John McCarthy invents the Lisp programming language for AI.
1964: Danny Bobrow's dissertation at MIT shows that computers can understand natural language quite well.
1965: Joseph Weizenbaum at MIT develops ELIZA, an interactive program that converses in English.

Slide 10

Year and event:
1969: Scientists at the Stanford Research Institute develop Shakey, a motorized robot able to perceive its surroundings and solve some problems.
1973: A team of researchers at the University of Edinburgh builds Freddy, the famous Scottish robot capable of using vision to locate and assemble models.
1979: The first computer-controlled autonomous vehicle, the Stanford Cart, is built.
1985: Harold Cohen designs and demonstrates AARON, a drawing program.
1997: The Deep Blue chess program beats world chess champion Garry Kasparov.
2000: Interactive robotic pets become commercially available. MIT demonstrates Kismet, a robot with a face that expresses emotions. The robot Nomad explores remote areas of Antarctica and finds meteorites.

Slide 11: Examples of achievements in the field of artificial intelligence

slide 12

Kismet is a robot created in the late 1990s at the Massachusetts Institute of Technology by Dr. Cynthia Breazeal. The robot's auditory, visual and expressive systems were designed to let it participate in social interaction with humans and to simulate human emotions and facial expressions. The name "kismet" comes from a word found in Arabic, Turkish, Urdu, Hindi and Punjabi meaning "fate" or sometimes "luck".

slide 13: virtual personal assistants

Siri, Cortana and other intelligent digital personal assistants exist on various platforms (iOS, Android and Windows). They help you find the useful information you ask for using natural human language. The AI in these apps collects information from your questions and uses it to better recognize your speech and to return results tailored to your preferences. Microsoft says that Cortana continually learns about its users and will eventually be able to anticipate their needs. Virtual personal assistants process a huge amount of data from various sources in order to learn more about users and become more effective at finding and processing information.

Slide 14: Video games

One example of artificial intelligence that most people are probably familiar with is video games, which have used AI for a long time. The complexity and effectiveness of AI in video games has grown enormously over the past few decades, with the result that game characters can behave in ways the player cannot fully predict. Video games actively use AI for their characters, who can analyze the environment to find objects and interact with them. They are able to take cover, investigate sounds, use flanking maneuvers, communicate with other characters, and so on.
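
A hedged sketch (not taken from any particular game) of how such character behavior is often organized: a simple finite-state machine that picks the next action from the character's current state and what it perceives. The states and percepts below are invented for the example.

```python
# Illustrative sketch: a finite-state machine for a game character.
from dataclasses import dataclass

@dataclass
class Percepts:
    sees_player: bool
    hears_sound: bool
    under_fire: bool

def next_state(state: str, p: Percepts) -> str:
    """Choose the character's next behavior from its current state and percepts."""
    if p.under_fire:
        return "take_cover"
    if state == "take_cover":
        return "flank"          # no longer under fire: leave cover and flank
    if p.sees_player:
        return "attack"
    if p.hears_sound:
        return "investigate"
    return "patrol"

state = "patrol"
for p in [Percepts(False, True, False),    # hears something
          Percepts(True, False, False),    # spots the player
          Percepts(False, False, True),    # comes under fire
          Percepts(False, False, False)]:  # fire stops
    state = next_state(state, p)
    print(state)  # investigate, attack, take_cover, flank
```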




Slide 15: One of the favorite games of horror fans - Five Nights At Freddy's

The game takes place in a pizzeria called "Freddy Fazbear's Pizza", where the player's character is a night guard who must defend himself against animatronics that come to life at night, closing the electronic doors through which they try to enter his room.

Slide 16: Artificial intelligence cars (self-driving cars)

Autonomous cars are getting closer to reality. This year, Google announced an algorithm that can learn to drive just like a human does: through experience. The idea is that eventually the car will be able to look at the road and make decisions based on what it sees.

Slide 17: Product offer

Big retailers like Target and Amazon make a lot of money from their ability to anticipate your needs. This ability is realized in different ways: coupons, discounts, targeted advertising and so on. As you may have guessed, this is a rather controversial use of AI, since it makes many people worry about possible privacy violations.

Slide 18: Fraud detection

Have you ever received a message saying that you made a purchase with your credit card even though you didn't make any purchases? Many banks send such messages when they suspect fraudulent activity on your account and want to make sure you approve a purchase before transferring money to another company. AI is often used to monitor for this kind of fraud: after sufficient training, the system can detect fraudulent transactions based on the patterns it has learned.
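
To make "learned through training" concrete, here is a toy sketch with entirely made-up data (it does not describe any bank's actual system): a tiny logistic-regression scorer trained by gradient descent on two invented transaction features, amount and distance from the cardholder's home.

```python
# Illustrative sketch only: toy fraud scoring on invented data.
import math

# (amount in $1000s, distance from home in 1000 km, is_fraud)
transactions = [
    (0.05, 0.01, 0), (0.12, 0.02, 0), (0.30, 0.05, 0), (0.08, 0.00, 0),
    (2.50, 1.20, 1), (1.80, 0.90, 1), (3.10, 2.00, 1), (0.90, 1.50, 1),
]

w = [0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    """Probability that a transaction with features x is fraudulent."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):                     # gradient descent on the log loss
    for x0, x1, y in transactions:
        err = predict((x0, x1)) - y
        w[0] -= lr * err * x0
        w[1] -= lr * err * x1
        b -= lr * err

print(round(predict((0.07, 0.01)), 3))    # small local purchase -> low score
print(round(predict((2.90, 1.70)), 3))    # large distant purchase -> high score
```

A real system would use many more features and far more data, but the idea is the same: the model learns which patterns of behavior look suspicious.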

Slide 19: Online customer support

Many sites now offer customers the chance to chat with a support representative while they browse the products on the site, but not every site actually puts a live person on the other end! In many cases you are communicating with an AI. Many of these chatbots are little more than autoresponders, but some of them are actually capable of extracting knowledge from the site and providing it to customers when they ask for it.

Slide 20: News portals

Did you know that AI programs are capable of writing news? AI can already write simple pieces such as financial summaries and sports reports. Of course, such systems still need human help, but it is probably only a matter of time before AI can write full-fledged articles.

Slide 21: Video surveillance

Watching a large number of cameras is a very difficult and sometimes tedious task for one person. That is why AI systems have been developed to monitor these cameras. The monitoring algorithm takes input from CCTV cameras and determines whether there is a threat; if it "sees" danger, it notifies the security staff.

Slide 22: Music and movie recommendations

Of course, these systems are quite simple compared with other intelligent systems, but they perform a rather useful task: suggesting music and movies based on your interests. By observing your actions they learn and eventually recommend things likely to interest you. Much of this relies on information supplied by the person: for example, if you like "rock" and have indicated this in your profile, the system assumes you will also like other songs tagged with that characteristic. This is the basis of many recommendation services, and although it is not a futuristic technology, it does a very good job of helping us find new music and movies.

Slide 24: Summary

Artificial intelligence has become an integral part of life for much of the world's population. When the first models were created, everyone was shocked and could talk of nothing else. Over time the models have improved. Today the idea that someone will one day build a machine so intelligent that it enslaves humanity remains popular: many films (Terminator) and games (Five Nights At Freddy's) have been made on this theme.

Last slide of the presentation: Presentation on the topic "Artificial Intelligence"

Presentation on the topic:

"Artificial intelligence"

Prepared by: Svirzhevskaya T.

Petropavlovsk


Introduction

  • The term intellect comes from the Latin intellectus, meaning mind, understanding, reason: the human capacity for thinking.
  • Accordingly, artificial intelligence (AI) is usually interpreted as the ability of automatic systems to take on individual functions of human intellect, for example to choose and make optimal decisions on the basis of previous experience and rational analysis of external influences.
  • Intelligence refers to the ability of the brain to solve (intellectual) problems by acquiring, remembering and purposefully transforming knowledge in the process of learning from experience and adapting to a variety of circumstances.

Artificial intelligence as a science has existed for more than forty years.

  • The first intelligent system is considered to be the "Logic Theorist" program, designed to prove theorems of the propositional calculus.

Its work was first demonstrated on August 9, 1956; the program was created by such famous scientists as A. Newell, J. Shaw and H. Simon.


  • Since then, a great variety of computer systems have been developed in the field of artificial intelligence, which are commonly called intelligent.
  • The areas of their practical application cover almost all areas of human activity related to information processing.

Modernity

Methods and tools of artificial intelligence are currently used to solve a wide range of applied problems and can improve the efficiency of the work of scientists, doctors, teachers, engineers, economists, military personnel and many other specialists.


  • We are steadily moving towards a new information revolution, comparable in scale to the development of the Internet, and its name is artificial intelligence.

  • Signs of such projects can now be found everywhere on the Internet: calls to unite all the thinking scientific potential of mankind in order to humanize the Internet and transform it into an intelligent system, or into a habitat for intelligent systems.
  • Since such prerequisites exist, nothing will stop the flight of human thought on its way to this goal.

Technology

The startup Hanson Robotics intends to create "the smartest robot in the world" and is raising money to do so. The robot will be able to talk, play with toys, draw pictures and respond to emotions; in everything it will be like a three-year-old child.

Designer and engineer David Hanson has brought together experts in robotics and artificial intelligence. The idea of creating a robot child is interesting above all because it will be able to learn from its own experience.

The goal of its creators is artificial intelligence that works at the human level and even beyond it. "But to do this, we need to start by creating the simplest examples that will be minimally acceptable to human society," Hanson says.

This is the purpose of the study: to give a robot with artificial intelligence a body and try to introduce it into society. Such a machine could also be useful in education.

An open-source robot with the intelligence of a three-year-old child


Conclusion

  • So, artificial intelligence is the ability of a device to perform the same kinds of mental activity that a person can perform.
  • Artificial intelligence is designed to expand the possibilities of computer science, not to define its boundaries. One important challenge facing researchers is to sustain these efforts.

The idea of creating an artificial likeness of the human mind was first expressed by Raymond Lull (1235-1315), who, as early as the 14th century, tried to build a machine for solving various problems on the basis of a general classification of concepts.

In the 17th century Gottfried Leibniz (1646-1716) and Rene Descartes (1596-1650) developed this idea independently of each other, proposing universal languages for the classification of all sciences.

These ideas formed the basis of later theoretical developments in the field of artificial intelligence.

The development of artificial intelligence after the creation of computers

The development of AI as a scientific direction became possible only after the creation of computers

This happened in the 1940s.

At the same time, Norbert Wiener (1894-1964) created his seminal works on a new science: cybernetics.

Cybernetics (from the Greek for "the art of steering") is the science of the general laws of control and information transmission in various systems, be they machines, living organisms or society.

The term "artificial intelligence"

The term "artificial intelligence" was proposed in 1956 at a seminar of the same name held at Dartmouth College (USA).

Soon after the recognition of artificial intelligence as an independent branch of science, there was a division into two main areas: neurocybernetics and black box cybernetics.

The main idea of ​​neurocybernetics

The only thing capable of thinking is the human brain.

Therefore, any "thinking device" must somehow reproduce its structure.

Neurocybernetics is focused on hardware modeling of structures similar to the structure of the brain.

Elements similar to neurons, and combinations of them into functioning systems, were created (neurons are brain cells that interact with one another). These systems are called neural networks.

Neural networks

The first neural networks were created in the late 1950s by the American scientists F. Rosenblatt and W. McCulloch. These were attempts to create systems simulating the human eye and its interaction with the brain. The resulting device was called the perceptron.
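
As a hedged illustration (not Rosenblatt's original hardware or code), the sketch below trains a single perceptron with the classical perceptron learning rule on a few invented two-dimensional points.

```python
# Illustrative sketch: a single perceptron trained with the perceptron rule.

training_data = [  # (x1, x2, label), label is 0 or 1; all points are invented
    (0.0, 0.0, 0), (0.2, 0.1, 0), (0.1, 0.3, 0),
    (0.9, 0.8, 1), (1.0, 0.6, 1), (0.7, 1.0, 1),
]

w1 = w2 = bias = 0.0
learning_rate = 0.1

def predict(x1, x2):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

for _ in range(20):                      # a few passes over the training set
    for x1, x2, label in training_data:
        error = label - predict(x1, x2)  # -1, 0 or +1
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

print(predict(0.1, 0.2))  # expected 0
print(predict(0.9, 0.9))  # expected 1
```

Modern neural networks stack many such units into layers and train them with gradient-based methods, but the perceptron remains the basic building block.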

In the 1970s and 1980s the number of works in this direction began to decline.

Neurocybernetics in Japan

In the mid-1980s in Japan, as part of the project to develop the knowledge-based 5th-generation computer, the 6th-generation computer, or neurocomputer, was created.

At this time, restrictions on memory and speed were practically removed.

Transputers appeared: parallel computers with a large number of interacting microprocessors.

From transputers to neurocomputers is just one step.

Three Modern Approaches to Building Neural Networks

Hardware: the creation of special computers, expansion boards and chipsets that implement all the necessary algorithms in hardware.

Software: the creation of programs and tools designed for high-performance computers. The neural networks are created in the computer's memory, and all the work is done by its own processors.

Hybrid: a combination of the first two approaches.

Black box cybernetics

The main idea is that it does not matter how the "thinking device" is constructed internally. The main thing is that it reacts to given input signals in the same way as the human brain.

This direction focused on the search for algorithms that solve intellectual problems on existing computer models.