Logical-linguistic and semiotic models and representations. Main features of the artificial languages of logic compared to natural languages. Definition of logic as a science


That which a sign designates has been called the designatum (Augustine), denotation (B. Russell, A. Church, W. Quine), significatum (C. Morris), referent (C. Ogden, A. Richards), signified (F. de Saussure) or extension (R. Carnap); in logical terms it corresponds to the scope of the concept. That which a sign means has been called its sense (G. Frege), meaning (W. Quine) or intension (R. Carnap); it corresponds to the content of the concept.

In linguistics, philosophical studies of the concept in its semantic aspect are reflected in the theory of the lexical meaning of the word. Some scholars have denied any connection between the concept and the lexical meaning of the word, while others have identified the two. The relationship between lexical meaning and concept can vary: the lexical meaning may be broader than the concept, including an evaluative component and a number of others, or it may be narrower, in the sense that it reflects only some features of objects, whereas the concept covers their deeper and more essential features. In addition, lexical meaning may be correlated with everyday ideas about the surrounding reality, while concepts are associated with scientific ideas about it. Lexical meaning and concept coincide only in terms. Both lexical meaning and the logical concept are contrasted with concepts in the cognitive-linguistic sense, the central objects of cognitive linguistics: units of the mental or psychic resources of our consciousness and of the information structure that reflects human knowledge and experience, meaningful units of memory and of the entire picture of the world reflected in the human psyche.

Cognitology, an interdisciplinary science, explores cognition and the mind in all aspects of their existence and "establishes contacts" between mathematics, psychology, linguistics, artificial intelligence modeling, philosophy and computer science (an analysis of these inter-scientific correspondences and connections is given in detail in the work). Cognitive linguistics, in its methodological preferences, stands in a certain opposition to so-called Saussurean linguistics. However, without taking into account the results of research in cognitive linguistics, modern work on language modeling, in our opinion, loses all meaning.

According to A. Paivio's theory, the system of mental representations is at rest and does not function until some stimulus, verbal or non-verbal, activates it from outside. Activation can occur at three levels of signal processing: representational (linguistic signals activate linguistic structures, non-verbal ones activate pictures or images), referential (verbal signals activate non-verbal representations, and non-verbal signals activate verbal ones) and associative (the excitation of images in response to a word, and the retrieval from memory of a name for incoming signals, is also accompanied by the excitation of associations of various kinds) [ibid., pp. 67-70, 121-122]. Memory is a semantic "network" whose "nodes" are both verbal units (logogens) and non-verbal representations (imagens). Each "node" of the network, a "connectionist model of the brain", can, if necessary, be activated, i.e., brought into an excited state; errors are not excluded here: the "wrong" areas may be excited, or individual "nodes" may turn out to be more excited than necessary, and the person is overwhelmed by a stream of unneeded associations. It is very important to know what types of knowledge are activated in particular cases and what structures of consciousness (from single representations to such associations as frames, scenes, scenarios, etc.) they involve.
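To make the idea of a semantic "network" with activated "nodes" concrete, here is a minimal sketch of spreading activation over a handful of invented logogen/imagen nodes; all node names, weights and parameters are illustrative assumptions, not data from Paivio's model.

```python
# Toy spreading-activation sketch over a semantic "network" of logogens
# (verbal units) and imagens (non-verbal representations).
from collections import defaultdict

# Association weights between nodes (made symmetric for simplicity).
links = {
    ("logogen:dog", "imagen:dog-picture"): 0.9,   # referential link (verbal <-> non-verbal)
    ("logogen:dog", "logogen:bark"): 0.6,         # associative link (verbal <-> verbal)
    ("imagen:dog-picture", "imagen:leash"): 0.4,  # associative link (image <-> image)
}

graph = defaultdict(dict)
for (a, b), w in links.items():
    graph[a][b] = w
    graph[b][a] = w

def spread(start, decay=0.5, threshold=0.1):
    """Activate `start` and propagate decaying activation along association links."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbour, weight in graph[node].items():
            a = activation[node] * weight * decay
            if a > activation.get(neighbour, 0.0) and a > threshold:
                activation[neighbour] = a
                frontier.append(neighbour)
    return activation

print(spread("logogen:dog"))
# Excitation of the "wrong" nodes corresponds here to activation leaking to
# weakly related neighbours when decay and threshold are set too loosely.
```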

The concept of the architecture of cognition ("architecture of the mind") is associated with the question of what mechanisms ensure the implementation of cognitive functions, i.e., with modeling the human mind. Much in such models is considered innate, existing as part of the human bioprogram, while the rest is the result of the processes of human cognitive development; exactly what belongs where is the subject of continuous debate [N. Chomsky, 1972; Tomasello, 1995]. With the spread of the modular theory of J. Fodor and N. Chomsky, the architecture of cognition is described by listing individual modules (perception, rational thinking, memory, language, etc.), and it is assumed that each module operates with a relatively small number of general principles and units. The normal operation of the modules is ensured by the mechanisms of induction, deduction, associative linking of units, etc. The model of the mind, the architecture of cognition, is represented as consisting of a huge number of interconnected neurons, packets or assemblies of which are in an excited, activated state during mental activity. Such network models are most justified in the analysis of a module of the cognitive architecture such as memory.

One of the central notions of the cognitive terminological system is also the concept of association: the linking of two phenomena, two ideas, two objects, etc., usually a stimulus and the accompanying reaction [Pankrats, 1996b]. Behaviorists explained all human behavior on the basis of associations: a certain stimulus is associated with a certain response, S → R. The very ability to associate is considered innate. In cognitive psychology, special attention is paid to the processes that establish associations, to their nature, their connections with the processes of induction and inference, their relationship to causal, cause-and-effect chains, etc. The establishment of associations between units came to be regarded as a general principle of operation of those same modules, the simplest systems, that make up the entire infrastructure of the mind. The concept of association underlies many network models of the mind, which are essentially chains of units (nodes) connected by association relations of different types.

Access to the information contained in the mental lexicon, that is, the availability of this information in the processes of speech production and understanding, is implemented in different ways. Access refers to the processes of linguistic information processing and to the ability to quickly reach, in the course of these processes, the information that is represented in the human mind in the form of mental representations of linguistic units (words and the morphemes that make them up). Since knowledge of a word includes information about its phonological structure, its morphological structure, its semantics and the features of its syntactic use, any piece of this information must be at the speaker's disposal, i.e., access to all of the indicated characteristics must be provided in his memory. Psychological models of speech activity must accordingly answer the question of how all of this information is organized in the mental lexicon [Kubryakova, 1996b]. The main questions are, first of all, whether phonological, morphological and other information about words and their constituent parts is stored in separate subcomponents (modules) of the mental lexicon or whether all information is "recorded" with individual words; what information is stored with each individual word or occurrence of each individual lexical unit; how the mental representation of an individual word or an individual feature of a word can be conceived; and whether, during speech activity, access is made to words as wholes or to their parts (morphemes), etc. [ibid.].

The concept of access is an important part of models of lexical information processing. Access mechanisms are closely related to the form in which the organization of the lexicon and its components such as mental representations of various kinds are described in the corresponding models.

Concepts, the units of the mental lexicon, arise in the process of structuring information about objects and their properties, and this information may include both information about the real state of affairs in the world and information about imaginary worlds and possible states of affairs in them. This is information about what an individual knows, assumes, thinks or imagines about the objects of the world. Sometimes concepts are identified with everyday notions. There is no doubt that the most important concepts are encoded in language. It is often argued that the concepts central to the human psyche are reflected in the grammar of languages and that it is grammatical categorization that creates the conceptual grid, the framework for the distribution of all the conceptual material that is expressed lexically. Grammar reflects those concepts that are most significant for a given language. To form a conceptual system, one must assume the existence of some initial, or primary, concepts from which all others then develop. Concepts as interpreters of meanings are constantly subject to further refinement and modification; they are unanalyzed entities only at the start of their existence, but then, becoming part of the system, they come under the influence of other concepts and are themselves modified (cf.: yellow and rapeseed yellow, vanilla yellow, maize yellow, lemon yellow, etc.). The number of concepts and the scope of the content of most of them are constantly changing. According to L. W. Barsalou, people are constantly learning new things about the world, and the world itself is constantly changing, so human knowledge must have a form that adapts quickly to these changes, and the main unit of transmission and storage of such knowledge, the concept, must also be quite flexible and mobile [Kubryakova, 1996a].

The theory of lexical semantics borrows much from logical-philosophical research and develops in close connection with it. Thus, the lexical meaning of a word is described as a complex structure determined by the general properties of the word as a sign: its semantics, pragmatics and syntactics. Lexical meaning is a combination of a conceptual core (the significative and denotative components of meaning) and pragmatic connotations. In speech, a lexical meaning can denote both the entire class of given objects (the denotative series) and an individual representative of it (the referent). Special cases are the lexical meanings of deictics (pronouns, numerals) and relational words (conjunctions, prepositions).

An original understanding of the concept was proposed by V. V. Kolesov. In the article "The Concept of Culture: Image - Concept - Symbol" he gives the following scheme of the semantic development of the word in the national language.

[Scheme: the stages of the word's semantic development are arranged by the presence or absence of a referent (R) and a denotation (D): with denotation and referent - the logical "removal", the concept (2); with denotation but no referent - the psychological representation, the image (1); with referent but no denotation - the cultural symbol (3); with neither - the pure mentality of the concept (4); 0 marks the starting point.]

Note.

Referent - R (the object: that which the meaning designates); denotation - D (the objective meaning in the word: that which the meaning signifies).

The numbers 0, 1, 2, 3, 4 indicate the corresponding stages in the development of words in the national language.

According to the author, "the concept is the starting point of the semantic content of the word (0) and at the same time the final limit of the word's development (4), while the concept proper (the logical notion) is the historical moment at which an essential characteristic is abstracted from the images accumulated by consciousness and is at once cast into symbols, which in turn serve for connection, for communication, between the natural world (images) and the cultural world (notions). The symbol as 'ideological imagery', as an image that has passed through the notion and is focused on the typical signs of culture, as a sign of a sign, is the focus of attention of Russian philosophical thought. For it, ends and beginnings are traditionally important, and not at all the intermediate points of development, including the development of thought, the increment of meanings in the word, and so on. What was the beginning becomes, as a result of the development of the meanings of the word as a sign of culture, its end: the enrichment of the etymon into the concept of modern culture. The concept thereby becomes a reality of national speech-thought, figuratively given in the word, for it exists in reality just as language, the phoneme, the morpheme and other 'noumena' of content identified by science and vital to any culture exist. The concept is that which is not subject to change in the semantics of the verbal sign and which, on the contrary, dictates to the speakers of a given language, determining their choice, directs their thought, creating the potential possibilities of language-speech" (see also the works [Radzievskaya, 1991; Frumkina, 1992; Likhachev, 1993; Lukin, 1993; Golikova, 1996; Lisitsyn, 1996; Babushkin, 1996; Cherdakova, 2000]).

3.2.3. PRAGMATIC ASPECT. Pragmatics analyzes the communicative function of language, that is, the emotional, psychological, aesthetic, economic and other practically significant relations of the native speaker to language itself, and also explores the connections between signs and the people who create and perceive them. When human language is at issue, special attention is paid to the analysis of so-called "egocentric" words: I, here, now, already, yet, etc. These words are oriented towards the speaker and locate him in space and on the "time axis". With such words we, as it were, turn an objective fact towards ourselves and force it to be viewed from our point of view (cf.: No snow. - There is no more snow. - There is no snow yet). This approach is very important when modeling a communicative situation (see paragraph 7, "Logical basis for modeling a language situation"). Another problem of pragmatics is the "stratification" of the "I" of the speaker or writer in the flow of speech. Consider an example. A member of our group says: Ten years ago I was not a student. There are at least two "I"s here: "I1" and "I2". "I1" is the one uttering this phrase now; "I2" is the one who was not a student in the past. Space and time are perceived subjectively and are therefore also objects of study of pragmatics. Particularly fertile ground for the study of "pragmatic phenomena" is provided by works of art: novels, essays, etc. In formal logic, pragmatics plays almost no role, in contrast to such branches of semiotics as semantics and sigmatics. In linguistics, pragmatics is also understood as the field in which the functioning of linguistic signs in speech is studied [Arutyunova, 1990].

3.2.4. SIGMATIC ASPECT. Sigmatics studies the relation between the sign and the object of reflection. Linguistic signs are names, designations of the objects of reflection; the latter are the designata of linguistic signs. Semantics and sigmatics serve as a prerequisite for syntactics, and all three serve as a prerequisite for pragmatics.

3.3. NATURAL LANGUAGES. DISADVANTAGES OF NATURAL LANGUAGES. Natural languages are sign systems that have developed historically in society, first sound-based (oral speech) and later also graphic (writing). Natural languages are distinguished by rich expressive capabilities and universal coverage of the most varied areas of life.

The main disadvantages of natural languages are the following:

1) the significant units of natural languages gradually and almost imperceptibly change their meanings;

2) the significant units of natural languages are characterized by polysemy, synonymy and homonymy;

3) the meaning of units of natural languages is often vague and amorphous (for example, units of chromatic and expressive vocabulary);

4) finally, from a logical point of view, the grammatical rules for constructing expressions of natural languages are also imperfect: it is not always possible to determine whether a given sentence is meaningful or not.

3.4. SCIENTIFIC LANGUAGES. Sciences are trying to eradicate these shortcomings in their fields. Scientific terminology is a stock of special words, a set of special expressions from the field of a given science, used by representatives of one scientific school. These words arise due to the fact that science is characterized by operating with rigid expressions and definitions that have developed as a result of strictly defined use. The words included in such expressions become terms.

In this way, it is possible to artificially prevent the meaning of words from changing over time, unless the further development of science requires it. However, terms with a strictly fixed meaning have strict boundaries of use. With the achievement of a new level of understanding of the phenomenon, old terms are filled with new content, in addition, new terms should arise.

The use of synonyms can be avoided by strictly limiting oneself to one of them. A scientific language is not a language in the literal sense, because it does not exist independently of natural language: it arises from natural language plus special terminology, and it differs from natural language in its vocabulary rather than in its grammatical rules. The connection between natural languages and scientific languages is ongoing, since scientific languages constantly draw new words of the natural language into their terminology. Insufficient attention to these words can lead to misunderstandings and even to a wrong direction in research. On the other hand, special terms from various sciences constantly enter the vocabulary of the natural language (determinologization).

3.5. ARTIFICIAL LANGUAGES. REQUIREMENTS FOR ARTIFICIAL LANGUAGES. DISADVANTAGES OF FORMALIZED LANGUAGES. Artificial languages are auxiliary sign systems created specially, on the basis of natural languages, for the accurate and economical transmission of scientific and other information. They are constructed not by their own means but with the help of another language, usually a natural language or a previously constructed artificial language. An artificial formalized language must satisfy the following requirements (see the sketch after this list):

1) All basic signs are presented explicitly (no ellipsis). Basic signs are simple, non-compound words of a language or simple, non-composite symbols (if a symbolic language is meant);

2) All definition rules are specified. These are the rules for introducing new, usually shorter, signs by means of existing ones;

3) All rules for constructing formulas are specified. These are the rules for forming compound signs from simple ones, for example the rules for forming sentences from words;

4) All transformation rules, or inference rules, are specified. They relate only to the graphic representation of the signs used (words, sentences, symbols);

5) All interpretation rules are specified. They provide information about how the meaning of complex signs (for example, words) is formed, and they unambiguously determine the relationship between the signs of the language and their meanings.
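A minimal sketch of what these requirements look like for a toy propositional language; the class names and the choice of connectives are illustrative assumptions, and Python 3.10+ is assumed for the match statement. The basic signs are listed explicitly, the formation rules define what counts as a formula, and the interpretation rule assigns a truth value to every well-formed formula unambiguously.

```python
# A toy formalized language: explicit basic signs, formation rules, interpretation rules.
from dataclasses import dataclass

# Requirement 1: the basic signs (here, propositional variables such as p, q, r)
# are given explicitly as non-compound symbols.
@dataclass
class Var: name: str

# Requirement 3: formation rules -- a formula is a variable, a negation, or a binary compound.
@dataclass
class Not: body: object
@dataclass
class Bin: op: str; left: object; right: object   # op is "and", "or" or "->"

# Requirement 5: interpretation rules -- the meaning (truth value) of a compound sign
# is determined unambiguously by the meanings of its parts.
def evaluate(f, valuation: dict) -> bool:
    match f:
        case Var(name):
            return valuation[name]
        case Not(body):
            return not evaluate(body, valuation)
        case Bin("and", l, r):
            return evaluate(l, valuation) and evaluate(r, valuation)
        case Bin("or", l, r):
            return evaluate(l, valuation) or evaluate(r, valuation)
        case Bin("->", l, r):
            return (not evaluate(l, valuation)) or evaluate(r, valuation)
    raise ValueError("not a well-formed formula")

# (p and q) -> p is true under every assignment of truth values:
f = Bin("->", Bin("and", Var("p"), Var("q")), Var("p"))
print(all(evaluate(f, {"p": a, "q": b}) for a in (True, False) for b in (True, False)))
```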

The symbolic language of formal logic was created specifically to accurately and clearly reproduce the general structures of human thinking. Between the general structures of thinking and the structures of the linguistic expression of logic, there is, as they say, a one-to-one relationship, i.e., each mental structure exactly corresponds to a specific linguistic structure, and vice versa. This leads to the fact that within formal logic, operations with thoughts can be replaced by actions with signs. Thus, formal logic has a formalized language, or formalism. Formalized records are also used in linguistics, for example, in syntactic studies when describing structural patterns of sentences, etc., in onomasiological works when describing models of metaphorization, etc.

A significant disadvantage of formalized languages compared to other languages is their limited expressiveness. The totality of all currently available formalized languages can reproduce only relatively small fragments of reality. It is difficult to predict for which areas of science formalized languages can be created and for which they cannot. Empirical research, of course, cannot be replaced by formalization. The set of scientific languages will never become a set of formalized languages.

3.6. METALANGUAGE. A language that serves as a means of constructing or studying another language is called a metalanguage, and the language being studied is called the object language. The metalanguage must have richer expressive capabilities than the object language.

Metalanguage has the following properties:

With the help of its linguistic means, one can express everything that is expressible by the means of the object language;

With its help, you can designate all the signs, expressions, etc. of the object language; there are names for all of them;

In the metalanguage one can speak about the properties of object-language expressions and the relations between them;

It can be used to formulate definitions, notations, formation and transformation rules for object language expressions.

A metalanguage in which the units of the conceptual system (i.e., the ordered totality of all concepts reflecting human knowledge and experience) are specified and their correspondences to natural-language expressions are described is designated by the term mental language. One of the first attempts to create a mental language was Leibniz's logical-philosophical metalanguage. At present, mental language as a metalanguage of linguistic description is being developed especially actively by the Australian researcher Anna Wierzbicka.

3.7. THE LANGUAGE OF PREDICATE LOGIC. Artificial languages of varying degrees of rigor are widely used in modern science and technology: in chemistry, mathematics, theoretical physics, etc. An artificial formalized language is also used by the science of logic for the theoretical analysis of mental structures.

The so-called language of predicate logic is generally accepted in modern logic. Let us briefly consider the principles of construction and structure of this artificial language.

The semantic, or meaning-related, characteristics of linguistic expressions are important for identifying the logical form of thoughts in the analysis of natural language. Its main semantic categories are: names of objects, names of properties and relations (predicators), and sentences.

3.7.1. NAMES OF OBJECTS. Names of objects are individual words or phrases that denote objects. Names, acting as conventional representatives of objects in language, have a double meaning. The set of objects to which a given name refers constitutes its objectual meaning and is called the denotation. The way in which this set of objects is singled out, by indicating their inherent properties, constitutes the semantic meaning of the name and is called the concept, or sense. By composition, a distinction is made between simple names, which do not include other names ("linguistics"), and complex names, which include other names ("the science of language"). By denotation, names are singular and general. A singular name denotes one object and can be represented in the language by a proper name ("Ulaszyn") or given descriptively ("the Polish researcher who first used the term 'morphoneme'"). A general name denotes a set consisting of more than one object; in a language it can be represented by a common noun ("case") or given descriptively ("a grammatical category of a noun expressing its syntactic relation to other words of the utterance or to the utterance as a whole"). The aesthetic perception of the names used in texts led to the creation of special didactic works on the theory of rhetoric, which described "rhetorical figures." It is no coincidence that the authors of the first rhetorics were also the creators of logic as a science (Aristotle and others). The logical opposition of simple, complex and other kinds of names in theories of rhetoric, and subsequently in stylistics and the culture of speech, sharpened research interest in a universal classification of the semantic and syntactic figures of speech.

3.7.2. NAMES OF PROPERTIES AND RELATIONS. Linguistic expressions denoting properties and relations, the names of properties and relations, are called predicators. In sentences they usually serve as a predicate (for example, "to be blue," "to run," "to give," "to love," etc.). The number of names (argument places) to which a given predicator applies is called its arity. Predicators that express properties inherent in individual objects are called one-place (for example, "The sky is blue," "The student is talented"). Predicators that express relations between two or more objects are called many-place. For example, the predicator "to love" is two-place ("Mary loves Peter"), and the predicator "to give" is three-place ("The father gives a book to his son").
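For illustration, one-place, two-place and three-place predicators can be modeled as Boolean-valued functions of the corresponding number of arguments; the particular predicates below are invented examples, not part of any formal system.

```python
# One-place, two-place and three-place predicators as Boolean-valued functions.
def is_blue(x) -> bool:              # one-place: a property of a single object
    return x == "sky"

def loves(x, y) -> bool:             # two-place: a relation between two objects
    return (x, y) == ("Mary", "Peter")

def gives(x, y, z) -> bool:          # three-place: giver, gift, recipient
    return (x, y, z) == ("father", "book", "son")

print(is_blue("sky"), loves("Mary", "Peter"), gives("father", "book", "son"))
```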

Further study of the names of properties - predicators - led to the creation of modern syntactic science with all the variety of approaches to describing the linguistic material within it.

3.7.3. SENTENCES. Sentences are expressions of language by means of which something is affirmed or denied about the phenomena of reality. By their logical meaning, declarative sentences express truth or falsehood.

3.7.4. ALPHABET OF THE LANGUAGE OF PREDICATE LOGIC. This alphabet reflects the semantic categories of natural language and includes the following types of signs (symbols):

1) a, b, c, … - symbols for singular names of objects; they are called object constants (individual constants);

2) x, y, z, … - symbols for general names of objects; they are called object variables (individual variables);

3) P¹, Q¹, R¹, …; P², Q², R², …; Pⁿ, Qⁿ, Rⁿ - symbols for predicators, whose superscripts indicate their arity: 1 for one-place, 2 for two-place, n for n-place; they are called predicate variables;

4) p, q, r, … - symbols for statements, which are called statement, or propositional, variables (from Latin propositio, "statement, proposition");

5) ", $ are symbols for quantifiers, " is a general quantifier, it symbolizes the expressions: all, every, every, always, etc. $ is an existential quantifier, it symbolizes the expressions: some, sometimes, happens, occurs, exists, etc.;

6) logical connectives:

L - conjunction (connective "and");

V - disjunction (dividing "or");

® - implication ("if..., then...");

є - equivalence (if and only if..., then...");

Ш - negation ("it is not true that...");

7) technical signs: (;) - left and right brackets.

The alphabet of the language of predicate logic does not include any other signs other than those listed.
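As an illustration of how this alphabet is used, the sentence "Every student loves some book" can be written as ∀x(S¹(x) → ∃y(B¹(y) ∧ L²(x, y))) and represented as a small abstract syntax; the class names below are assumptions of this sketch, not a standard notation or library.

```python
# A sketch of predicate-logic formulas built from the alphabet above:
# predicate symbols applied to terms, the connectives, and the two quantifiers.
from dataclasses import dataclass

@dataclass
class Pred:            # an n-place predicate symbol applied to n terms
    symbol: str
    args: tuple

@dataclass
class Not: body: object
@dataclass
class And: left: object; right: object
@dataclass
class Implies: left: object; right: object
@dataclass
class ForAll: var: str; body: object
@dataclass
class Exists: var: str; body: object

# "Every student loves some book":
# forall x (Student(x) -> exists y (Book(y) and Loves(x, y)))
formula = ForAll("x",
                 Implies(Pred("Student", ("x",)),
                         Exists("y", And(Pred("Book", ("y",)),
                                         Pred("Loves", ("x", "y"))))))
print(formula)
```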

For the letter designations of the types of categorical judgments, vowels are taken from the Latin words AffIrmo ("I affirm") and nEgO ("I deny"): A and I for affirmative, E and O for negative judgments; the judgments themselves are sometimes written as SaP, SiP, SeP, SoP.
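In the language of predicate logic described above, these four types of categorical judgments correspond to the standard quantified forms:

```latex
\begin{align*}
\text{SaP (A):}\quad & \forall x\,(S(x) \rightarrow P(x))      && \text{All S are P}\\
\text{SeP (E):}\quad & \forall x\,(S(x) \rightarrow \neg P(x)) && \text{No S is P}\\
\text{SiP (I):}\quad & \exists x\,(S(x) \wedge P(x))           && \text{Some S are P}\\
\text{SoP (O):}\quad & \exists x\,(S(x) \wedge \neg P(x))      && \text{Some S are not P}
\end{align*}
```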

On the basis of this artificial language, a formalized logical system called the predicate calculus is constructed. A systematic presentation of predicate logic is given in textbooks on symbolic logic. Elements of the language of predicate logic are used in describing individual fragments of natural language.

4. CONCEPT

4.1. GENERAL CHARACTERISTICS OF THE CONCEPT. ESSENTIAL AND NON-ESSENTIAL FEATURES OF THE CONCEPT. A feature of an object is that in which objects are similar to one another or in which they differ from one another. Any properties, traits or states of an object that in one way or another characterize it, distinguish it and help to recognize it among other objects constitute its features. Features can be not only properties belonging to an object; an absent property (trait, state) is also regarded as one of its features. Any object has many different features. Some of them characterize a single object and are individual; others belong to a certain group of objects and are general. Thus, every person has features some of which (facial expression, facial features, gait, etc.) belong only to that person, while others (profession, nationality, social affiliation) are common to a certain group of people; finally, there are features common to all people. In addition to individual and general features, logic distinguishes essential and non-essential features. Features that necessarily belong to an object and express its inner nature, its essence, are called essential. Features that may or may not belong to an object and that do not express its essence are called non-essential.

Essential features are crucial for the formation of concepts. A concept reflects objects in their essential features, which can be both general and individual. For example, a general essential feature of a human being is the ability to create tools. A concept that reflects a single object (for example, "Aristotle"), along with general essential features (a human being, an ancient Greek philosopher), includes individual essential features (the founder of logic, the author of the Organon), without which it would be impossible to distinguish Aristotle from other people and philosophers of Ancient Greece. By reflecting objects in their essential features, the concept differs qualitatively from the forms of sensory cognition, perceptions and representations, which exist in the human mind in the form of visual images of individual objects. The concept lacks this visual character; it is the result of generalizing many homogeneous objects on the basis of their essential features.

So, a concept is a form of thinking that reflects objects in their essential characteristics.

4.2. LOGICAL TECHNIQUES OF CONCEPT FORMATION. To form a concept, it is necessary to identify the essential features of the object. But the essential does not lie on the surface. To reveal it, one must compare objects with one another, establish what they have in common, separate it from the individual, and so on. This is achieved with the help of logical techniques: comparison, analysis, synthesis, abstraction and generalization.

4.2.1. COMPARISON. A logical technique that establishes the similarity or difference between objects of reality is called comparison. By comparing a number of objects, we establish that they have some common features inherent in a separate group of objects.

4.2.2. ANALYSIS. To highlight the characteristics of an object, you need to mentally dissect objects into their component parts, elements, sides. The mental breakdown of an object into its component parts is called analysis. Having identified certain signs, we can study each of them separately.

4.2.3. SYNTHESIS. Having studied individual details, it is necessary to restore the subject as a whole in thinking. The mental connection of parts of an object dissected by analysis is called synthesis. Synthesis is the opposite of analysis. At the same time, both methods presuppose and complement each other.

4.2.4. ABSTRACTING. Having identified the characteristics of an object using analysis, we find out that some of these characteristics are of significant importance, while others do not have such significance. By focusing our attention on the essential, we abstract from the unimportant. The mental isolation of individual features of an object and distraction from other features is called abstraction. To consider any feature abstractly means to distract (abstract) from other features.

4.2.5. GENERALIZATION. We can extend the characteristics of the objects being studied to all similar objects. This operation is carried out by generalization, i.e., a technique by which individual objects, on the basis of their inherent identical properties, are combined into groups of homogeneous objects. Thanks to generalization, the essential features identified in individual objects are considered as signs of all objects to which this concept is applicable.

Thus, by establishing similarities or differences between objects (comparison), highlighting essential features and abstracting from non-essential ones (abstraction), connecting essential features (synthesis) and extending them to all homogeneous objects (generalization), we form one of the main forms of abstract thinking - concept.
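A toy sketch of how the last two steps might look as operations on feature sets; the objects and features are invented for illustration. Generalization keeps the features shared by all compared objects, while the individual features are abstracted away.

```python
# Concept formation as intersection of feature sets: comparing homogeneous
# objects, keeping shared (candidate essential) features, discarding the rest.
objects = {
    "hammer": {"tool", "has_handle", "metal_head", "red_handle"},
    "axe":    {"tool", "has_handle", "metal_head", "wooden_handle"},
    "saw":    {"tool", "has_handle", "metal_blade"},
}

# Generalization: features common to all compared objects.
common = set.intersection(*objects.values())
print(common)  # {'tool', 'has_handle'} -- individual features are abstracted away
```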

The idea of the logical opposition of essential and non-essential features was embodied in linguistics, on the one hand, in the idea of contrasting integral (invariant) and differential features of linguistic units and, on the other hand, in the idea of their relevant and irrelevant features (cf.: a relevant phonetic feature is a feature that is significant in opposing a given sound to another sound: for example, the feature "voicing" is relevant in opposing a voiced consonant to a voiceless one, the feature "hardness" is relevant in opposing a hard consonant to a soft one, etc.; an irrelevant phonetic feature is a feature that is not involved in opposing a given sound to another sound or other sounds: for example, the feature "degree of opening of the oral cavity" is not important for opposing consonants to one another [Lukyanova, 1999]).
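The contrast between relevant and irrelevant features can be illustrated with a miniature feature matrix; the inventory and feature values below are simplified assumptions.

```python
# Distinctive features of a few consonants (simplified).
phonemes = {
    "b":  {"voiced": True,  "hard": True},
    "p":  {"voiced": False, "hard": True},
    "b'": {"voiced": True,  "hard": False},
}

def relevant_features(a: str, b: str) -> set:
    """Features whose values differ are relevant to the opposition a : b."""
    return {f for f in phonemes[a] if phonemes[a][f] != phonemes[b][f]}

print(relevant_features("b", "p"))    # {'voiced'}  -- voicing is relevant here
print(relevant_features("b", "b'"))   # {'hard'}    -- hardness is relevant here
```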

4.3. CONCEPT AND LANGUAGE SIGN. As Vladimir Mikhailovich Alpatov writes, the significance of the word as a unit is determined not by linguistic but by psycholinguistic factors. Indeed, in the process of speaking a person builds a certain text according to certain rules from certain initial "bricks" and "blocks", and in the process of listening he divides the perceived text into "bricks" and "blocks", comparing them with standards stored in his brain. Such stored units can be neither too short (then the generation process would be too complicated) nor too long (then memory would be overloaded); some optimum must be reached. It is difficult to imagine the storage of phonemes or sentences in the brain as the norm (although individual sentences such as proverbs or sayings, and even whole texts such as prayers, can be stored). It can be assumed that the norm should be units of some average length, and the analysis of linguistic traditions leads to the hypothesis that such units may be words. At the same time, there is no reason to believe that for speakers of every language these units should be exactly the same in their properties; these properties may vary depending on the structure of the language, as linguistic research shows. The speculative assumptions stated above are confirmed by the results of studies of speech disorders (aphasias) and by data from the study of child speech. These data indicate that the human speech mechanism consists of separate blocks; in aphasia associated with damage to certain areas of the brain, some blocks are preserved while others fail, and when a child's speech develops, the blocks begin to operate at different times. It turns out, in particular, that some areas of the brain are responsible for storing ready-made units, while others are responsible for constructing other units from them and for generating utterances [Alpatov, 1999].

Language is strictly ordered; everything in it is systematic and subject to laws predetermined by human consciousness. Apparently language has a common, unified principle of organization to which all its functional and systemic features are subordinated, and the latter merely manifest themselves differently in particular links of its structure. Moreover, this general principle must be extremely simple: otherwise this complex mechanism would not be able to function. We marvel at the complexity of language and wonder what abilities and what memory one must have in order to master a language and use it, and yet even those who can neither read nor write (and there are over a billion illiterate people on the globe) communicate successfully in their language, although their vocabulary may be limited [Stehling, 1996].

Virtually all research on language modeling is, in one way or another, focused on the search for this “simple” principle.

Thus, the concept is inextricably linked with a linguistic sign, most often with the word. Words are a kind of material basis of concepts, without which neither their formation nor operation with them is possible. However, as we have already noted, the unity of language and thinking, of words and concepts, does not mean their identity. Unlike concepts, the units of different languages differ: the same concept is expressed differently in different languages. In addition, within a single language there is, as a rule, no identity of concept and word either: every language has synonyms, language variants, homonyms and polysemous words.

The existence of synonymy, homonymy, and polysemy at the morphemic, lexical, morphological, and syntactic levels often leads to confusion of concepts and, consequently, to errors in reasoning. Therefore, it is necessary to accurately establish the meanings of specific linguistic units in order to use them in a strictly defined sense.

4.3.1. THE SYSTEM OF CONCEPTS AND THE LANGUAGE SYSTEM. The lexical composition of any language and its grammatical system are not a mirror image of the system of concepts used by the human society speaking that language. Speakers of different languages divide objective reality in different ways and accordingly reflect in language different aspects of the object being described. If an object is a bearer of features a, b, c, d, etc., then there may be nominations that fix these features in different combinations: a + b, or a + c, or a + b + d, etc. (this is reflected, for example, in the internal form of equivalent words in different languages: compare the internal form of Russian portnoy "tailor" from porty "clothes", German Schneider from schneiden "to cut", Bulgarian shivach from shiya "to sew"; or the units of chromatic, somatic and other vocabulary).

Here we may point to the very interesting results obtained at the end of the 19th and beginning of the 20th centuries by researchers of the school known as "Words and Things" (Wörter und Sachen), above all Hugo Schuchardt (1842-1927), according to whom the development of a word's meaning always has an internal motivation, explained by the relevance of the conditions in which particular meanings of the word were born and consolidated. Schuchardt believed that etymology reaches its highest level when it becomes a science not only of words but also of the realities hidden behind them; a truly scientific etymological study must rest broadly on a comprehensive study of realities in their historical and cultural context. The history of the word is therefore inconceivable without the history of the people, and etymological research acquires paramount importance in solving important historical and ethnogenetic problems [Kolshansky, 1976]. All this leads to the fact that national dictionaries differ greatly from one another, and the national systems of synonyms, variants, antonyms, polysemous words and, especially, homonyms display vivid individuality. This is why conceptual systems are, on the whole, universal across human experience, while linguistic systems are deeply original.

The grammatical system of a language is designed to reflect objectively existing relations between extralinguistic elements. If we regard extralinguistic reality as a huge open system, the variety of relations between its components is colossal, yet even languages with rich morphology and complex syntax have a limited set of rules. This means that some types of relations between elements of objective reality are necessarily fixed by the grammatical system (sometimes redundantly, cf. the grammatical pleonasm in I say, you say), even if this information is redundant for speaker and listener (cf. the use of possessive pronouns in non-emphatic constructions, normative for English speakers but excessive from the point of view of Russian speakers: I hurt my leg, literally "I hurt my leg" instead of simply "I hurt the leg"), while other types of relations are ignored, and information about them is expressed by communicators not with special grammatical means but with lexical ones. Thus in Russian, in the statements "I walked yesterday from 8 to 9 o'clock", "I walked every day", "I have walked in this park every morning since I arrived in this city", one and the same tense form ("walked") is used with different meanings, which are actualized through context, lexical specifiers and the like, whereas in English different tense forms must be used to convey the same content; these forms, however, carry no information about the speaker's gender, information that, whether the speaker wants it or not, is obligatorily present in the Russian phrases. Languages differ not in that one can speak about something in one language but not in another: it has long been known that any thought can be expressed in any language. The situation is different: languages differ from one another in the information that, when speaking each of them, one cannot fail to communicate; in other words, in what must obligatorily be communicated in these languages (cf.: The doctor comes daily; The doctor has come: in the Russian equivalents we cannot convey this without indicating gender and number, whereas the English analogue does not convey this information) [Plungyan, 1996].

“Just as physiology shows how life is elevated to the level of an organism and in what relationships it is represented, so grammar explains how the innate ability of a person to express itself in articulate sounds and in the word formed from them develops. The study of this manifestation in man in general is the subject of general grammar; the study of the peculiarities of the gift of speech in one particular nation is the subject of particular grammar. The first serves as the basis for the second; therefore, the grammar of the Russian language as a science is only possible as a general comparative one" [Davydov, 1852].

A person is fluent in at least one language practically from birth, and there is no need to teach him this specially: it is enough to give the child the opportunity to hear speech, and he will begin to speak on his own. An adult can also learn a foreign language, but he will do so worse than a child. It is easy to distinguish a foreigner speaking Russian from a person for whom Russian is the native language. We do not "remember" and do not "know" the Russian language; one can only remember and know a non-native language. All cases of aphasia and other speech disorders have a physiological cause: the destruction or blocking of speech centers. A person may forget his own name, but he will not forget how to express it: we may forget a word and then suddenly recall it, but we will never forget, say, the instrumental case, the subjunctive mood or the future tense; language is part of us. In other words, we all know how to speak our language, but we cannot explain how we do it. That is why foreigners puzzle us with the simplest questions: why, in Russian, do birds "sit" on wires when they are in fact standing, while dishes, on the contrary, "stand" on the table rather than "lie," as spoons do? What is the difference between the two Russian words for "now" (seychas and teper'), between the two phrases "Every day I walk past this tree" that differ only in the verb of motion chosen, and between the questions "Have you seen this film?" and "Have you watched this film?" It is difficult for a non-philologist to explain why we speak this way, and the philologist's answer, about free and bound combinations, lexical valency, grammatical categories and so on, will not reveal the mechanism either.

It is believed that every person has the grammar of his native language "in his head", as part of the mental-lingual complex (which includes the mental language): a mechanism that allows us to speak correctly. But grammar is not an organ, and no one yet knows what natural grammar actually is. Each language has its own grammar, which is why learning a foreign language is so difficult: one must memorize many words and grasp the laws by which these words are formed and combined. These laws are not like those operating in our native language, and hence there is such a thing as language interference, which generates numerous errors in speech. For grammarians such errors are a treasure trove of information, because the structural, grammatical and semantic features of the speaker's native language are superimposed on his knowledge of the non-native language and reveal the most interesting phonetic and grammatical features of both the native and the target language. To understand the grammar of Russian better, one must compare its facts with the facts of the grammars of languages of other systems. The task of the linguist is to "extract" the grammar, to make it explicit, to identify the linguistic units and describe their system. At the same time, we must remember that the grammars of all languages also have common, universal features. It was noted long ago that "there are some laws common to all languages, based not on the will of peoples but on the essential and unchangeable qualities of the human word, which ... serve to ensure that people of different centuries and countries can understand one another, and that our natural language serves as a necessary means of learning any foreign language" [Rizhsky, 1806]. Thus, the linguistic universals inherent in the grammars of all or most languages include the following properties: the expression of the relation between subject and predicate, and the marking of possessivity, evaluation, definiteness/indefiniteness, plurality, etc. If a language has inflection, then it has a derivational element; if the plural is expressed, then there is a non-zero morph expressing it; if there is a case with only a zero allomorph, then for each such case there is the meaning of the subject of an intransitive verb; if in a language both subject and object can appear before the verb, then that language has case; if the subject follows the verb and the object follows the subject, then the attribute expressed by an adjective is placed after the noun it modifies; if the language has prepositions and no postpositions, then the noun in the genitive case is placed after the noun in the nominative case, and so on [Nikolaeva, 1990].

There is also the problem of the relationship between the universal and the national-specific in the linguistic representation of the world.

The universal properties of the picture (model) of the world are due to the fact that any language reflects in its structure and semantics the basic parameters of the world (time and space), a person's perception of reality, non-normative evaluation, a person's position in the living space, the spiritual content of the individual, and so on. National specificity shows itself in how, to what extent and in what proportions the fundamental categories of being are represented in languages (the individual and the particular, part and whole, form and content, appearance and essence, time and space, quantity and quality, nature and man, life and death, etc.). The Russian language, for example, gives preference to the spatial aspect of the world over the temporal one. The locative principle of modeling the most varied situations is widespread in it. Existential sentences based on the idea of spatial localization contain messages about the world (There is no happiness in the world, but there is peace and will, Pushkin), about a fragment of the world (NSU has a humanities faculty), the personal sphere (I have friends and enemies), physical states and properties (I have headaches), the psyche (The boy has character), characteristics of objects (The chair has no legs), specific events (I had a birthday), abstract notions (There are contradictions in the theory), etc. The existential type also underlies the expression of quantitative and some qualitative meanings (We have a lot of books; The girl has beautiful eyes). The principle by which the personal sphere is modeled distinguishes "languages of being" (be-languages) from "languages of having" (have-languages); compare the Russian existential constructions (literally "at the boy there are friends", "at you there is no heart", "at me today there is a meeting") with English The boy has friends, You have no heart, I have a meeting today. In existential constructions the name of the person does not occupy the position of the subject, whereas in constructions with to have it becomes the subject.

The existential basis of the Russian language determines a number of its features. First, the prevalence of locative means of characterizing a name (cf. the Russian construction literally "at the girl [are] blue eyes" alongside The girl's eyes are blue). Second, the greater development of inter-object than of inter-event (temporal) relations (cf. the paradigms of nouns and of verbs). Third, the active use of locative prepositions, etymologically related prefixes, adverbs, case forms of nouns, etc. to express temporal and other meanings (cf. the same Russian elements in "up to the corner" and "before lunch"; "go around the corner" and "stay up past midnight"; "somewhere around two o'clock" and "he is, in a way, an interesting person"; "and here suddenly something strange happened"). One should also note the development and fine differentiation of the category of indefiniteness characteristic of existential structures (Russian has more than 60 indefinite pronouns), and the tendency to displace names of persons in the nominative case from the position of the subject and to form the subject with oblique cases (cf. He is sad and the impersonal construction literally "to him [it is] sad"), as well as the representation of the person as a space (locus) in which mental processes and events take place (Anger seethed inside him; Love was ripening in her). In addition, important components of a nationally specific picture of the world are the so-called key concepts of culture. In Russian these include, in particular, concepts of the spiritual sphere, moral evaluations, judgment, and spontaneous states of a person. Associated with them are such words fundamental to the Russian language as soul, truth, justice, conscience, fate (with its synonyms "lot", "destiny", "doom"), yearning, etc. The frequency of their use in Russian is significantly higher than that of the corresponding words in other languages, for example in English: per one million word occurrences, forms of the Russian lexeme for "fate" occur 181 times, whereas English fate occurs 33 times and destiny 22 times [Arutyunova, 1997].

For all the diversity of lexical and grammatical meanings in particular languages, a striking recurrence of those meanings is revealed at the same time. Languages seem to rediscover the same elements of meaning, giving them different form, which allows us to speak, across different languages, of certain fixed semantic blocks of the universe of meanings (ultimately predetermined by the properties of the world of objects, events and relations that is reflected in human thinking and exists independently of it): of parts of speech, nominal classes, number meanings, referential correlation, causative connections between pairs of events, the typical roles of participants in a communicative situation, the ways a typical event unfolds, the meanings of time, cause, condition, consequence, and so on. The universe of meanings is divided by each language in a particular way into semantic blocks that are standard and typical for that language. Each semantic block is internally complex, i.e., it is a decomposable semantic object. Semantic blocks to which relatively integral and independent signifiers correspond are, as we have already noted, called lexical meanings, while semantic blocks whose signifiers lack integrity and/or independence are called grammatical meanings (in the broad sense of the word; their exponents may be auxiliary morphemes, special syntactic constructions such as phrases and sentences, and so on) [Kibrik, 1987].

The numerous groups of words stored in the memory of a native speaker and forming his personal vocabulary are designated by the term thesaurus. The personal vocabulary of an average native speaker amounts to 10,000-100,000 words. Experiments show that this vocabulary is stored in memory in ordered structures. These structures are far more complex than a one-dimensional structure such as an alphabetical list: to extract the desired word from such a list one would have to run through all its elements in sequence, whereas the thesaurus is organized and ordered in a surprisingly expedient way. Thus, asking a native speaker to recall all the elements of a set causes difficulty, but as soon as some identifiers are supplied, a guess arises immediately. The multidimensionality of this information store (the personal vocabulary) makes it possible to retrieve the desired word without running through all the options, using different access keys (usually associates) to find it. Each word received in a message activates in the listener's memory a certain group of words related to it semantically (or in some other way).
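A sketch of such multidimensional access follows; the mini-lexicon and the choice of "access keys" are invented for illustration. The same entry is reachable through several indexes rather than by scanning an alphabetical list.

```python
# Multidimensional access to a personal lexicon: the same entry is reachable
# through several "access keys" (first sound, semantic field, associates).
from collections import defaultdict

lexicon = {
    "tiger": {"first_sound": "t", "field": "animal",    "associates": {"stripes", "jungle"}},
    "table": {"first_sound": "t", "field": "furniture", "associates": {"chair", "wood"}},
}

index = defaultdict(set)
for word, entry in lexicon.items():
    index[("first_sound", entry["first_sound"])].add(word)
    index[("field", entry["field"])].add(word)
    for a in entry["associates"]:
        index[("associate", a)].add(word)

# Retrieval by any combination of keys, without scanning the whole list.
candidates = index[("first_sound", "t")] & index[("associate", "stripes")]
print(candidates)  # {'tiger'}
```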

4.4. CONTENT AND SCOPE OF THE CONCEPT. Every concept has content and scope. The content of a concept is the totality of the essential features of the object that is thought of in the given concept. For example, the content of the concept "case" is the set of essential features of case: being a grammatical category, expressing relations, etc. The set of objects thought of in a concept is called the scope of the concept. The scope of the concept "case" covers all cases, since they share common essential features. The content and the scope of a concept are closely related. This connection is expressed in the law of the inverse relation between the scope and the content of a concept, which states that an increase in the content of a concept produces a concept with a smaller scope, and vice versa. Thus, by enlarging the content of the concept "meaning" with the new feature "lexical", we pass to the concept "lexical meaning", which has a smaller scope. The law of the inverse relation between the scope and the content of a concept underlies a number of logical operations, which will be discussed below.
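The inverse relation between content and scope can be seen on a toy universe of objects (objects and features invented for illustration): adding a feature to the content never enlarges, and here strictly shrinks, the extension.

```python
# Content (intension) as a set of features; scope (extension) as the set of
# objects in a toy universe that possess all of those features.
universe = {
    "meaning of 'run'":   {"meaning", "lexical"},
    "meaning of 'table'": {"meaning", "lexical"},
    "meaning of dative":  {"meaning", "grammatical"},
}

def extension(content: set) -> set:
    return {obj for obj, features in universe.items() if content <= features}

print(len(extension({"meaning"})))              # 3: the concept "meaning"
print(len(extension({"meaning", "lexical"})))   # 2: "lexical meaning" -- richer content, smaller scope
```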

4.5. CLASS. SUBCLASS. CLASS ELEMENT. Logic also operates with the notions of "class" ("set"), "subclass" ("subset") and "class element". A class, or set, is a certain collection of objects that have some common features. Such are, for example, the classes (sets) of faculties, of students, of language units, etc. On the basis of the study of a certain class of objects, the concept of that class is formed: thus, on the basis of the study of the class (set) of linguistic units, the concept of a linguistic unit is formed. A class (set) may include a subclass, or subset. For example, the class of students includes the subclass of humanities students, and the class of faculties includes the subclass of humanities faculties. The relation between a class (set) and a subclass (subset) is expressed with the sign "⊆": A ⊆ B. This expression is read: A is a subclass of B. So, if A is the class of humanities students and B is the class of students, then A is a subclass of the class B. Classes (sets) consist of elements. A class element is an object belonging to the given class. Thus, the elements of the set of faculties are the Faculty of Natural Sciences, the Faculty of Humanities, the Faculty of Mechanics and Mathematics, and the other faculties. A distinction is made between the universal class, the unit class, and the null, or empty, class. The class consisting of all the elements of the domain under study is called the universal class (for example, the class of planets of the Solar System, the class of Russian phonemes). If a class consists of one single element, it is a unit class (for example, the planet Jupiter, the consonant [b]); finally, a class that contains no elements at all is called the null (empty) class. An empty class is, for example, the class of Russian articles; the number of elements of an empty class is zero. Establishing the boundaries of a natural class of objects, i.e., resolving the question of its identity, is possible as a result of empirical or theoretical research. This is a difficult task, since the elements of extralinguistic reality are closely interconnected, and the researcher may have difficulty classifying them. An equally difficult task is determining the identity of a linguistic unit: almost all classification problems in descriptive linguistics are connected with the possible ambiguity in resolving the question of the boundaries of a language class.
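These relations map directly onto elementary set operations; the sets below are illustrative.

```python
# Class, subclass, element and empty class rendered as plain sets.
students            = {"Ivanov", "Petrova", "Sidorov"}
humanities_students = {"Ivanov", "Petrova"}

print(humanities_students <= students)   # True: the subclass relation A ⊆ B
print("Ivanov" in humanities_students)   # True: an element of the class
russian_articles = set()                 # the empty (null) class
print(len(russian_articles))             # 0: an empty class has no elements
```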

4.6. TYPES OF CONCEPTS. Traditionally, concepts are usually divided into the following types: (1) individual and general, (2) concrete and abstract, (3) positive and negative, (4) non-relative and correlative.

4.6.1. INDIVIDUAL AND GENERAL CONCEPTS. Concepts are divided into individual and general depending on whether one element or many elements are thought of in them. A concept in which one element is thought of is called individual (for example, "Novosibirsk", "Novosibirsk State University"). A concept in which many elements are thought of is called general (for example, "city", "university"): it covers many elements that have common essential features.

In philosophy, the individual denotes the relative separateness, discreteness and delimitation of things and events from one another in space and time, as well as the specific, unique features that constitute their unrepeatable qualitative and quantitative determinacy. Not only a separate object but also a whole class of objects can be regarded as something individual, if it is taken as something single, relatively independent, existing within the boundaries of a certain measure. At the same time, the object itself consists of a number of parts, which in turn act as individuals. The general expresses a certain property or relation characteristic of a given class of objects and events, as well as the law of existence and development of all the individual forms of existence of material and spiritual phenomena. As similarity among the features of things, the general is accessible to direct perception; as a law, it is reflected in the form of concepts and theories. In the world there are neither two absolutely identical things nor two absolutely different things that have nothing in common with each other. The general as a law is expressed in the individual and through the individual, and any new law first appears in the form of a single exception to the general rule [Philosophical Encyclopedic Dictionary, 1983].

The possibility of dividing concepts into general and individual proved extremely fruitful: firstly, for Saussurean linguistics as a whole, with its methodological dichotomy "speech - language" (speech is concrete speaking, unfolding in time and expressed in sound or written form, while language comprises the abstract analogues of the units of speech and is a system of objectively existing, socially fixed signs correlating conceptual content with typical sound; at the same time, speech and language form the single phenomenon of human language and of each specific language taken in a certain state); secondly, for the idea of the model in linguistics in all the diversity of its interpretations; thirdly, for the classification of concepts into individual and general, concrete and abstract, positive and negative, non-relative and correlative - an idea that was extrapolated onto linguistic material itself (see, for example, the lexical-grammatical classification of nouns).

General concepts can be registering and non-registering. Registering concepts are those in which the set of elements conceived in them can, at least in principle, be counted and registered. For example: "ending of the genitive case", "district of Novosibirsk", "planet of the Solar System". Registering concepts have a finite scope. A general concept that refers to an indefinite number of objects is called non-registering, for example the concepts "number" and "word". Non-registering concepts have an infinite scope. A special group is formed by collective concepts, in which the features of a set of elements constituting a single whole are conceived, for example "collective", "group", "constellation". Like general concepts, they reflect a multitude of elements (members of a team, students of a group, stars); however, as in individual concepts, this multitude is thought of as a single whole. The content of a collective concept cannot be attributed to each individual element within its scope; it refers to the whole set of elements. In the process of reasoning, general concepts can be used in a distributive (divisive) or in a collective sense. If a statement refers to each element of the class, the concept is used distributively; if the statement refers to all the elements taken as a unity and is not applicable to each element separately, the use is collective. Saying "Students of our group study logic", we use the concept "students of our group" in the distributive sense, since the statement applies to each student of the group. In the statement "Students of our group held a conference", the assertion applies to all the students of the group as a whole; here the concept "students of our group" is used in the collective sense. The word "every" is inapplicable to this judgment: one cannot say "Every student of our group held a conference."

4.6.2. CONCRETE AND ABSTRACT CONCEPTS. Concepts are divided into concrete and abstract depending on what they reflect: an object (a class of objects) or its property (a relation between objects). A concept in which an object or a set of objects is conceived as something existing independently is called concrete; a concept in which a property of an object or a relation between objects is conceived is called abstract. Thus, the concepts "book", "witness", "state" are concrete; the concepts "whiteness", "courage", "responsibility" are abstract. Since antiquity there has been a dispute between nominalists and realists about the reality of what concrete and abstract concepts stand for. Nominalism denies the ontological (existential) status of universals (general concepts). Nominalists hold that universals exist not in reality but only in thought. Thus the Cynic Antisthenes and the Stoics criticized Plato's theory of ideas: ideas, they held, have no real existence and are found only in the mind. In linguistics this dispute was indirectly reflected in the choice of a unified criterion for classifying nouns into lexico-grammatical categories.

4.6.3. POSITIVE AND NEGATIVE CONCEPTS. Concepts are divided into positive and negative depending on whether their content consists of properties inherent in the object or of properties the object lacks. Concepts whose content consists of properties inherent in an object are called positive. Concepts whose content points to the absence of certain properties in an object are called negative. Thus, the concepts "literate", "order", "believer" are positive; the concepts "illiterate", "disorder", "non-believer" are negative. The logical characterization of concepts as positive or negative should not be confused with a political, moral or legal assessment of the phenomena they reflect: "crime" is a positive concept, while "selflessness" is a negative one. In Russian, negative concepts are expressed by words with the negative prefixes ne- ("not"), bez- ("without"), a-, de-, in- and others.

A characteristic feature of the logical direction in grammar was the predominance of a functional (substantive) approach to the identification, definition and systematization of language categories.

During the period when the philosophical doctrine of rationalism dominated (17th - first half of the 19th centuries), the idea of universal ("general") grammars was revived, based on the conviction that speech corresponds absolutely to the natural logic of thinking. C. Ch. Du Marsais wrote that "in all languages of the world there is only one necessary way of forming meaning with the help of words." In 1660, at the Port-Royal monastery, the learned monks A. Arnauld and C. Lancelot created the so-called Port-Royal Grammar ("Grammaire générale et raisonnée de Port-Royal"), which became the model for writings of this kind (see Universal grammars). Such grammars were given primarily logical and philosophical significance (the philosophers J. Locke, D. Diderot, Du Marsais, G. W. Leibniz and others took part in developing problems connected with language). The categories of language were interpreted as corresponding to particular operations of the mind: its ability to imagine, to judge and to infer. The division of grammar sometimes received an epistemological interpretation. Thus, K. S. Aksakov divided grammar into three parts: part I - the name, reflecting the awareness of objects at rest; part II - the verb, reflecting the awareness of action, of being in motion; part III - speech (i.e. syntax), reflecting the awareness of life in its integrity. General grammars were usually not consistently logical; in particular, their descriptions also drew on the experience of properly linguistic research begun by Roman scholars (Priscian, Aelius Donatus and others). Nevertheless, a universal model composed of grammatical categories identified in Latin was taken as the basis. The influence of logical thought (in the version of Aristotelian formal logic) was great in the interpretation of the categories of syntax. In I. I. Davydov's definition, syntax "explores either the logical relations of concepts and their expression, or the logical relations of thoughts and their expression." Definitions of word classes indicated not their formal characteristics but their ability to perform a particular syntactic function. Thus, nouns were defined as "subject words"; words adapted to perform the function of a predicate were set apart as a special group (L. G. Jacob). Sentences were analyzed according to the model of the judgment (S is P).

Already within the logical direction of the 19th century it was pointed out that the categories of logic and the categories of grammar may diverge, making the description of specific languages according to a logical model inadequate, and attempts were made to modify logical principles so as to remove their contradiction with linguistic data. F. I. Buslaev declined to single out the copula as an obligatory component of sentence structure. At the same time he introduced into syntactic analysis the secondary members of the sentence - objects and adverbial modifiers - which have no analogues in the composition of the judgment. A consistent revision of the logical foundations of grammar was begun by the psychological movement of the second half of the 19th century; the target of its criticism was K. F. Becker's "Organism of Language," popular in European linguistics (cf. its criticism by H. Steinthal and A. A. Potebnja).

Criticism of the logical principles of analysis, made from different (formal-grammatical, psychological, typological, etc.) positions, was based on the following provisions:

not all categories of logic have a linguistic correspondence (languages do not reflect genus-species relations that are important for logic, the difference between true and false statements, etc.);

not all forms of language have logical content (for example, not all sentences express a judgment);

the number of logical and grammatical members of the sentence does not coincide, as a result of which the scope of the logical and of the grammatical subject and predicate differs (logically the sentence is divided into subject and predicate, whereas grammar distinguishes, within the subject group, the attribute, and within the predicate group, objects and adverbial modifiers);

logical and grammatical characteristics of sentence members can not only diverge but also be inverted: the predicate can take on the function of the logical subject, and the subject that of the predicate (see Actual division of the sentence);

analysis of sentences based on a single logical model does not allow real syntactic structures to be described in all their diversity (especially in non-Indo-European languages), and obscures the typological differences that exist between languages and the individual characteristics of specific languages;

logicized descriptions leave the psychological (emotional, evaluative, volitional) and communicative aspects of speech out of account;

logic cannot provide a reliable principle for classifying linguistic forms.

Criticism of the logical foundations of grammar led to a clearer delimitation of properly linguistic categories from the categories of logic, developed the technique of formal grammatical analysis and brought morphology to the fore. Interest in holistic, complete units of speech (the sentence, the period) gave way to attention to the minimal units of language (the morpheme, differential features, the seme). Logical principles and methods of analysis yielded to psychological, formal-grammatical and structural ones.

At the end of the 19th and the beginning of the 20th centuries, a number of logical and philosophical schools (mainly within the framework of neopositivism and empiricism) began to study the logical aspect of natural languages. Representatives of analytical philosophy, or the philosophy of analysis (G. Frege, B. Russell, L. Wittgenstein, R. Carnap, H. Reichenbach and others), undertook a logical analysis of the language of science in order to determine the boundaries of true knowledge. Proceeding from the principle of "distrust of language" as a means of expressing thought and knowledge, representatives of this school resorted to a universal symbolic notation in order to uncover the true logical structure of a sentence. The most widely used representation of a sentence is as a propositional function (see Proposition), corresponding to the predicate, of a certain number of arguments corresponding to the nominal components of the sentence. The logical language includes a set of constants: logical connectives (∧ - conjunction, "and"; ∨ - disjunction, "or"; → or ⊃ - implication, "if..., then..."; ≡ or ∼ - equivalence, etc.), operators, including quantifiers, indications of their scope, and so on.
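
To illustrate (the example sentence and the predicate names are chosen here purely for illustration, not taken from the sources discussed), a natural-language sentence such as "Every student read some book" is rendered in this notation as a quantified formula built from predicates and their arguments:

∀x (Student(x) → ∃y (Book(y) ∧ Read(x, y)))

Here Student, Book and Read are predicators, x and y are individual variables, and ∀ ("for all") and ∃ ("there exists") are quantifiers whose scope is marked by the parentheses.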

The use of an artificial language of logic revealed the ambiguity of many sentences of natural languages. In the 1960s-80s the problem of ambiguity came to be widely discussed in linguistics.

The philosophy of analysis developed a number of problems of logical semantics, whose main notions were the concept of the significat (intension, sense) and the concept of the denotation (extension, referent). In connection with the concept of the significat - the properly linguistic, virtual meaning of words and expressions - such problems were discussed as synonymy (identity of meaning), significance (the presence of meaning), the analyticity of sentences (truth by virtue of meaning, for example in tautological statements), the role of the meaning of the subject expression in forming the meaning of the sentence, and so on. In connection with the concepts of denotation and reference, the problems of the nature of naming and of the types and mechanisms of reference were studied. Important for logical semantics became the concept of descriptions introduced by Russell - common nouns and nominal expressions that acquire the ability to refer only in the context of a sentence. Russell contrasted descriptions with logical proper names, which retain their reference to the object they name even outside the context of speech. Analytical philosophy also began the elaboration of a typology of contexts (W. O. Quine): intensional contexts, created by verbs of thought, opinion and knowledge and by modal expressions, and extensional contexts, independent of the subjective mode.

Studying primarily the language of science, analytical philosophy did not take into account the communicative aspect of speech, the pragmatic conditions of communication (see Pragmatics) and the subjective factor associated with them. At the end of the 1940s some representatives of this direction (the first was Wittgenstein) pointed out the insufficiency of a theory that limits the functions of a sentence to asserting the truth of a judgment. Wittgenstein, whose conception formed the basis of the views of linguistic philosophy (G. Ryle, P. Geach, P. F. Strawson, J. Austin and others), turned to the logical analysis of ordinary language observed in its everyday functioning.

The influence of logical and philosophical trends told on the development of theoretical linguistics in the 1960s-80s, enlarging the range of problems studied, the methodology of analysis, the system of concepts employed and the metalanguage. Two directions took shape in linguistics: one gravitates towards the logical analysis of natural language proper, the other studies the logical aspect of the use of language in communication. The latter has drawn closer to sociolinguistics and psycholinguistics and has practically merged with the philosophy of ordinary language, which has itself evolved towards linguistic issues.

  • Jacob L.-G., Outline of a universal grammar, St. Petersburg, 1812;
  • Davydov I.I., Experience in a general comparative grammar of the Russian language, St. Petersburg, 1852;
  • Aksakov K.S., Experience of Russian grammar, M., 1860;
  • Bally Sh., General linguistics and issues of the French language, trans. from French, M., 1955;
  • Russell B., History of Western Philosophy, trans. from English, M., 1959;
  • his, Human Knowledge, [trans. from English], M., 1957;
  • Wittgenstein L., Logical-philosophical treatise, trans. from German, M., 1958;
  • Buslaev F.I., Historical grammar of the Russian language, M., 1959;
  • Carnap R., Meaning and Necessity, trans. from English, M., 1959;
  • Panfilov V.Z., Grammar and Logic, M.-L., 1963;
  • Stepanov Yu.S., Modern connections between linguistics and logic, “Questions of Linguistics”, 1973, No. 4;
  • his, Names. Predicates. Sentences, M., 1981;
  • Popov P.S., Styazhkin N.I., Development of logical ideas from antiquity to the Renaissance, M., 1974;
  • Paducheva E.V., On the semantics of syntax, M., ;
  • hers, Statement and its correlation with reality, M., 1985;
  • Arutyunova N.D., Logical theories of meaning, in the book: Principles and methods of semantic research, M., 1976;
  • Frege G., Meaning and denotation, trans. from German, "Semiotics and Informatics", 1977, iss. 8;
  • Petrov V.V., The problem of indication in the language of science, Novosibirsk, 1977;
  • History of linguistic teachings. Ancient World, L., 1980;
  • NZL (New in Foreign Linguistics), iss. 13: Logic and Linguistics, M., 1982;
  • History of linguistic teachings. Medieval Europe, L., 1985;
  • Stepanov Yu.S., In the three-dimensional space of language, M., 1985;
  • NZL (New in Foreign Linguistics), iss. 18: Logical Analysis of Natural Language, M., 1986;
  • Du Marsais C. Ch., Logique et principes de grammaire, P., 1879;
  • Robins R. H., Ancient and Mediaeval grammatical theory in Europe..., L., 1951;
  • Pinborg J., Die Entwicklung der Sprachtheorie im Mittelalter, Kph., ;
  • Bursil-Hall G. L., Speculative grammars of the Middle Ages. The doctrine of partes orationis of the Modistae, The Hague - P., 1971;
  • Ashworth E. J., Language and logic in the post-medieval period, Dordrecht, 1974;
  • La grammaire générale (des modistes aux ideologues), , 1977;
  • Hunt R. W., The history of grammar in the Middle Ages, Amst., 1980;
  • Coxito A., Lógica, semântica e conhecimento, Coimbra, 1981.

As noted, logical-linguistic and semiotic models represent the next, higher level of models. Characteristically, this class of models goes under several almost synonymous names:

Logical-linguistic models;

Logical-semantic models;


Semiotic representations.

This type of model is characterized by a higher degree of formalization. Formalization primarily affects the logical aspect of the existence and functioning of the modeled system. When constructing logical-linguistic models, the symbolic language of logic and the formalisms of graph theory and the theory of algorithms are widely used. Logical relationships between the individual elements of the model can be displayed by the expressive means of various logical systems (a brief description of which was given earlier in this book). Moreover, the strictness of the logical relations can vary widely, from relations of strict determinism to relations of probabilistic logic. Logical-linguistic models can also be built on the basis of several formal logical systems, reflecting different aspects of the functioning of the system and of knowledge about it.

The most common way of formally representing logical-linguistic models is the graph. A graph is a formal system designed to express relations between elements of arbitrary nature; it operates with model objects of two types: the vertex (point), symbolizing an element, and the edge (arc, link), symbolizing the relation between the elements it connects. In mathematical terms, a graph is a formal system described as G = (X, U), where X is the set of vertices and U is the set of edges (arcs). The set U consists of ordered pairs of vertices, and the same pair may appear in U any number of times, describing different types of relations. A classic example of a graph is shown in Fig. 2.4.

Figure 2.4 - Example of a transition graph.
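
A minimal sketch of the formal description G = (X, U), with invented vertices and relation labels, might look as follows; the same pair of vertices may occur several times under different labels, as the definition above allows:

# Sketch of a directed, labelled (multi)graph G = (X, U); all names are illustrative.
X = {"state_A", "state_B", "state_C"}                     # vertices (elements of the modelled system)
U = [                                                     # arcs: ordered pairs of vertices plus a relation label
    ("state_A", "state_B", "causes"),
    ("state_B", "state_C", "precedes"),
    ("state_A", "state_B", "requires"),                   # the same pair under a different relation
]

def successors(vertex):
    """All vertices reachable from `vertex` in one step, with the relation type."""
    return [(v, label) for (u, v, label) in U if u == vertex]

print(successors("state_A"))   # [('state_B', 'causes'), ('state_B', 'requires')]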

There are several types of graphs. If the classification of graphs is pictured as a hierarchy, the largest classes (the second layer of model objects from the top of the pyramid) are directed, undirected and mixed graphs. Depending on whether the relation represented by a line is reversible or irreversible, the line is called an "edge" (an unoriented, reversible relation, shown by an ordinary line) or an "arc" (an oriented, irreversible link, shown by an arrow).

Other familiar examples of graphs are hierarchical classifications drawn as rectangles connected by lines, metro maps, process flow charts and similar documents.

In logical-linguistic models the role of graph vertices is played by atomic (primitive) or complex statements in natural language, or by symbols replacing them. The links can be marked in various ways so as to characterize the type of connection (relation) as fully as possible. In particular, arcs can also reflect the presence of functional dependencies or operational links (input situation - operation - output situation); in these cases the arcs are marked in a special way.

One type of logical-linguistic model is the scenario, or scenario model. Scenario models (scenarios) are a kind of logical-linguistic model designed to display sequences of interconnected states, operations or processes unfolding in time. Scenarios can have either a linear or a branching structure, in which the conditions for passing to a particular branch may be specified, or the possible alternatives may simply be displayed without specifying conditions. The requirement of interconnectedness is not strict for scenario models and is rather conventional in nature, since it is established on the basis of the subjective judgments of experts and is also determined by how the goals of the activity are formulated. Thus, if you, the reader, wish to include in a scenario model reflecting the dynamics of the events that followed the terrorist attacks of September 11, 2001 only the USA and Afghanistan, that is your right; but if you want to count all oil-producing countries among the players, no one can judge you or dissuade you either. Scenarios, as a type of logical-linguistic model, are widespread in fields related to modeling the socio-political, economic and military situation, to creating information systems for supporting management activities, and in many others.
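
A branching scenario of the kind just described can be sketched, for example, as a mapping from states to condition-labelled transitions; the states, conditions and the terminal branches below are invented for illustration only:

# A toy branching scenario model: states and condition-labelled transitions.
scenario = {
    "initial_situation": [("condition_X_holds", "escalation"),
                          ("condition_X_fails", "negotiations")],
    "escalation":        [("always", "sanctions")],
    "negotiations":      [],          # terminal branch: alternatives simply end here
    "sanctions":         [],
}

def expand(state):
    """List the alternative continuations of a given state of the scenario."""
    return scenario.get(state, [])

print(expand("initial_situation"))    # the two condition-labelled alternatives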

It should be noted that in some cases it is difficult to draw the line between a scenario model and an algorithm. There is nevertheless an essential difference: an algorithm is a set of instructions whose execution should lead to some result, whereas a scenario model is not necessarily an algorithm - it may, for example, be a record of events whose repetition in the same sequence will not necessarily lead to the same situation as before. That is, the concept of a scenario model is broader than that of an algorithm. The concept of the algorithm is associated with an operational approach to modeling, and the algorithmic approach to the analysis of cause-and-effect relations has much in common with determinism (although many algorithms provide procedures for handling various exceptional situations, up to and including refusal to make a decision). The scenario model imposes less stringent restrictions on the nature of cause-and-effect relations.

Another important type of logical-linguistic model is the logical-semantic (semantic) model. Logical-semantic (semantic) models are a kind of logical-linguistic model oriented towards displaying the phenomenon (problem) under study, the solution being developed or the object being designed through a certain set of concepts expressed in natural language, fixing the relations between the concepts and displaying the meaningful, semantic connections between them. Characteristically, while using the same apparatus, this type of logical-linguistic model is oriented towards a somewhat different kind of activity - namely, towards the search for a solution and its synthesis from previously accumulated precedents, from existing descriptions of the subject area or from descriptions of ways of solving groups of problems similar in content.

In essence, this modeling method is a method of finding solutions to a certain set of problems on the basis of analyzing a body of formalized knowledge about some complex system. Conventionally, the application of the method can be described as a cyclically repeated sequence of two procedures: the procedure of constructing a system of statements reflecting knowledge about the system, and the procedure of analyzing the resulting body of knowledge with the aid of a computer (although at certain stages of the method the participation of an expert is required).

Knowledge about the system is represented in the form of a semantic network reflecting a set of items of information about the system and the links reflecting the semantic proximity of these items. The method of logical-semantic modeling was developed in our country in the first half of the 1970s as a tool for preparing, analyzing and improving complex decisions made at various levels of sectoral and intersectoral management, on the basis of semantic analysis of information. Two areas of application of logical-semantic modeling are distinguished:

Formation and evaluation of design solutions;

Analysis and optimization of organizational structures.

The elements of the logical-semantic model are statements in natural language (cognitive elements) and the links that exist between the phenomena and objects reflected in these statements. From the set of cognitive elements and links a network is obtained that describes the problem area.

A semantic network is a type of model that displays a set of concepts and the connections between them, determined by the properties of the modeled fragment of the real world. In the general case a semantic network can be represented as a hypergraph in which the vertices correspond to concepts and the arcs to relations. This form of representation makes many-to-many relations easier to implement than a hierarchical model does. Depending on the types of links, classifying networks, functional networks and scenarios are distinguished: classifying semantic networks use structuring relations, functional networks use functional (computable) relations, and scenarios use cause-and-effect (causal) relations. A variety of semantic network is the frame model, which implements the "matryoshka" principle of revealing the properties of systems, processes, etc.
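
A fragment of a semantic network of this kind can be sketched as a set of typed relations between concepts; the concepts and relation types below are invented for the example and only illustrate how structuring, functional and causal links can coexist in one network:

# A toy semantic network: concepts linked by typed relations.
network = [
    ("scenario_model", "is_a", "logical_linguistic_model"),      # classifying (structuring) relation
    ("semantic_model", "is_a", "logical_linguistic_model"),
    ("vertex_valence", "computed_from", "semantic_network"),     # functional (computable) relation
    ("operation", "causes", "output_situation"),                 # cause-and-effect (scenario) relation
]

def related(concept, relation=None):
    """Concepts linked to `concept`, optionally filtered by relation type."""
    return [(r, b) for (a, r, b) in network if a == concept and (relation is None or r == relation)]

print(related("scenario_model"))   # [('is_a', 'logical_linguistic_model')]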

Logical-semantic models make it possible to form thematically coherent descriptions of various aspects of a problem (as well as of the problem as a whole) and to carry out a structural analysis of the problem area. Thematically coherent descriptions are obtained by singling out, from the totality of cognitive elements of the logical-semantic network, those that relate directly to the given topic. A particular example of the application of logical-semantic modeling is provided by the hypertext systems that have become widespread on the global Internet.

Cognitive elements can be not only knowledge, but also statements of a different nature, for example, descriptions of individual tasks. In this case, logical-semantic models can be used to solve the problem of identifying and analyzing interrelated sets of tasks, their decomposition and aggregation, and to build trees of goals and tasks.

The logical-semantic model is represented as a connected undirected graph whose vertices correspond to statements and whose edges correspond to the semantic links between them. The characteristics of the graph are used to study the logical-semantic network. This method of representation makes it possible to introduce metrics of the semantic proximity of cognitive elements and estimates of their significance. Thus, for example, the number of connections closing on one element (the valence of a vertex) is taken as an expression of the element's significance, and the length of the path from element to element, measured in network nodes, as the semantic proximity of elements (their significance relative to some element).
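
The two metrics just mentioned can be sketched directly on such a graph: vertex valence as the number of incident edges, and semantic proximity as the length of the shortest path between two cognitive elements. In the sketch below the statement labels s1-s4 and their links are invented for illustration:

from collections import deque

# Undirected logical-semantic network: each cognitive element maps to its neighbours.
links = {
    "s1": {"s2", "s3"},
    "s2": {"s1", "s3", "s4"},
    "s3": {"s1", "s2"},
    "s4": {"s2"},
}

def valence(node):
    """Number of connections closing on one element (its significance estimate)."""
    return len(links[node])

def proximity(start, goal):
    """Length of the shortest path between two elements (semantic proximity estimate)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in links[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))
    return None  # the elements lie in disconnected parts of the network

print(valence("s2"))          # 3
print(proximity("s1", "s4"))  # 2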

Logical-semantic modeling makes it possible, on the basis of analyzing texts formulated by various experts, to reveal hidden dependencies between different aspects of a problem whose interrelation was not indicated in any of the texts submitted, and also to produce an objective ranking of problems and tasks according to their importance. Analysis of the graph allows one to detect the incompleteness of the model and to localize the places in the system of links and elements that need to be filled in. This becomes possible thanks to the construction of an interconnected system of statements about the subject area of an object and the automated selection and structuring of statements characterized by semantic proximity.

Thanks to the use of means of accumulating logical-semantic models, knowledge obtained in solving similar problems in related fields of activity can be actively reused - that is, the principle of historicity in decision-making can be implemented. This leads to a gradual reduction in the labor intensity of the processes of synthesizing new logical-semantic models.

The methods of logical-linguistic modeling are not limited to those listed here. Worth mentioning are the methods of logical-linguistic modeling of situations based on the analysis of message flows, developed by one of the authors of this book, P. Yu. Konotopov, which will be discussed later; the methods of logical-linguistic modeling of business processes; methods for synthesizing trees of goals and objectives; and other methods based on the use of logical-linguistic models. Logical-linguistic models are widely used in software development, in corporate information resource management and in many other fields where a certain level of formalization is required, combining rigor, intuitive clarity and high expressive power.

LOGICAL MODELS

Logical models represent the next level of formal representation (compared with logical-linguistic ones). In such models natural-language statements are replaced by primitive statements - literals - between which relations prescribed by formal logic are established.

There are logical models in which various schemes of logical relations are considered: relations of logical consequence, inclusion and others, which replace the relations characteristic of traditional formal logic. This last remark refers to the variety of non-classical logical systems in which the relations of traditional logic are replaced by alternative ones or are extended by relations of varying degrees of strictness (for example, relations of non-strict temporal precedence or succession). For a more consistent and complete description of logical systems of various kinds the reader should consult specialized sources.

When speaking of logical models it is difficult to avoid the terminology of logic. In this section, however, we will not provide a strict thesaurus of logic but will give a fairly free interpretation of some commonly used terms. First of all, let us introduce the concept of a statement. A statement, or literal, is a linguistic expression that has meaning within the framework of a certain theory and about which it can be asserted that it is true or false (in classical logic, at least). A logical operation is the operation of constructing a new statement from one or more statements. Logical formulas are written using propositional variables (for which statements are substituted), connectives (denoting the type of relation being established) and metacharacters that control the parsing of the formula (parentheses of various kinds, etc.). A syllogism is a system of logical formulas consisting of two initial premises (antecedents) and a conclusion (consequent). Logical systems of this kind have been the basis of traditional logical reasoning since the time of Aristotle. An extension of such a system is one consisting of several syllogisms, called a polysyllogism, or sorites. In such a system no restrictions are imposed on the number of initial premises and conclusions, but their ratio (provided the system of statements contains no contradictions) is subject to the condition that the number of conclusions cannot exceed the number of initial premises.

In accordance with the last remarks, when considering logical models two types should be distinguished: models resolved by a syllogistic scheme and models resolved by a polysyllogistic scheme. The first way of analyzing a system of statements requires rather cumbersome logical calculations, for which it is difficult to implement procedures that reduce the amount of enumeration, since pairs of statements must be selected by applying semantic criteria (otherwise you end up with a problem made of statements like the Russian saying "there is an elderberry in the garden = True, and an uncle in Kiev = False" - drawing conclusions from such a system of premises is a thankless task). For polysyllogistic models there are methods for reducing the calculations, but insufficient attention is currently paid to methodological and technological support for solving polysyllogisms. Today a relatively small number of scholars work on the theoretical and applied issues of solving polysyllogistic problems, among them our compatriots B. A. Kulik and A. A. Zenkin. The relevance of methods for solving polysyllogisms is explained by the growing need to analyze message flows that potentially contain contradictory statements or provide incomplete argumentation, for which such methods are well suited.

It should be said that one of the methods for solving polysyllogisms was proposed by the mathematician and logician Charles Dodgson (better known under the pen name Lewis Carroll), who generously scattered sorites through his books "Alice in Wonderland," "A Tangled Tale" and others.

So, for example, consider the following Carroll polysyllogism:

1) “All little children are unreasonable.”

2) "Everyone who tames crocodiles deserves respect."

3) “All unreasonable people do not deserve respect.”

It is necessary to determine what follows from these premises.

If we tried to solve such a problem within the framework of Aristotelian syllogistic, we would have to select suitable pairs of propositions one after another and derive consequences from them until all possibilities were exhausted. As the number of statements grows, this becomes an extremely laborious task, and its result does not always lead to an unambiguous conclusion.

L. Carroll developed an original method for solving polysyllogisms. The initial stage of solving such problems can be presented as the following sequence of operations (these stages are present both in L. Carroll's method and in B. A. Kulik's methodology):

- definition of the basic terms that make up the system of premises;

- introduction of a system of notation for the terms;

- selection of a suitable universe (a set covering all mentioned objects).

In the example given, the basic terms of the problem are: "little children" (C), "reasonable people" (S), "those who tame crocodiles" (T) and "those who deserve respect" (R). Obviously, these basic terms denote certain sets within the universe of "people". Their negations will be, respectively: "not little children" (~C), "unreasonable people" (~S), "those who do not tame crocodiles" (~T) and "those who do not deserve respect" (~R). The universe for this system is the set of all people (U).

In essence, we have formed a system of elements for a formal description of the subject area reflected in the polysyllogism. Let us complete the example using B. A. Kulik's approach (to read the symbolic notation it is enough to recall one's school years)...

So, the basic judgments of the sorites are written as C ⊆ ~S, T ⊆ R and ~S ⊆ ~R (the sign ⊆ symbolizes the relation of inclusion of sets). I remember from my school years that inverting the signs of both sides of an inequality leads to interesting results (turning the "greater than" sign into a "less than" sign, and so on). In our case an analogous move is quite appropriate: applying the negation operation to both terms inverts the inclusion relation, that is, from C ⊆ ~S we obtain S ⊆ ~C - "All reasonable people are not little children," and so on. Chaining the inclusions, we next obtain C ⊆ ~S ⊆ ~R ⊆ ~T, whence C ⊆ ~T and, by contraposition, T ⊆ ~C.

So we obtain: "All little children do not tame crocodiles" (C ⊆ ~T) and "All who tame crocodiles are not little children" (T ⊆ ~C). Readers can decipher the remaining statements on their own.
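
Under the set-inclusion reading shown above, the derivation can also be carried out mechanically: every premise A ⊆ B yields its contrapositive ~B ⊆ ~A, and chaining inclusions by transitivity produces the conclusions. The sketch below is a simplified illustration of this idea, not a reproduction of B. A. Kulik's or Carroll's actual procedure:

# Toy solver for the sorites: premises as inclusions between terms, e.g. C ⊆ ~S.
# A term is a pair (name, positive?); negation flips the flag.
def neg(t):
    return (t[0], not t[1])

premises = [(("C", True), ("S", False)),   # all little children are unreasonable: C ⊆ ~S
            (("T", True), ("R", True)),    # crocodile tamers deserve respect:     T ⊆ R
            (("S", False), ("R", False))]  # the unreasonable deserve no respect: ~S ⊆ ~R

# Close the set of inclusions under contraposition and transitivity.
base = set(premises) | {(neg(b), neg(a)) for (a, b) in premises}
incl = set(base)
changed = True
while changed:
    changed = False
    for (a, b) in list(incl):
        for (c, d) in list(incl):
            if b == c and a != d and (a, d) not in incl:
                incl.add((a, d))
                changed = True

def show(t):
    return ("" if t[1] else "~") + t[0]

# Derived conclusions not among the original premises or their contrapositives:
for (a, b) in sorted(incl - base):
    print(show(a), "⊆", show(b))   # includes C ⊆ ~T and T ⊆ ~C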

Logical models are widely used to describe knowledge systems in various subject areas, and the level of formalization of the description in such models is significantly higher than in logical-linguistic ones. Suffice it to note that one statement (cognitive element) of a logical-linguistic model corresponds, as a rule, to several statements of a logical model.

Often, alongside the classical logical formalism, such models use the formal tools of set theory and graph theory, which extend the possibilities for describing and representing relations in logical models; here their kinship with logical-linguistic models can be traced. Like logical-linguistic models, logical models allow qualitative analysis; however, when supplemented with the formal means and methods of other branches of mathematics (which is done quite easily, since logic serves as a metalanguage both for natural language and for artificial languages), logical models also allow rigorous numerical analysis.

Logical models are most widely used in the field of building artificial intelligence systems, where they serve as the basis for deriving logical conclusions, in response to an external request, from the system of premises recorded in the knowledge base.

Limitations connected with the specifics of the subject area (the fuzziness and incompleteness of expert knowledge) have led, in recent years, to the particular popularity in the artificial intelligence industry of quasi-axiomatic logical systems (an approach developed by the Russian scientist D. A. Pospelov). Such logical systems are deliberately incomplete and do not meet the full set of requirements characteristic of classical (axiomatic) systems. Moreover, for most of the logical statements forming such a system a domain of definition is specified within which these statements retain their validity, and the entire set of statements on which the analysis is based is divided into generally valid statements (valid for the whole model) and statements that hold only within a local system of axioms.

The same reasons (the incompleteness and vagueness of expert knowledge) have made popular such areas of logic as many-valued logics (the first works in this area belong to the Polish scientists J. Łukasiewicz and A. Tarski, in the 1920s-30s), probabilistic logics and fuzzy logic (Fuzzy Logic, whose theory was created by L. Zadeh in the 1960s). This class of logics is actively used in the synthesis of logical models for artificial intelligence systems intended for situation analysis.

Since most of the knowledge and concepts that humans use are fuzzy, L. Zadeh proposed the mathematical theory of fuzzy sets for representing such knowledge; it allows one to operate with such "interesting" sets as the set of ripe apples or the set of serviceable cars. Operations of fuzzy logic are defined on such sets.
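
A minimal sketch of the idea follows; the membership function, its threshold values and the example degrees are illustrative assumptions, not Zadeh's original worked example, although the min/max/complement operations are the classical fuzzy-set operations:

# A toy fuzzy set "ripe apples": membership is a degree in [0, 1] rather than yes/no.
def ripeness(sugar_content):
    """Invented piecewise-linear membership function of the fuzzy set 'ripe'."""
    if sugar_content <= 8.0:
        return 0.0
    if sugar_content >= 14.0:
        return 1.0
    return (sugar_content - 8.0) / 6.0

# Classical fuzzy operations: AND = min, OR = max, NOT = 1 - membership.
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

ripe = ripeness(11.0)                 # 0.5 - the apple is "half ripe"
print(ripe)
print(fuzzy_and(ripe, 0.8))           # degree to which the apple is "ripe AND undamaged"
print(fuzzy_not(ripe))                # degree of "not ripe"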

Systems using fuzzy logic models are developed specifically to solve ill-defined problems and problems using incomplete and unreliable information. The introduction of fuzzy logic apparatus into the technology of creating expert systems led to the creation of fuzzy expert systems (Fuzzy Expert Systems).

Fuzzy logic has become especially popular in recent years, since the US Department of Defense began seriously funding research in this area. The world is now experiencing a surge of interest in analytical software products created with fuzzy-logic methods and fuzzy-logic models. True, it is already difficult to call these models logical: instead of the traditional mathematical apparatus of binary logic they make wide use of many-valued, probabilistic relations of measure and membership. Fuzzy logic makes it possible to solve a wide class of problems that cannot be strictly formalized; fuzzy-logic methods are used in control systems for complex technical systems operating under unpredictable conditions (aircraft, precision weapon guidance systems, etc.).

Many foreign analytical technologies are not supplied to the Russian market because of export restrictions, and the tools for independent application development remain the know-how of the manufacturing companies: it is economically more profitable to supply ready-made applications than to create an army of competitors (especially in countries with "cheap" brains).

In essence, logical models represent the last stage of formalization at which concepts formulated in the language of human communication can still act as elements of a statement. But, as we have seen, elements of formal systems already intervene actively in logical methods; these will be discussed further on.

Here we will have in mind languages specially created by logic as a means for the precise analysis of certain procedures of thinking - above all, the logical derivation of some statements from others and the proof of statements. Before describing the special logical languages (the language of propositional logic, YALV, and the language of predicate logic, YLP), it is useful to note some of their features in comparison with ordinary (colloquial, national) languages; in doing so we will mainly have in mind the language of predicate logic, as the richer of the two in expressive capabilities.

1. YLP is an artificial language; it is created for definite purposes (for example, for the axiomatic construction of theories, for analyzing the content of natural-language statements and revealing the logical forms of statements and concepts and the relations between statements and concepts, and for describing the rules of reasoning and the forms of inference and proof).

    2. If in ordinary (natural) languages three semiotic aspects are distinguished - syntactic, semantic and pragmatic - then in the languages being described there are only the syntactic and the semantic aspects. As mentioned earlier, the presence of the pragmatic aspect in natural languages is connected with the uncertainties found in them and with the absence of definite rules (the semantic ambiguity of some expressions and, above all, the lack of precise rules for constructing their expressions, for example sentences). In YLP there are no such uncertainties: it has precise rules for forming the analogues of natural-language names (terms) and the analogues of its declarative sentences (formulas), as well as precise rules determining the meanings of its expressions. Languages of this kind are called formalized.

    3. In a natural language, alongside the part intended for describing extra-linguistic reality (the objective part of the language), there are words denoting expressions of the language itself ("word", "sentence", "verb", etc.) and sentences that assert something about the language itself ("Nouns change according to cases"). Such languages are called semantically closed. The artificial languages of logic contain only the objective part; more precisely, they contain only means for describing some reality external to them. Everything that serves to characterize the expressions of such a language and is needed for its description is set apart in a special language. The language being described (in this case YLP or YALV) is called the object language, and the language used to describe and analyze it is called the metalanguage with respect to the given (object) language.

    4. YLP (like YALV) is usually characterized as a symbolic language, because special symbols are used here, primarily to denote logical connectives and operations. Special symbols are also used as signs for objects, properties and relations. The use of symbols shortens the notation of statements and makes it easier, especially in complex cases, to grasp the meaning of the corresponding statements.

5. A characteristic feature of YLP and YALV - as systems of so-called classical symbolic logic - is their extensional character. For YLP it consists in the fact that the objective values of its terms (the analogues of natural-language names) depend only on the objective values of their components, and the truth values of complex formulas depend only on the truth values of their components. The same applies to YALV. Generally speaking, the extensionality of these languages lies in the fact that the objective meanings of the analogues of complex natural-language names depend only on the objective meanings, but not on the senses, of their components, and the truth values of the analogues of complex natural-language statements depend on the truth values (but again not on the senses) of their components. This is expressed, for example, in the fact that the properties of, and relations between, objects within statements are regarded (or at least can be regarded) as certain sets of objects - the extensions of the corresponding properties and relations - and also in the fact that any part of a complex statement which is itself a statement may be replaced by any other statement with the same truth value.
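
The extensionality just described can be illustrated truth-functionally: the truth value of a compound formula is computed from the truth values of its parts alone, so any subformula may be swapped for another with the same truth value without affecting the whole. The formulas and the truth-value assignment in the sketch below are arbitrary examples:

# Truth-functional (extensional) evaluation: only the truth values of the parts matter.
def implies(a, b): return (not a) or b

p, q, r = True, False, True           # arbitrary truth-value assignment

whole_1 = implies(p and q, r)         # (p ∧ q) → r, where the subformula p ∧ q is false
whole_2 = implies(not r, r)           # replace the false subformula p ∧ q by another false statement, ¬r
print(whole_1, whole_2)               # True True: substituting an equally true/false part preserves the result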

Most important for these languages is the presence of precise rules for forming their expressions and assigning meanings to them, and especially the fact that each significant form acquires a definite meaning. In natural language, by contrast, there are expressions (sign forms) that have different semantic contents in different cases of use. Thus, the expression "all the books in this library" clearly has different meanings in the uses "all the books in this library are written in Russian" and "all the books in this library weigh 2 tons."

An important feature of YLP is also the direct correspondence between the structures of its sign forms (formulas) and the structures of the meanings they express. The correspondence consists in the fact that to each essential part of the structure of the meaning there corresponds a definite part of the sign form. Thus, in the structure of the meaning of a simple declarative sentence, that is, in the structure of a simple statement, one must distinguish, for example, the individual objects or classes of objects about which something is asserted in the statement (in the sign forms they correspond to singular or general names), as well as the properties or relations whose presence in those objects is asserted (in YLP, predicators serve as signs for them).

Reasoning carried out in natural language with regard to the meanings of linguistic expressions - reasoning that is, in essence, an operation with these meanings (with mental objective situations) - can be represented in a formalized language as operations with the sign forms of statements. These operations are carried out according to rules of a formal character, "formal" in the sense that applying them requires taking into account only which signs the sign forms are composed of and in what order those signs are arranged. Clearly, this possibility of abstracting from the meanings of statements when describing the forms of correct reasoning is necessary for the automation of many intellectual processes and is a condition for maximal precision in constructing scientific conclusions and proofs, which thereby also become verifiable.
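
As an illustration of a rule that looks only at the arrangement of signs, here is a minimal sketch of modus ponens applied to formulas encoded as tuples; the tuple encoding itself is an assumption made for the sketch, not a standard notation of YLP:

# Modus ponens applied purely to sign forms: no meanings are consulted,
# only the shapes of the formulas are compared.
def modus_ponens(premise, implication):
    """From A and ('→', A, B) derive B, comparing sign forms literally."""
    if isinstance(implication, tuple) and len(implication) == 3 and implication[0] == "→":
        _, antecedent, consequent = implication
        if antecedent == premise:
            return consequent
    return None

A = ("P", "a")                          # an atomic formula P(a)
rule = ("→", ("P", "a"), ("Q", "a"))    # the formula P(a) → Q(a)
print(modus_ponens(A, rule))            # ('Q', 'a') - derived without interpreting P, Q or a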

People unfamiliar with modern formal logic often form the opinion that, since it deals with special formalized languages, it studies special forms of reasoning carried out precisely in these languages. However, there are no special forms of this kind. Formalized languages are only a means of bringing out the various kinds of relations between things which constitute the logical contents of statements and which determine the forms of correct reasoning in any process of cognition.

The language of predicate logic, as we shall see below, is the result of a certain reconstruction of natural language, the purpose of which is to bring the logical forms of statements into correspondence with their sign forms: the linguistic forms of this language adequately express the semantic structures of statements, which, as has already been emphasized, is by no means always the case in natural language.

The language of propositional logic is the result of a certain simplification of language, owing to the fact that the structure of simple statements is not taken into account in it. This circumstance gives rise to a semantic category absent from natural language, namely propositional signs (symbols, variables): p₁, p₂, ..., pₙ, intended to designate statements without regard to their internal structure. It is important that here (in YALV) the composition of simple statements, their subject-predicate structure, is not revealed; only the logical forms of complex statements are. Since this language has a simpler structure, it is methodologically more expedient to begin the consideration of the artificial languages of logic with it.