#nomological determinism
Explore tagged Tumblr posts
Text
I JUST FOUND OUT THAT NOMOLOGICAL DETERMINISM IS A WORD THAT EXISTS AND THAT OTHER PEOPLE HAVE THOUGHT ABOUT! THERE’S A WORD FOR LIKE MY ENTIRE WORLDVIEW! THAT’S SO COOL!
#/hyp to the entire worldview bit but OH MY GOD THIS IS A LEGIT CONCEPT OH MY GOD /POS#causal determinism#nomological determinism#i've always referred to this as 'dice theory' bc i've always described it to people as like...#the whole world is a table and everybody is a die and if you knew the materials of the dice and the air bubbles and#the paint chips and the exact air pressure and the angle and force and spin with which the dice are thrown#then you would be able to predict the rolls and then when the dice hit the table they also dent the table in certain ways which#influence the way future dice will roll and also the dice themselves are dented#which influences the way they'll roll in the future#i've always explained it like THAT because i thought i made up the idea but NO OH MY GOD#IT EXISTS TO OTHER PEOPLE!#dante dicit#philosophy#all caps
0 notes
Text
Towards a nomenclature of Realism
A few times in the last year I've found myself idly wondering whether analogical descriptive-realism and nomological descriptive-realism (which are my two preferred ways of specifying the relationship between reality and language, and hence of stating a descriptive-realist theory) are equivalent or not. If they're not equivalent, then do some theories involve both?
(If you haven't thought about the distinction in the way I have, bear with me and I'll explain it to you. This is worth doing because I don't think other people have done it, and I can't tell if people are using the words analogical or nomological in non-standard ways. Hence, I'm just gonna be extremely pedantic and insist that I mean these things precisely as they're used in like three different disciplines over a span of 70 years.)
The analogical/nomological distinction means something like "are there things that exist such that, to talk about them, we must use the same sort of proposition as is used to talk about each other?" Since people are often doing philosophy of language, the question can be translated as "are we using the same predicate when we talk about A and B?"
The analogical/nomological distinction, as it applies to (1) the relation between the propositional content of language and the world, i.e. what exists and what properties and in what ways, is a very old one. Philosophers have been arguing over it in one way or another, over whether it's a real distinction or not, since at least Descartes. It pops up every once in a while in analytic philosophy, under the terms "analyticity," "disquotationalism," and "descriptivism," in the 20th century, and again as a central part of David Lewis's view in the 1980s. Every once in a while it has a little flurry of interest, and then people stop writing about it for a while.
In its analytical philosophy version, the debate is whether or not there is some "way" or "property" that all the things in the world "have in common." What is it? Does it have an individual determination? For instance, does it consist in being red, or hot, or happy, or human? Or does it consist in something more fundamentally abstract? For instance, does it consist in being dependent on a physical substrate, or consisting of particles?
The nomological/analogical distinction is also important in science. Specifically, it's central to the problem of how we can know things about the world. Is it by putting it into words -- nomologically? Or by observation, analogically? In this context the same problem is ontological: is there some kind of thing, or things, of which the world consists? Can you identify what all of them have in common? Or is there a more fundamental, abstract kind of thing they're made of, which is not themselves? (The logical positivists and other members of the logical family
4 notes
·
View notes
Link
an Italian philosopher, physician and free-thinker, who was one of the first significant representatives of intellectual libertinism. He was among the first modern thinkers who viewed the universe as an entity governed by natural laws (nomological determinism). He was also the first literate proponent of biological evolution, maintaining that humans and other non-human apes have common ancestor.
tongue cut out, strangled, burned.
70 notes
·
View notes
Text
Compatibilism & Metaphysical libertarianism
“Metaphysical libertarianism (hereafter ML) is the doctrine that human beings possess free will, that free will is incompatible with determinism, and that determinism is false. Its nomenclatural affinity with political and economic libertarianism (hereafter PEL) is by no means accidental, since, as I am going to argue, the viability of the latter depends on the viability of the former.
I believe that no argument is needed to convince the readers that the so-called “hard determinism,” which rules out free will, and hence also independent personal choice, is incompatible with PEL. On the other hand, the “soft” variety of determinism, known under the name “compatibilism,” is oftentimes claimed to be reconcilable with PEL. According to compatibilism, the assumption that every event (including every event of personal choice) is causally necessitated by antecedent events, and the resulting conclusion that nobody could ever have chosen otherwise than he in fact did, are perfectly compatible with laissez-faire.
I think this is mistaken—I remain convinced that as soon as one grants that every human decision can be traced back to factors beyond one’s control (e.g., genetic makeup, environmental influences, personal upbringing etc.), the notions of sovereign choice and personal liberty become empty. For instance, the above concession enables the so-called “luck egalitarians” to claim that an adult man who remains unemployed on a free market ended up in such a situation involuntarily, since, e.g., his lack of appropriate competences, and his lack of willingness to gain any, are determined by genetic and environmental factors, over which he had no control—consequently, his predicament should count as an instance of “brute luck” rather than “option luck,” and thus should be offset by welfare subsidies.1
The internal logic (or lack thereof) and exact political implications of the above scenario need not concern us here, however. In this paper, I shall chiefly focus on a different, even more thoroughgoing and illustrative example... [...]
...In sum, compatibilism collapses into hard determinism.
But I do not believe that accepting this conclusion should make us hard determinists. Let me therefore outline my final, tentative, libertarian suggestion. The crucial question that needs to be asked in this connection is: could Laplacean calculators really work? If given the exact initial data of the universe, could they really reveal to their users the complete history of cosmic events? This is precisely what I wish to dispute; my claim is that there is nothing nomologically impossible in the notion of an event which is neither determined nor random, an encounter with which would inevitably crash the fantastic devices in question.
Imagine a being (Z) confronted with an apple tree and a pear tree. To make the scenario somewhat simplified, let us suppose that Z possesses two sets of brain cells, 10 brain cells each, responsible for producing a taste for apples and pears respectively. Thus, the intensity of the corresponding food-desires is fully equal. Further, let us assume that the environment (other than the trees) and Z’s mental history either do not influence these two desires or influence them to an equal degree. And finally, let us suppose that Z is not wont to establishing a preference in problematic cases by resorting to such procedures as coin flipping. I do not think that any of these assumptions involves a nomological impossibility.
Now, the essential question to ask is: what will Z choose, pears or apples? Can any Laplacean device tell us that? It seems to me that it cannot: the chances are even, and the outcome is not determined. But, contrary to what the friends of compatibilism might suggest, it is not random either. If it were to be truly random, then some quantum trigger could cause Z to do virtually anything: recite a poem, do a somersault, climb up and down the tree without picking any fruit etc. Surely, this is not what we should expect. The crux of decision-driven causation is that the range of relevant options is determined by the data of the environment and the agent’s psychology 6 (the agent clearly has to have something to choose from), but it is up to him which of these options to pick...” — Jakub Wisniewski, Free Will & Preactions, Libertarian Papers 1, no. 23 (2009): 1–9. Editors note: originally came across Oisin Deery - Philosopher at Monash who wrote: “The Fall from Eden: Why Libertarianism Isn’t Justified by Experience,” Australasian Journal of Philosophy - which made little sense before the above cleared the issue up.
#Compatibilism#metaphysical#libertarianism#philosophy#determinism#free will#jakub wisniewski#Laplacean#PEL#ML
2 notes
·
View notes
Text
FORECASTING IN LAW
New Post has been published on https://www.aneddoticamagazine.com/forecasting-in-law/
FORECASTING IN LAW
FORECASTING IN LAW
Table of Contents
Foreword
Chapter I – Definition of Theory and Law
1. Definition of Law
2. Definition of Theory
3. Theory and Law in Science
3.1 Hypothesis
3.2 Theory
3.3 Theorem
3.4 Tautology
4. Common Law, Civil Law, and Positive Law
4.1 The Common Law
4.2 The Civil Law
4.3 The Positive Law
Chapter II – Definition of Forecasting
2.1 Definition of Forecasting
2.2 Forecasting in Determinism
2.3 Forecasting in Game Theory
2.4 Forecasting in Chaos Theory, and Non-linear Dynamics
Chapter III – Forecasting in Law
3.1 Determinism in Law
3.2 Uncertainty in Law
3.3 Non-linear Dynamics in Law
3.4 Relations of Order and Equivalence in a Body of Law
Chapter IV -First Conclusion: Law as Power Imposition
4.1 Law as Commandments of a Revealed Knowledge
4.2 Law as Principles Governing Practice
4.3 Law as Power Imposition
Chapter V – Second Conclusion: Law as Scientific Theory
5.1 Law as a Hypothesis
5.2 Law as a Theory
5.2 Law as a Theorem: Commandments as Axioms
5.3 Law as a Tautology
Foreword
In the proceeding of this work we will use logic.
Following the definition given by Professor Douglas Dowing of the School of Business and Economics at Seattle Pacific University, ‘Logic is the study of sound reasoning’. Therefore, Logic is the essential means of Science to determine whether a Law (Hypothesis, Theory, and Theorem) can be true or false.
The analysis of logic focuses on the study of arguments. An argument is a sequence of sentences (called premises) that lead to a resulting sentence (called the conclusions). Any argument is valid if the conclusion does follow from the premises. In other words, if an argument is valid and all its premises are true, then the conclusion must be true.
Logic can be used only to determine whether an argument is valid, and it cannot determine whether the premises are true or false. Once an argument has been shown to be valid, then all other arguments of the same general form will also be valid, even if their premises are different.
Arguments are composed of sentences. Sentences are said to have the truth value T (corresponding to what we normally think of as ‘true’) or the truth value F (corresponding to ‘false’).
In studying the general logical properties of sentences, it is customary to represent a sentence by a lower-case letter such as p, q, or r, called a sentence variable or a Boolean variable. Sentences either can be simple sentences or can consist of simple sentences joined by connectives and called compound sentences.
In this work Logic will be used to determine whether: (i) a Law has been constructed in a rational way; (ii) the premises of a Law can be defined as scientific statements; (iii) premises can be defined as hypothesis, theory, or theorem.
The scope of this work is to try to generate a nomology, i.e. a scientific process, which deals with the study of hypothesis, theories and theorems.
CHAPTER 1
Definition of Theory and Law
1. Definition of Law
Law was defined by Blackstone as a rule of action prescribed or dictated by a superior, which an inferior was bound to obey.
Austin described a law as being a command to a course of conduct; a command being the expression of a wish or desire conceived by a rational being that another rational being should do or forbear, coupled with the expression of an intention in the former to inflict some evil on the latter, in case he did not comply. But besides laws properly so called, Austin alluded to laws improper, imposed by public opinion; also laws metaphorical or figurative, e.g. the laws regulating the movements of inanimate bodies, or the growth or decay of vegetables; or that uniformity in the sequence of things or events in which often goes by the name of law. Law was sometimes used as opposed to equity; now, however, by the Supreme Court Act 1981, s. 49, full effect is given to all equitable rights in all branches of the Supreme Court and in inferior courts.
To our opinion and understanding:
1.1 A law can be:
A revelation of a supreme will (as the revelation of the divine will set forth in the Old Testament, in the Koran, or in any other Sacred Scripture).
Principles governing practice (as in a profession of arts, or in an administrative regulation).
A statement of the observed regularity of nature, i.e. a rule or principle stating something that always works in the same way under the same conditions.
The science that deals with the interpretation and application of hypothesis, theories and theorems.
The real improvement or the engineering of any existing natural condition.
1.2 A law can be stated as:
Commandments
Customs
Uniform Rules governing practice
A set of general principles drawn from any body of facts or abstract thought (as in science);
A set of general principles drawn to apply, improve or manipulate any body of facts or abstract thought;
1.3 Commonly a law can degenerate into:
The control brought about by enforcing rules (forces of law and order).
A power imposition.
2. Definition of Theory
2.1 Commonly a theory can be:
A statement of the observed regularity of nature, i.e. a rule or principle stating something that always works in the same way under the same conditions.
The science that deals with the interpretation and application of hypothesis, theories and theorems.
The real improvement or the engineering of any existing natural condition.
2.2 A theory can be stated as:
A set of general principles drawn from any body of facts or abstract thought (as in science);
A set of general principles drawn to apply, improve or manipulate any body of facts or abstract thought;
Logically, therefore, any theory is a law when it is not stated as (i) commandments; (ii) customs; (iii) uniform rules for governing practice; (iv) force of law and order; (v) imposition of power.
Any theory is an argument, i.e. a sequence of sentences (called premises) that leads to a resulting sentence (conclusion).
An argument is valid if the conclusion does follow from the premises, and if the reasoning uses a formal language, whose syntax is expressed through logic operators (connectors, quantifiers, modal operators) that can eliminate ambiguity of common language. In other words, if an argument is valid, and all its premises are true, then the conclusion must be true.
A true theory is stated through a theorem, which is the logical process by which verity is deducted from the premises (axioms and/or other theorems) of the theory itself by means of a formal language (Mathematics, Drawings, Verbal Language, etc.)
Logic is pertinent to the demonstration process of a statement, whilst science is pertinent to the demonstration process of any premise.
3. Theory and Law in Science
In science theory and law are terms that indicate the same subject, i.e. the statement of a more or less plausible or scientifically acceptable general principle offered to explain observed facts.
In Physics, Ohm’s law and Ohm’s theory indicate the same statement about principles of electricity.
Therefore, in science law and theory have the same worth when they do not represent commandments, customs, general rules governing practice, forces of law and order, and imposition of power.
General principles of electricity are accepted because their practical effects are observable and easy demonstrable in the earth system of reference. Nobody would state anything different.
General instincts to survival, reproduction, freedom, exchange, and knowledge are needs of nature, therefore they are rules of doctrine of natural law, and nobody can state that anything derived from nature can be unreal or false. Anyway, even if their effects are more difficult to be demonstrated than the effects of electricity are, therefore there will always exist someone somewhere who will issue a human law to regulate, by enforcing rules, those effects.
In physics, Newton’s theory and the general theory of relativity move to a similar gravitation law (the latter more accurately than the former.) In effect, that is the principle of correspondence, which indicates the tendency of two different physical laws to coincide when they are deducted from at least two different theories.
A few epistemologists arrive to state that the scientific assumption of any term is nothing else than the complex of empiric operations performed when they are used. Once such condition is satisfied, it is possible to state that the theory under exam is scientifically valid. Therefore, theory results verified if observable data will be effectively related between them in the same way the relations of the terms of ‘correspondence rules’ connected to data. In any contrary case we can say that the theory results falsified.
The inconvenience of this statement lays in the presumption that there always exist rules of correspondence for all the terms of a theory. In the reality that never happens, for it is possible to demonstrate that almost any scientific theory contains terms with no rules of correspondence, the so called “theoretical terms”.
Epistemologists tried to solve this problem with several modifications to a more strict empiricism, looking, overall, for shrewdness, which may give more scientific sense also to the propositions containing some theoretic term.
Anyway, it remains the fact that the concept itself of verification generates several doubts even when it is referred to statements having no theoretic term. Those doubts are due to the circumstances that no observation, as accurate as possible, can ever allow the verification of any authentic scientific law.
In fact, every scientific laws state the existence of a certain relation among variable terms in infinite dominions, so that in order to verify a law, it should be needed to verify that the same relation exists amongst an infinite number of variables (those corresponding to such variable terms), when it is obvious that data effectively reachable by observation are always in finite number.
Such a difficulty carried out the K. Popper’s doctrine of falsification, which states that (because a theory can be defined scientific) it is not necessary that it results verifiable; on the contrary, it is needed that its falsity can be proved, i.e. that in the act of definition one may indicate a few events whose verification may prove its falsity (potential falsifier). In any contrary case the theory cannot be scientific, but metaphysic.
Once the potential falsifiers have been pointed out, scientists can be charged of submitting the theory to extreme tests to verify whether it resists or not to the falsification attempt. In the case it will resist, scientists can say that it has not been “demonstrated”, but certainly “corroborated”.
Therefore, a real science is constituted of theories seriously corroborated, and we can note as any corroboration, however serious, will never result definitive, for nobody can exclude on principle that further proofs may lead to an exit in antithesis with all proofs up-to-now performed.
The existence of such plurality of epistemological tendencies makes a confirmation of the actuality of the problem. To be able to formulate a criterion of science, which no further appeals to a presumed absolute metaphysical basis of scientific knowledge, gives the precise sense to objectivity of those acknowledgements and their effective superiority as compared with pre-scientific understandings.
Natural law, positive right, and epistemology are the main philosophies that deal with the process of codification.
In the doctrine of natural law, any theory or law comes out from the observation of the regularity of nature, i.e. there exists a natural order and any codification process moves to the comprehension of the truth from its observation.
In positive right, theories or laws come from the induction of experiments, i.e. reasoning from a part to the whole or from particular to general conclusions.
In current epistemology, theories or laws are just hypothesis. The scientific revolution in mathematics and physics in the early ‘900 demonstrated that science progresses through deep crises and restructuring of its conceptual apparatuses. Therefore, in the contemporary epistemology, the problem of the definition of the criteria of scientific nature is continuously re-proposed.
Bertrand Russel and Rudolph Carnap consider a theory as scientific when all its terms can be connected to observable data through rules of “correspondence”.
Karl Popper, introducing the notion of “falsifiable”, considers a theory as scientific only if it is possible to indicate the events whose verification may prove its falsity. In any contrary case a theory will result undemonstrated but only “corroborated” (opinion supported with certain evidence).
Knowledge can be stated through: (i) Hypothesis; (ii) Theory; (iii) Theorem; (iv) Tautology.
3.1. – Hypothesis
In science a hypothesis is a proposition that is being investigated, which has yet to be proved.
The hypothesis is etymologically a proposition that is stretched under a thesis, which is a proposition that a person advances and offers to maintain by argument. Therefore, a hypothesis can be defined such as an ‘assumption’ made especially in order to draw out and test its logical or empirical consequences.
In order to verify a hypothesis, researchers use for this purpose a statistical technique known as Hypothesis Testing.
The hypothesis that is being tested is termed the null hypothesis. The opposite possible hypothesis, which says ‘The null hypothesis is wrong’, is called the alternative hypothesis.
Two basic situations can occur:
a) The null hypothesis has been rejected, but it is actually true (type 1 error)
b) The null hypothesis has been accepted, but it is actually false (type 2 error)
A good testing procedure is designed when the chance of committing either of these errors is small. Anyway, it often works out that a test procedure, which has a smaller probability of leading to a type 1 error, will also have larger probability of conducting to a type 2 error. Therefore, no single testing procedure is guaranteed to be best.
It is customary in statistics to design a testing procedure such that the probability of type 1 error is less than a specific value (often 5% or 1%). The probability of committing a type 1 error is called the level of significance of the test. Therefore, if a test has been conducted at the 5% level of significance, this means that the test has been designed so that there is a 5% chance of type 1 error, and 95% chance of type 2 error.
The normal procedure in hypothesis testing is to calculate a quantity called a test statistics, whose value depends on the values that are observed in the sample. The test statistic is designed so that if the null hypothesis is true, then the test statistic value will be a random variable that comesfrom a known distribution, such as the standard normal distribution or a t distribution.
After the value of the test statistic has been calculated, that value is compared with the values that would be expected from the known distribution. If the observed test statistic value might plausibly have come from the indicated distribution, then the null hypothesis is accepted. However, if it is unlikely that the observed value could have resulted from that distribution, then the null hypothesis is rejected.
3.2 Theory
As discussed in paragraph 3, in science no theory can be definitively defined as true, consequently no law can stated as a scientific theory, can be defined as true when it is nothing else than a commandment, a rule governing practice, an enforcement of law, or an imposition of power. In these cases a law moves from axioms, and is stated as a theorem.
3.3 Theorem
A theorem is a statement that has been proved, such as the Pythagorean Theorem.
Any theory is a theorem when its assumptions come from axioms.
An axiom is a statement that is assumed to be true without proof. Axiom is a synonym for postulate. For example, the statement “Two distinct points are contained by one and only one line” is a postulate of Euclidean geometry.
Any law is a theorem when its assumptions come from axioms such as (i) commandments, (ii) forces of law and order, (iii) imposition of power. Those statements are axioms because:
(i) Any commandment is an order coming from God.
(ii) Forces of law and order are orders coming from a sovereign government.
(iii) Any imposition of power is an order coming from a stronger party to a weaker party.
3.4 Tautology
A tautology is a sentence that is necessarily true because of its logical structure, regardless of the facts. For example, the sentence “The Earth is flat or else it is not flat” is a tautology.
A tautology does not give you any information about the world, but studying the logical structure of tautologies is interesting.
For example, let r represent the sentence
(p AND q) OR [(NOT p) OR (NOT q)]
The following truth table shows that the sentence r is a tautology:
p q P AND q NOT p NOT q (NOT p)OR (NOT q) r T T T F F F T T F F F T T T F T F T F F T F F F T T T T
All of the values in the last column are true. Therefore, r will necessarily be true, whether or not p or q is true. In words, sentence r says: “Either p and q are both true, or else at least one of them is not true.”
The negation of a tautology is necessarily false: it is called a contradiction.
4. Common Law, Civil Law, and Positive Law
The General Theory of Law distinguishes the Bodies of Law in three basic systems: (i) the Common Law; (ii) the Civil Law; (iii) the positive Law:.
In this work we will try to analyze those systems in the most sterile way, consulting the most current definitions.
4.1 The Common Law
Following the definition given by Prof. Ivamy, Common Law is defined as “The ancient unwritten law of this kingdom.”
The term is used in various senses: –
1. Of the ancient law above mentioned embodied in judicial decisions as opposed to statute law enacted by Parliament.
2. Of the original and proper law of England, formerly administered in the Common Law Courts, i.e. the superior courts of Westminster, and the Nisi Prius Courts, as opposed to the system called Equity, which was administered in the Court of Chancery. Since the Judicature Act 1873 all courts administer law and equity concurrently (see now the Supreme Court Act 1981, s. 49).
3. Of the municipal law of England as opposed to the Roman Civil Law, or other foreign law.
4.2 The Civil Law
Civil Law is defined in Justinian’a Institutes as ‘that law which every people has established for itself; in other words, the law of any given State. But this law is now distinguished by the term municipal law, the term civil law being applied to the Roman civil law (See Corpus Juris Civilis).
Corpus Juris Civilis is the body of Roman Law, published in the time of Justinian, containing:
1. The Institutions or Elements of Roman Law, in five books.
2. The Digest or Pandect, in fifty books, containing the opinions and writings of eminent lawyers.
3. A new Code or collection of Imperial Constitutions, in twelve books.
4. The Novels, or new Constitutions, later in time than the other books, ad amounting to a supplement to the Code.
4.3 The Positive Law
Positive Law is properly synonymous with law properly so called. For every law is put or set by its author. But in practice, the expression is confined to laws set by a Sovereign to a person in a state ob subjection to their author; i.e. to laws enacted by sovereign States, or by their authority, disobedience to which is malum prohibitum.
Chapter II
Definition of Forecasting
2.1 Definition of Forecasting
We can define forecasting as the activity of predicting a future event in quality and quantity. Therefore, we can forecast whether, as well as sales in a company (in quality and quantity).
Certainly, if it is not too difficult to forecast whether in quality (sunny or rainy), it can be practically impossible to forecast the quantity of sun or rain we will have. Nevertheless, as for sales forecast in a company, we can reach a good approximation, and overall we can realize whether the quantity of the phenomenon is enough for our needs.
Therefore, forecasting can be both a qualitative and quantitative study.
Forecasting can be done by means of models based on the past trend of the variable(s) (self-regressive models,) or on the future trend of other variable(s) (regressive models) from which the variable depends.
Models of the former type can be applied when the phenomenon shows an autonomous trend; models of the latter type can be applied when the phenomenon is not autonomous, but dependent on other phenomena.
2.2 Forecasting in Determinism
Determinism is a doctrine, which states that acts of the will, natural events, or social changes are determined by preceding causes. Such a doctrine uses mathematics to find out the limits, dimensions, derivatives, and scopes of any function (e.g., determine a position at sea).
For instance, in neoclassical economic theory, rationality is the maximization of one’s rewards.
From one point of view this is a problem in mathematics: choose the activity that maximizes rewards in given circumstances. In neoclassical economics, any determination can be asserted only under certain conditions, i.e. the ceteris paribus (other things equal) condition. That means: if nothings else changes, then the variables of the function work under the stated condition. But, if the system is sensitive to the initial condition, in other words, if you don’t know exactly in detail every little piece of information, then you have a potentially chaotic system.
In economics we are used to forecast demand function depending on the three basic variables:
(i) need; (ii) spendable income; (iii) tastes. Now, such a system is highly unstable because taste is a variable of a function which depends on selling price in both ascending and descending condition. In other words taste can increase with price increase, and can decrease with price decrease, which thing is in contrast with any economic concept of rationality. In such case any forecast cannot be based on personal assumption or rationality, because uncertainty does not allow identifying rationality with the best reward. Therefore, in such a case determinism is not applicable, and we need to recur to the Game Theory assessment.
2.3 Forecasting in Game Theory
In game theory, the case is more complex, since the outcome depends not only on our own strategies (or the “market conditions,”) but also directly on the strategies chosen by others. Nevertheless we may still think of the rational choice of strategies as a mathematical problem – maximize the rewards of a group of interacting decision makers – and so we again speak of the rational outcome as the “solution” to the game.
Furthermore, the weakness of determinism probably lies in the lack of forecasting the ‘butterfly effect’ in technological changes. That is, there is no way of understanding new logics by means of old logics. In fact, determinism can understand and predict economic behaviour only under well known elements of a function.
Game Theory can be used for better understanding economics under uncertainty.
The Prisoners’ Dilemma has clearly shown as individually rational actions result in both persons being made worse off in terms of their own self-interested purposes. This remarkable result is what has made the wide impact in modern social science, for there are many interactions in the modern world that seem very much like that, from arms races through road congestion and pollution to the depletion of fisheries, the overexploitation of some subsurface water resources, and more recently, migration. These are all quite different interactions in detail, but are interactions in which (we suppose) individually rational action leads to inferior results for each person, and the Prisoners’ Dilemma suggests something of what is going on in each of them.
Therefore, as far as forecasting in Law is concerned, we strongly believe that the Game Theory provides a promising approach to understanding strategic problems of all sorts, and the simplicity and power of the Prisoners’ Dilemma and similar examples make them a natural starting point.
From the Game Theory we can also derive the analysis of zero-sum games, and non-zero sum games.
A game is zero-sum when the worth of the winner is tantamount to the worth of the looser. In such a game there is only a wealth transfer from one player (the looser) to another (the winner).
A lottery is a practical example of a zero-sum gave; in fact, the winner(s) wins the amount lost by the looser(s). Speculation in a Stock Exchange is a zero-sum game.
In a nonzero-sum game all players have a benefit (even or uneven) from the game. A business contract is a non-zero sum game.
Business is commonly based on non-zero sum games, for it works through the logic of Cost/Benefit Analysis. Politics is commonly based on zero-sum games, for it works on power acquisition: to get more power you need to subtract it to someone else.
2.4 Forecasting in Chaos Theory, and Non-linear Dinamics
Chaos is the doctrine that studies systems having the property that a small change in the initial conditions can lead to very large changes in the subsequent evolution of the system. Chaotic systems are inherently unpredictable. The weather is an example; small changes in the temperature and pressure over the ocean can lead to large variations in the future development of a storm system. However, chaotic systems can exhibit certain kinds of regularities.
The Father of Chaos theory, Henri Poincare (1854-1912) in 1908 published Science et Methode that contained one sentence concerning the idea of chance being the determining factor in dynamic systems because of some factor in the beginning that we didn’t know about. All three of these men and their ideas went unnoted because quantum mechanics had disrupted the whole physics world of ideas; and because there were no tools such as ergodic theorems about the mathematics of measure; and because there were no computers to simulate what these theorems prove.
In 1846, the planet Neptune was discovered, causing quite a celebration in the classical Newtonian mechanical world, this revelation had been predicted from the observation of small deviations in the orbit of Uranus. Something unexpected happened in 1889, though, when King Oscar II of Norway offered a prize for the solution to the problem of whether the solar system was stable.
Henri Poincaré submitted his solution and won the prize, but a colleague happened to discover an error in the calculations. Poincaré was given six months to rectify the matter in order to keep his prize. In consternation, Poincaré found there was no solution. Poincaré had found results that upset the accepted view of a purely deterministic universe that had reigned since Sir Isaac Newton lined out linear mathematics. In his 1890 paper, he showed that Newton’s laws did not provide a solution to the “three-body problem”, in other words, how one deals with predictions about the earth, moon and sun. He had found that small differences in the initial conditions produce very great ones in the final phenomena, and the situation defied prediction. Poincaré’s discoveries were dismissed in lieu of Newton’s linear model; one was to just ignore the small changes that cropped up. The three-body problem was what Poincare had to interpret with a two-body system of mathematics. Why was it a problem? He was trying to discover order in a system where none could be discerned.
Poincaré’s negative answer caused positive consequences in the creation of chaos theory. About eighty years later, as early as 1963, Edward Lorenz, using Poincaré’s mathematics, described a simple mathematical model of a weather system that was made up of three linked nonlinear differential equations that showed rates of change in temperature and wind speed. Some surprising results showed complex behaviour from supposedly simple equations; also, the behaviour of the system of equations was sensitively dependent on the initial conditions of the mathematical model. He spelled out the implications of his discovery, saying it implied that if there were any errors in observing the initial state of the system and this is inevitable in any real system, prediction as to a future state of the system was impossible. Lorenz labelled these systems that exhibited sensitive dependence on initial conditions as having the “butterfly effect”: this unique name came from the proposition that a butterfly flapping its wings in Hong Kong can affect the course of a tornado in Texas.
During 1970-71, interest in turbulence, strange attractors and sensitive dependence on initial conditions arose in the world of physics. E. N. Lorenz published a paper, called “Deterministic nonperiodic flow” in 1963 that proved that meteorologists could not predict the weather. Jim Yorke, an applied mathematician from the University of Maryland was the first to use the name Chaos, but actually it was not even a chaos situation, but the name caught on.
A chaotic system is sensitive to initial conditions and causes the system to become unstable.
Cambel identifies chaos as inherent in both the complexity in nature and the complexity in knowledge. The nature side of chaos entails all the physical sciences. The knowledge side of chaos deals with the human sciences. Chaos may manifest itself in either form or function or in both. Chaos studies the interdependence of things in a far-from-equilibrium state. Every open nonlinear dissipative system has some relationship to another open system and their operations will intersect, overlap and converge. If the systems are sensitive to the initial conditions, in other words, you don’t know exactly in detail every little piece of information, and then you have a potentially chaotic system.
Not all systems will be chaotic, but those where a lack of infinite detail is unknown, then these systems have an indeterminate quality about them. You can’t tell what’s going to happen next. They are unpredictable. If these systems are perturbed either internally or externally, they will display chaotic behaviour and this behaviour will be amplified microscopically and macroscopically.
In chaos theory it means: When a complex dynamical chaotic system becomes unstable in its environment because of perturbations, disturbances or ‘stress’, an attractor draws the trajectories of the stress, and at the point of phase transition, the system bifurcates and it is propelled either to a new order through self-organization or to disintegration.
CHAPTER III
Forecasting in Law
3.1 Determinism in Law
In Paragraph 2.2 we have described the fault of economic determinism to correctly forecast demand function because of a lack of information about ‘tastes’, which change continuously with the change of price.
In law we have the same impasse in forecasting and assessing human behaviour, both of non-aggregated and socialized individuals.
In law we can determine the value of an individual or social variable only under the ceteris paribus condition, i.e. only if all other variables do not change.
Economics has clearly justified as such condition cannot work in demand function. Therefore we have to conclude that it cannot work either in social or individual function.
Human behaviour has an infinite number of variables, which can arise in linear or non linear function with a huge number of other variables.
If we can imagine to forecast the function of two or more distinct variables under the ceteris paribus condition, certainly we cannot imagine to trace or assess a function (linear or non-liner) for describing dynamics in human behaviour.
3.2 Uncertainty in Law
The simple example of the Prisoner’s Dilemma clearly shows as the personal rationality can lead to a reward that is not the best.
In order to support the choice, the Game Theory states that the condition of collusion, i.e. the co-operation between the opposite parties, is the only possibility to get the best reward for both parties.
Furthermore, this theory distinguishes the behaviour (Game) of the parties into zero-sum and non-zero-sum games.
A game is zero-sum when the winner’s pay-off is equal to the loser’s pay-out. Inside this game there is no destruction and no generation of wealth; wealth is just transferred.
In a non-zero sum game both parties have a pay-off, even if in different quotas. This kind of game always generates new wealth.
Law cannot determine or forecast any future behaviour if there is no collusion amongst the parties, and the nature of the game (zero-sum/non-zero-sum) is not well understood.
Anyway, if a law can set-up the rules of a zero-sum game, certainly cannot set-up the rules of a non-zero-sum game, because in this case the generation of new wealth (the profit of all parties) is not deterministic, and could be completely intangible or immaterial.
3.3 Non-linear Dinamics in Law
Human societies can be defined as non-linear dynamical systems, for human needs are unlimited in quality, and sensible to initial condition because of instincts, education, and culture.
Further research in non-linear dynamical systems that displayed a sensitive dependence on initial conditions came from Ilya Prigogine, a Nobel-prize winning chemist, who first began work with far-from-equilibrium systems in thermodynamic research. Ilya Prigogines’ research in non-linear dissipative structures led to the concept of equilibrium and far-from equilibrium to categorize the state of a system.
In the physical studies of thermodynamics, Prigogines’ research revealed far-from-equilibrium conditions that led to systemic behaviour different from what was expected by the customary interpretation of the Second Law of Thermodynamics. Phenomena of bifurcation and self-organization emerged from systems in equilibrium if there was disruption or interference. This disruption or interference became the next step to Chaos Theory; it became Chaos/Complexity Theory. Prigogine talked about his theory as if he were Aristotle: a far-from-equilibrium system can go ‘from being to becoming’. These ‘becoming’ phenomena showed order coming out of chaos in heat systems, chemical systems, and living systems.
From Lorenz simulation, René Thom, mathematician, proposed ‘catastrophe theory’, or a mathematical description of how a chaos system bifurcates or branches. Out of these bifurcations came pattern, coherence, stable dynamic structures, networks, coupling, synchronization and synergy. From the study of complex adaptive systems used by Poincaré, Lorenz and Prigogine, Norman Packard and Chris Langton developed theories about the “edge of chaos” in their research with cellular automata. The energy flowing through the system, and the fluctuations, cause endless change which may either dampen or amplify the effects. In a phase transition of chaotic flux, (when a system changes from one state to another), it may completely reorganize the whole system in an unpredictable manner.
Two scientists, physicist Mitchell Feigenbaum and computer scientist Oscar Lanford came up with a picture of chaos in hydrodynamics using Renormalization ideas. They were studying non-linear systems and their transformations. Since then, chaos theory or Nonlinear Science has taken the scientific world by a storm, with papers coming in from all fields of science and the humanities. Strange attractors were showing up in biology, statistics, psychology, and economics and in every field of endeavor.
Properties of complexity
Complexity or the edge of chaos yielded self-organizing, self-maintaining dynamic structure that occurred spontaneously in a far-from-equilibrium system. Complexity had no agreed upon definition, but it could manifest itself in our everyday lives. Intense work is being done on the implications of complexity at the Santa Fe Institute in New Mexico. Here Ph. D.’s from many fields use cross-disciplinary methods to show how complexity in one area might link to another.
Erwin Laszlo, from the Vienna International Academy, has the most interesting statement about Complexity:
In fact, of all the terms that form the lingua franca of chaos theory and the general theory of systems, bifurcation may turn out to be the most important, first because it aptly describes the single most important kind of experience shared by nearly all people in today’s world, and second because it accurately describes the single most decisive event shaping the future of contemporary societies.
Bifurcation once meant splitting into two or more forks. In chaos theory it means: When a complex dynamical chaotic system becomes unstable in its environment because of perturbations, disturbances or ‘stress’, an attractor draws the trajectories of the stress, and at the point of phase transition, the system bifurcates and it is propelled either to a new order through self-organization or to disintegration.
The phase transition of a system at the edge of chaos began with the studies of John Von Neumann and Steve Wolfram in their research on cellular automata. Their research revealed the edge of chaos was the place where the parallel processing of the whole system was maximized. The system performed at its greatest potential and was able to carry out the most complex computations. At the bifurcation stage, the system was in a virtual area where choices are made–the system could choose whatever attractor was most compelling, could jump from one attractor to another–but it was here at this stage that forward futuristic choices were made: this was deep chaos. The system self-organized itself to a higher level of complexity or it disintegrated. The phase transition stage may be called the transeunt stage, the place where transitory events happen. Transeunt is a philosophical term meaning that there is an effect on the system as a whole produced from the inside of the system having a transitory effect; and, a scientific term in that it is a nonperiodic signal of sudden pulse or impulse.
After the bifurcation, the system may settle into a new dynamic regime of a set of more complex and chaotic attractors, thus becoming an even more complex system that it was initially. Three kinds of bifurcations happen:
1. Subtle, the transition is smooth.
2. Catastrophic, the transition is abrupt and the result of excessive perturbation.
3. Explosive, the transition is sudden and has discontinuous factors that wrench the system out of one order and into another. Per Bak with his co-researchers Chao Tang and Kurt Wiesenfeld reckons nature abiding on the edge of chaos or what they call ‘self-organized criticality’.
Our daily encounter with Chaos/Complexity is seen in traffic flow, weather changes, population dynamics, organizational behaviour, shifts in public opinion, urban development and decay, cardiological arrhythmias, epidemics. It might be found in the operation of the communications and computer technologies on which we rely, the combustion processes in our automobiles, cell differentiation, immunology, decision making, the fracture structures, and turbulence.
Here are a few of the statements that Cambel makes about the ubiquity of chaos:
1. Complexity can occur in natural and man-made systems, as well as in social structures and human beings.
2. Complex dynamical systems may be very large or very small, indeed, in some complex systems, large and small components live cooperatively.
3. The system is neither completely deterministic nor completely random, and exhibits both characteristics.
4. The causes and effects of the events that the system experiences are not proportional.
5. The different parts of complex systems are linked and affect one another in a synergistic manner.
6. There is positive and negative feedback. The level of complexity depends on the character of the system, its environment, and the nature of the interactions between them.
3.4 Relations of Order and Equivalence in a Body of Law
A – Relation
A relation is a set of ordered pairs. The first entry in the ordered pair can be called x, and the second entry can be called y.
For example, {(1, 0), (1, 1), (1, -1), (-1, 0) is an example of a relation.
A function is also an example of a relation. A function has the special property that, for each value of x, there is a unique value of y. This property does not have to hold true for a relation. The equation of a circle x²+y²=r² defines a relation between x and y, but this relation is not a function because for every value of x there are two values of y : √r²-x² and -√ r²-x².
In mathematics an order is a set on which are defined relations of order (which produce a system), or relations of equivalence (which produce an arrangement).
B – Order
“To put in order” certainly is a work that allows to organize anyway, and in a certain sense, a set of objects, and/or concepts.
Mathematics identifies in an “order” all those properties that characterize all relations that can produce the order itself: i.e., all those properties that any relation must possess in order that the connections of the elements of the set, which is generated, constitute an order of the set itself.
The technical definition of this kind of relation is “relation of order”.
C – Equivalence
Two logic sentences are equivalent if they will always have the same truth value. For example, the sentence “p→q” (“IF p THEN q”) is equivalent to the sentence “(NOT q) → (NOT p).”
Considerations
If we lived in a completely deterministic world there would be no surprises and no decision making because an event would be caused by certain conditions that could lead to no other outcome. Nor could we consider living in a completely random world for there would be, as Cambel says, “no rational way of reaching a well-reasoned decision”. What kind of answers do we get when we recognize that a system is indeed unstable and that it is indeed an example of chaos at work?
The American Association for the Advancement of Science published nineteen papers presented at their 1989 meeting that was devoted entirely to chaos theory usage on such ideas as chaos in dynamical systems, biological systems, turbulence, quantized systems, global affairs, economics, the arms race, and celestial systems. Stambler reported that the Electric Power Research Institute was considering the applications of chaos control in voltage collapses, electromechanical oscillations, and unpredictable behaviour in electric grids. Peng, Petrov and Showalter were studying the usefulness of chaos control in chemical processing and combustion. Ott, Grebogi, and Yorke cited the many purposes of chaos and said it might even be necessary in higher life forms for brain functioning. Freeman studied just such brain functions related to the olfactory system and concluded that indeed chaos “affords an opportunity to exploit further these manifestations of brain activities”.
Not only are research papers prolific, but an array of books are being published monthly on chaos applications. Bergé, Pomeau, and Vidal assert that chaos theory has “great predictive power” that allows an understanding of the overall behaviour of a system. Kauffman uses the self-organization end of chaos to assert that nature itself is spontaneous; Cramer claimed that by overcoming the objections to mysticism and scientism that the “theory of fundamental complexity is valid” (this will most likely turn into a book–so many researchers refer to it). This perhaps gives some idea as to far reaching applications of chaos theory in the scientific areas.
A few last words about the edge of chaos will be added here because they will allow you to see how research has gone from linear science to nonlinear applications. Wentworth d’Arcy Thompson, in his book On Growth and Form used transformations of coordinates to compare species of animals. Comparing one form of a fish, as an example, with another could be shown on a coordinate map and used to show how they differ and how they were alike. The same kind of transformation coordinate map could compare chimpanzee skulls to human skulls. Where Thompson used order to compare the workings of nature, Stuart Kauffman, in his book The Origins of Order: Self-Organization and Selection in Evolution took the next step in studying nature. He was seeking the origins of order in complex systems that were chaotic. His research is rife with examples of the interconnectedness of selection and self-organization. The essence of his findings are that much of the order seen in organisms stems from spontaneous generation from systems operating at the edge of chaos, or in other words, systems that are unstable purposely. Thompson applied physics to biology, and now Kauffman is applying chaos /complexity theory to biology. Cramer sees the interaction of order and disorder as a necessity in nature. “In nature, then, forms are not independent and arbitrary, they are interrelated in a regular way…And even organs arising to serve new functions develop according to the principle of transformation. At the branch points where something new emerges, disruptions of order are in fact necessary; abrupt phase changes occur. Indeed, the interplay of order and chaos constitutes the creative potential of nature.”
The great French mathematician Henri Poincaré first noticed the idea that many simple nonlinear deterministic systems can behave in an apparently unpredictable and chaotic manner. Other early pioneering work in the field of chaotic dynamics were found in the mathematical literature by such luminaries as Birkhoff, Cartwright, Littlewood, Levinson, Smale, and Kolmogorov and his students, among others. In spite of this, the importance of chaos was not fully appreciated until the widespread availability of digital computers for numerical simulations and the demonstration of chaos in various physical systems. This realization has had broad implications for many fields of science and only in the past decade or so the field has undergone explosive growth. The ideas of chaos have been very fruitful in such diverse disciplines as biology, economics, chemistry, engineering, fluid mechanics, physics, just to name a few. As you can see, the Chaos Complexity Theory can become a real research tool in many fields. Metaphorically it can be used outside the scientific field. This author plans to apply this theory to religious research.
Formulata in origine (1856) dal fisicoR. Clausius, la legge dell’entropia è stata introdotta nella teoria economica da N. Georgescu-Roegen per sottolineare che i processi economici non sono “circolari” ma irreversibili, e che lo stock di risorse utilizzabili tende a esaurirsi (nozioni riprese da alcune correnti ecologiste).
Nella teoria dell’informazione per entropia si intende la “quantità” di informazione (quest’ultima è minor e quanto più alta è l’entropia).
Nicholas Georgescu-Roegen (Costanza 1906) economista romeno Formatosi nella scuola di Schumpeter, ha applicato le leggi della termodinamica all’economia Entropia e processo economico (1971).
Chapter IV
Second Conclusion: Law as Scientific Theory
4.1 Commandments as Axioms
In Mathematics and Logic a theorem is every proposition for which a proof, which moves from a group of axioms, exists.
In such case, we can talk of a notion related to a well determined axiomatic system. If such a system is a formal system in the usual Logic of first order, then the syntax above enunciated coincides with the semantic syntax of the theorem as a logic consequence of the set of axioms.
Therefore, any Religion, intended as a set of laws stated as Commandments, is a theorem.
Every body of law based on religious prescriptions is therefore a theorem in the same sense as the major theorems of mathematics and logic, such as the Prime Number Theorem, the Fundamental Theorem of Algebra, the Fundamental Theorem of Arithmetic, the Fundamental Theorem of Calculus, etc.
Therefore, any religion is a body of law.
The major differences between religion and a body of law are the “universality” and “free will” of the former, and the “territoriality” and “imposition” of the latter.
The universality of any religion means that everyone everywhere is subject to the commandments of his/her religion. The free will of any believer consists in the possibility of accepting or rejecting the musts of the commandments.
The territoriality of a body of law indicates that the sovereignty of the Institution is confined to a certain territory, while imposition indicates that no opportunity for free will is given to anybody.
The concept of “State” has to be considered an axiom too, because no theory about any State has ever been proved; nevertheless, we use the concept as if it were a theorem, and we assume that no society can exist without a State.
4.2 Customs as Axioms
Customs have to be considered as axioms, because of their acceptance in practice.
The Law Merchant (Lex Mercatoria) is the most commonly known body of law based on customary axioms.
4.3 Principles Governing Practice as Axioms or Power Imposition
Principles governing practice (as in a profession of the arts, or in an administrative regulation) can be defined as axioms when they are universally accepted. Otherwise they must be considered tautologies or impositions of power.
4.4 Law as Tautology
A large number of Constitutions are tautologies. Even the body of law (the Constitution) of the European Union is a tautology, because its pre-eminence over the Member States’ bodies of law derives from the logic of the connectives used in its rules and judgements rather than from any real will of the European population to create such a pre-eminence.
4.5 Law as Innovation and Engineering of Natural Processes
CHAPTER V
First Conclusion: Law is Power Imposition
4.1 Law as Commandments of a Revealed Knowledge
4.2 Law as Principles Governing Practice
4.3 Law as Power Imposition
5.5 – Introduction to Nomology
Abstract
Nomology is the study of human lawmaking (theorisation), which controls and verifies the correspondence of human behaviour to correct theories.
In other words, the statement of true premises and the application of valid arguments lead to conclusions that must be true.
With such a conclusion we side neither with Natural Law, nor with Positive Law, nor with the polyvalent logic of Karl Popper. We simply assume that any human law may intervene if, and only if, it is sure to produce a benefit or an improvement to the natural order; otherwise no law (control brought about by enforcing rules) is needed, or can be admitted.
In this work we have tried to investigate the process of codification with reference to the major aspects of nature, science, religion, societies, and the other acts or facts taken into consideration by a body of law.
– Definitions
Nomology can be defined as the discipline that deals with the study of theories and laws. The word nomology is a neologism formed from two Greek terms: nomos, which indicates a theory, a law, the government, or the administration of something, and logos, which indicates ‘the study of’.
– Conclusion
In the Abstract we defined Nomology as the study of human lawmaking (theorisation) that controls and verifies the correspondence of human laws to a correct theory, i.e. to the statement of true premises and the application of a valid argument.
Therefore, with a law we can state a relation of equivalence (natural law) when we describe a natural process, identifying a strict equivalence, or agreement, between the statement of the theory and the condition of reality. With a positive law we usually state a relation of order; that is, we try to organize a set of concepts in a certain sense, imposing an artificial order on the elements of the set. In other words, with any positive law (relation of order) we tend to innovate on the natural order of a set, because we believe we can improve its efficiency.
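The distinction between a relation of equivalence and a relation of order can be made concrete with a small sketch. The following Python code (the element set and the two relations are hypothetical examples, not from the original text) tests the defining properties of each kind of relation:

```python
def is_equivalence(rel, elems):
    """Reflexive, symmetric, and transitive."""
    return (all((a, a) in rel for a in elems)
            and all((b, a) in rel for (a, b) in rel)
            and all((a, d) in rel
                    for (a, b) in rel for (c, d) in rel if b == c))

def is_order(rel, elems):
    """Reflexive, antisymmetric, and transitive (a partial order)."""
    return (all((a, a) in rel for a in elems)
            and all(a == b or (b, a) not in rel for (a, b) in rel)
            and all((a, d) in rel
                    for (a, b) in rel for (c, d) in rel if b == c))

elems = {1, 2, 3}
same_parity = {(a, b) for a in elems for b in elems if (a - b) % 2 == 0}
leq = {(a, b) for a in elems for b in elems if a <= b}
print(is_equivalence(same_parity, elems))  # True: a relation of equivalence
print(is_order(leq, elems))                # True: a relation of order
```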
Ultimately, consciously or unconsciously, each time we issue a positive law we innovate on reality, because we believe we can be more efficient than reality itself.
Chapter V
Behaviours “Secundum Legem; Contra Legem; Praeter Legem”
5 – Innovation and Law
In a state of nature, human beings live through consistent patterns or “regularities” in the way ecosophic systems evolve over time. We can articulate these patterns in the form of theories and sets, as follows:
5.1 – Theory of Completeness of Parts
Ecosophy arises as the result of the synthesis of previously separate matters (disciplines) into a single whole. In order to exist and be viable, the system must include three basic sets:
Demand Set
Production Set
Non-Rational Set.
Each set is a closed set. If any of these sets is missing or inefficient, to that extent the ecosophic system is unable to survive and prevail against its competitor systems (i.e., those systems which impose power, e.g., the Political, Military, Monopolistic, and Violence Systems).
5.2 – Theory of Entropy and Energy Conductivity
An Ecosophic System evolves in the direction of increasing efficiency in the transfer of energy from outside to inside. This transfer can take place through a condition or state that can be called entropy, as in Physics (it is fitting to use the same term, because both indicate the same phenomenon). The higher the entropy, the higher the conductivity; and the higher the conductivity, the lower the enthalpy.
Entropy can be described as the thermodynamic quantity that characterizes the tendency of closed systems (i.e., those systems which exchange neither matter nor energy with the surrounding environment) to evolve toward maximum equilibrium. Entropy is the quantity that signifies the non-reversibility of natural phenomena, as it is an index of energy degradation: energy and matter degrade as entropy increases, thus becoming unusable.
N. Georgescu-Roegen was the first to use the theory of entropy in Economics, in order to emphasize that economic processes are not “circular” but irreversible, and that the stock of natural resources tends to exhaust itself (this theory is also used by major ecologists).
In Information Theory, the term entropy is used for the “quantity” of information (the higher the entropy, the lower the information).
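As an illustration of the information-theoretic quantity just mentioned, here is a minimal Python sketch (not part of the original text) that computes Shannon entropy for a discrete probability distribution; note that conventions differ on whether entropy is read as information content or, as in the text’s formulation, as its lack.

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain (1 bit); a biased coin less so.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))  # ~0.469
```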
In Physics any entropy increase indicates the system’s passage to a state of greater disorder.
Imagine, for example, the passage of water from the solid state to the liquid state: in the solid state, molecules are tied to each other in the ice crystal lattice (and thus easier to identify in a fixed position), whilst in the liquid state, molecules, subject to weaker cohesion forces, are stirred by a more intense thermal motion, that is, their arrangement is more irregular. In order to pass from the solid to the liquid state, the system has to absorb heat (energy, enthalpy) at constant temperature; therefore its entropy variation shall be positive, i.e., entropy increases in correspondence with the passage to a phase characterized by greater disorder.
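A worked version of this example (a sketch, using the standard enthalpy of fusion of water, about 6.01 kJ/mol, and the melting point 273.15 K) confirms that the entropy variation is positive:

```python
# Entropy change for melting one mole of ice at constant temperature:
# dS = dH_fus / T  (heat absorbed reversibly, divided by temperature).
delta_H_fus = 6010.0  # J/mol, standard enthalpy of fusion of water
T = 273.15            # K, melting point of ice
delta_S = delta_H_fus / T
print(f"dS = {delta_S:.1f} J/(mol*K)")  # ~22.0 J/(mol*K), positive as expected
```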
We appeal to entropy as the natural chaos, the microscopic disorder of a system, which allows enthalpy (Information, Culture, etc.) to be acknowledged by, and transferred to, the ecosophic system (i.e., we take entropy and enthalpy to stand in the reverse of the relation given by Information Theory).
This transfer can take place through a state of greater or lesser entropy, and the entropy level is the measure of transfer efficiency. On a very personal and subjective scale of entropy, we consider the U.S. the highest-entropy system, and Australia the lowest.
5.3 – Theory of Ideal Efficiency
An Ecosophic System evolves in such a direction as to increase its degree of efficiency. Efficiency is defined as the sum of the system’s benefit effects, Bi, divided by the sum of its cost effects, Cj:
Efficiency = E = ΣBi / ΣCj
Benefit effects include all the valuable results of the system’s functioning. Cost effects include both individual and system costs.
Taking this trend to its limit, we can say that Ideal Efficiency is attained when the Bi are at a maximum and the Cj at a minimum. The theory thus states that, as the system evolves, the sum of the Bi trends upwards and the sum of the Cj trends downwards.
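A minimal sketch of this efficiency quotient, with purely hypothetical benefit and cost figures, shows how the ratio rises as costs fall:

```python
def efficiency(benefits, costs):
    """Efficiency E = sum of benefit effects / sum of cost effects."""
    return sum(benefits) / sum(costs)

# The same hypothetical system before and after reducing its cost effects.
print(efficiency([100, 40], [50, 20]))  # 2.0
print(efficiency([100, 40], [40, 15]))  # ~2.55: lower costs, higher E
```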
From Mechanics we can borrow, as stated by Stan Kaplan, that “a technical system evolves in such a direction as to increase its degree of ideality”.
Chaos Theory supports the aforementioned statement through the Theory of “Strange Attractors”.
From Economics we can assume the following theories:
Cost/Benefit Analysis
Profit Maximization
Scarcity (as a prerequisite to any economic behaviour)
The Cost/Benefit ratio indicates the efficiency of any business, i.e., of any human action. In effect, any benefit can be material or immaterial, real or presumed. Therefore, a benefit is a very personal appreciation, which can be related to tastes, ethics, religion, ideals, and/or any further material and/or immaterial aspect. Efficiency requires that in every human action benefits must always be greater than costs. Otherwise we have to admit that our action is useless, and that it cannot be communicated to, and/or exchanged with, anybody else. Therefore, benefit is equal to utility.
Profit Maximization indicates the relationship we can trace between the cost and the benefit of any human exchange. That is, if we want to increase the efficiency of our action, we need to increase benefits (if we can); otherwise we need to reduce costs.
In any business, the producer can increase the selling price, which is the benefit of his business (if he can); otherwise he has to reduce his production costs. On the other hand, consumers have to increase the benefits of the product (if they can); otherwise they need to reduce the cost (selling price) of the product. Therefore, in any exchange we can see bargaining over the selling price, which is at the same time a benefit for the producer and a cost for the consumer. Only consumers really know the benefit they can get from any product, precisely because benefit is a very personal appreciation.
Scarcity is a prerequisite to any exchange, because in the case of a free product there is no cost, which is contrary to any efficiency (Cost/Benefit ratio) analysis.
In effect, if we admit the possibility of satisfying a need for free, we must admit that somebody else has worked for free in order to produce the product that we have consumed. In this case we have exploited another person to satisfy our need, which is not ethical at all.
5.4 – Theory of Harmonization of Rhythms
Dynamics can be visualized in terms of geometric shapes called attractors. If you start a dynamical system from some initial point and watch what it does in the long run, you often find that it ends up wandering around on some well-defined shape in phase space.
A system that settles down to a steady state has an attractor that is just a point.
A system that settles down to repeating the same behaviour periodically has an attractor that is a closed loop.
That is, closed loops correspond to oscillators. The butterfly effect implies that the detailed motion on a strange attractor cannot be determined in advance. But this doesn’t alter the fact that it is an attractor.
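These three kinds of attractor, a point, a closed loop, and a strange attractor, can be glimpsed in one of the simplest nonlinear systems of all, the logistic map; the parameter values in this sketch are standard illustrative choices, not taken from the text:

```python
def logistic_tail(r, x0=0.3, n=1000, keep=6):
    """Iterate the logistic map x -> r*x*(1-x) and return the tail of the
    trajectory, which sits on (or very near) the attractor."""
    x = x0
    for _ in range(n):          # discard the transient
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):       # sample the long-run behaviour
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print(logistic_tail(2.8))  # one repeated value: a point attractor
print(logistic_tail(3.2))  # two alternating values: a closed loop
print(logistic_tail(3.9))  # no repetition: motion on a strange attractor
```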
In his 1935 article “Synchronous Flashing of Fireflies” in the journal Science, the American biologist Hugh Smith provides a compelling description of the phenomenon:
Imagine a tree thirty-five to forty feet high, apparently with fireflies on every leaf, and all the fireflies flashing in perfect unison at the rate of about three times in two seconds, the tree being in complete darkness between flashes. Imagine a tenth of a mile of river front with an unbroken line of mangrove trees with fireflies on every leaf flashing in synchronism, the insects on the trees at the end of the line acting in perfect unison with those between. Then, if one’s imagination is sufficiently vivid, he may form some conception of this amazing spectacle.
Why do the flashes synchronize? asks Ian Stewart.
In 1990, Renato Mirollo and Steven Strogatz showed that synchrony is the rule for mathematical models in which every firefly interacts with every other. Again, the idea is to model the insects as a population of oscillators coupled together, this time by visual signals. The chemical cycle used by each firefly to create a flash of light is represented as an oscillator. The population of fireflies is represented by a network of such oscillators with fully symmetric coupling, that is, each oscillator affects all of the others in exactly the same manner. The most unusual feature of this model, which was introduced by the American biologist Charles Peskin in 1975, is that the oscillators are pulse-coupled. That is, an oscillator affects its neighbours only at the instant when it creates a flash of light.
The mathematical difficulty is to disentangle all these interactions, so that their combined effect stands out clearly.
Mirollo and Strogatz proved that no matter what the initial conditions are, eventually all the oscillators become synchronized. The proof is based on the idea of absorption, which happens when two oscillators with different phases “lock together” and thereafter stay in phase with each other. Because the coupling is fully symmetric, once a group of oscillators has locked together, it cannot unlock. A geometric and analytic proof shows that a sequence of these absorptions must occur, which eventually lock all the oscillators together.
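A minimal time-stepped simulation in the spirit of the Peskin and Mirollo-Strogatz model (a sketch, not their exact formulation: the population size, coupling strength, step size, and concave state function below are all illustrative assumptions) shows the phases pulling together through pulse coupling and absorption:

```python
import math, random

N, eps, dt = 10, 0.05, 0.001   # oscillators, coupling strength, time step
random.seed(1)
phase = [random.random() for _ in range(N)]   # initial phases in [0, 1)

def state(phi):
    # Concave, increasing state function (concavity is what the
    # Mirollo-Strogatz synchronization proof requires).
    return math.log(1 + (math.e - 1) * phi)

def phase_of(x):
    # Inverse of state(): translate a bumped state back into a phase.
    return (math.exp(x) - 1) / (math.e - 1)

for _ in range(200000):
    phase = [p + dt for p in phase]                  # uniform advance
    fired = [i for i, p in enumerate(phase) if p >= 1]
    if fired:
        for i in range(N):
            if i in fired:
                phase[i] = 0.0                       # fire and reset
            else:
                # Pulse coupling: each flash bumps everyone else's state.
                x = min(1.0, state(phase[i]) + eps * len(fired))
                # Absorption: a bump to threshold locks this oscillator
                # in phase with the ones that just fired.
                phase[i] = 0.0 if x >= 1.0 else phase_of(x)

print(f"final phase spread: {max(phase) - min(phase):.4f}")  # ~0: synchrony
```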
The big message in both locomotion and synchronization is that nature’s rhythms are often linked to symmetry, and that the patterns that occur can be classified mathematically by invoking the general principles of symmetry breaking. The principles of symmetry breaking do not answer every question about the natural world, but they do provide a unifying framework, and often suggest interesting new questions. In particular, they both pose and answer the question: why these patterns and not others?
The lesser message is that mathematics can illuminate many aspects of nature that we do not normally think of as being mathematical. This is a message that goes back to the Scottish zoologist D’Arcy Thompson, whose classic but maverick book On Growth and Form set out in 1917 an enormous variety of more or less plausible evidence for the role of mathematics in the generation of biological form and behaviour. In an age when most biologists seem to think that the only interesting thing about an animal is its DNA sequence, it is a message that needs to be repeated, loudly and often.
6. – CONCLUSION: Politics as Positive Law maker vs. Business as Customary Law maker.
Commonly we understand politics in the following two main aspects:
a) Administration of Power
b) Administration of Res Publica.
Politics intended as “Administration of Power” can be classified, in Game Theory, as a zero-sum game, for it transfers power from someone to someone else; i.e., in order to become more powerful, anyone needs to reduce the power (freedom) of someone else.
Politics cannot generate additional new power; therefore every unit of power acquired by the winner is tantamount to the same unit of power lost by the loser.
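As a small illustration (with purely hypothetical payoffs), a two-player game is zero-sum exactly when the players’ payoff matrices cancel cell by cell, so that whatever one side wins, the other loses:

```python
# Hypothetical payoff matrices for two players over the same two strategies.
payoff_A = [[ 3, -1],
            [-2,  4]]
payoff_B = [[-3,  1],
            [ 2, -4]]

# The game is zero-sum if every pair of corresponding payoffs sums to zero.
zero_sum = all(a + b == 0
               for row_a, row_b in zip(payoff_A, payoff_B)
               for a, b in zip(row_a, row_b))
print(zero_sum)  # True: total "power" is conserved, merely transferred
```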
Politics intended as “Administration of the Res Publica” relies on taxes, levies, and/or other impositions to produce its effects; for this reason it, too, is a zero-sum game. No efficiency is required in such a game, and no administrator is held responsible for the administration of the “Res Publica.”
To be responsible and effective, the administration of the Res Publica has to be based on the creation of new wealth. Only in this case can individuals be convinced of, and involved in, any common policy dictated by Politics.
By Business we mean any busy-ness, as opposed to busy-less.
Therefore, business is any activity based on the exchange between two or more parties.
Exchange exists, and is free, only when all parties involved in a deal agree to convene a certain reciprocal behaviour.
From this premise we can derive that, if the exchange is agreed, then each party has expressed his/her will.
Any exchange (deal) can be expressed as a “must” that can be substantiated in three basic contents:
a) Punishment
b) Remuneration
c) Conditioning
“Punishment” is any deal under the terms: “Do that, otherwise you will be condemned, punished, forced, etc.”
Such a deal is a restriction of freedom, which includes the concept that a winner and a loser exist.
Therefore, it can be represented as a zero-sum game. No punishment can be defined as business.
“Remuneration” is any deal stated under the terms: “If you do that, then you will get remuneration, a pay-off, a reward, etc.”
In such a deal all parties get advantages, even if these differ for each single party. Every party enters the agreement because they consider that the benefit they get from it is greater than the cost they have to undertake. Therefore, in any case, they make a profit.
Even if the agreement strongly limits the freedom of someone, everyone gets, or supposes they get, a profit from the deal; therefore we can classify this event as a non-zero-sum game.
“Conditioning” is any deal under the terms: “Do that, because I suggest you do it, and you believe it gives you an advantage.”
Any conditioning statement does not need to be proved; therefore, it can be classified as an axiom.
Theorems work because people strongly believe they are true; consequently, everybody acts that way.
Exchange can be defined as “Correct Business” when all parties involved in a deal have the opportunity of entering other possibilities too, and when all parties have, more or less, the same strength in the deal.
Therefore, monopolies and monopsonies, oligopolies and oligopsonies, cannot be defined as correct businesses, because in these forms of market a weaker party always exists.
Monopolistic competition and pure competition are the only possibilities of correct business we can now imagine (even if the latter is an abstraction more than a real possibility).
In all forms of current government, politics is unable to establish agreements of the kind business can. Therefore, we must conclude that politics is not the appropriate doctrine for solving problems in human societies.
Laws issued by Politics are purely impositions of power, i.e. zero-sum games.
They do not target the “strange attractor(s)”; therefore they coerce human behaviour more and more, in a useless attempt to reach a false ideal efficiency.
Enrico Furia.
Dictionary of Mathematics Terms, 2nd ed., Barron’s Educational Series, New York, 1987
E.R. Hardy Ivamy, Mozley & Whitley’s Law Dictionary, Butterworths, London, 1988
for instance:
∧ conjunction (AND)
∨ disjunction (OR)
→ implication, as in a → b (IF a THEN b)
~p the negation of a proposition p
IFF, ↔ equivalence (IF AND ONLY IF)
etc.
According to much current research, even this theorem could be false.
E.R. Hardy Ivamy, ditto
E.R. Hardy Ivamy, ditto
E.R. Hardy Ivamy, ditto
Douglas Downing, Dictionary of Mathematics Terms, 2nd ed. Barron’s Ed. Series, New York, 1995.
Douglas Downing, ditto
Ecosophy is intended as a global knowledge, which includes all aspects of the life of living beings. For a further description, see, by the same author, “Introduction to the Ecosophic Set”, www.worldbusinesslaw.net
Nicholas Georgescu-Roegen (Constanța, 1906), Romanian economist, who first applied the laws of thermodynamics to economics. See The Entropy Law and the Economic Process (1971)
Stan Kaplan, An Introduction to TRIZ, Ideation International Inc., 1996
Ian Stewart, Nature’s Numbers, Basic Books, New York, 1995
#aneddotica magazine#business#enrico furia#ethics#finance#Italia#italians#italy#magazine#news#politics
0 notes
Text
The nature of consciousness, Thomas Metzinger & Sam Harris:
LIGHT ON DRUGS 24/11/2017
What Is It Like to Be a Bat?
The Ego Tunnel - Thomas Metzinger
Nomologically- Under the laws of nature in our universe.
“There could be logically possible worlds, where the laws of nature do not hold, consciousness determined by the brain may only be possible in our world. No inner perspective to consciousness.” Thomas Metzinger.
Illusion- A sensory misrepresentation of something where an outer stimulus actually exists.
Hallucination- Something where there is no stimulus and you still get a misrepresentation.
The sense of selfhood is only partly a sensory experience.
Introspective self model- We have this robust misrepresentation of trans-temporal identity.
No one ever was or had a self.
Self model theory of subjectivity
Being no one (book)- “You have no self- but you have a self model active in your brain, a naturally evolved representational structure and that is transparent. Transparent means that you cannot experience it as a representation, that is right now, you are identifying yourself with the content of your own self model and you are completely glued into it, as an organism.
So what I’ve been interested in is this phenomenology of identification, how is the attachment created, the attachment to a thought…”
What is the simplest form of selfhood? Bodily self-identification. What can dissolve this? The thinker of thoughts can go with meditation or hallucinogens, but the bodily sensation remains.
Unless this goes too, the sensory experience of the body, which is only part of the sense of selfhood, will still be attached to a selfhood. Even when dualism is dismissed, the experience of the body remains attached to the introspective self model.
0 notes
Text
February 9, 1619 VANINI Lucilio Vanini, who in his works styled himself Julius Caesar Vanini, was an Italian philosopher, physician and free-thinker who was one of the first significant representatives of intellectual libertinism. He was among the first modern thinkers who viewed the universe as an entity governed by natural laws (nomological determinism). He was also the first literate proponent of the thesis that humans evolved from apes.

In 1616 Vanini completed the second of his two works, "De Admirandis", and got it approved by two theologians at the Sorbonne. The work was published in September in Paris. It was dedicated to François de Bassompierre, a powerful man at the court of Marie de' Medici. The work was immediately successful among those aristocratic circles populated by young spirits who looked with interest to the cultural and scientific innovations that came from Italy. The "De Admirandis" became a kind of manifesto for cultural free spirits, giving Vanini a chance to stay safe in circles close to the French court. However, a few days after the publication of the work, the two theologians at the Sorbonne who had expressed their approval were presented to the Faculty of Theology in formal session, and the outcome was a de facto ban on the circulation of the text.

Now unwelcome in England, unable to return to Italy, and threatened by some circles of French Catholics, Vanini saw his room for manoeuvre shrinking and his chances of finding a stable place in French society failing. Fearing that a court case would be started against him, he fled and went into hiding at Redon Abbey in Brittany, where Abbot Arthur d'Épinay de Saint-Luc acted as his protector. But other factors gave cause for concern. In April 1617 Concino Concini (favorite of Marie de' Medici) was killed in Paris, giving rise to a wave of hostility to Italian residents at court. In the following months a mysterious Italian named Pompeo Uciglio appeared in some cities of Guyenne, then the Languedoc, and finally Toulouse. Duke Henri II de Montmorency (protector of the esprits forts of the time) was the governor of this region and seemed to grant protection to the fugitive, who still continued to keep carefully hidden. The presence of this mysterious character in Toulouse did not, however, pass unnoticed, and attracted the suspicions of the authorities. In August 1618 he was apprehended and interrogated. In February 1619 the Parliament of Toulouse found him guilty of atheism and blasphemy and, in accordance with the regulations of the time, his tongue was cut out, he was strangled, and his body was burned. After the execution it emerged that the stranger was Vanini.
0 notes