
Ascriptions of mental states to oneself and others give rise to many interesting logical and semantic problems. This book presents an original account of mental state ascriptions that are made using intensional transitive verbs such as ‘want’, ‘seek’, ‘imagine’, and ‘worship’. It offers a theory of how such verbs work that draws on ideas from natural language semantics, philosophy of language, and aesthetics.

Wittgenstein's philosophical career began in 1911 when he went to Cambridge to work with Russell. He compiled the Notes on Logic two years later as a kind of summary of the work he had done so far. Russell thought that they were ‘as good as anything that has ever been done in logic’, but he needed Wittgenstein himself to explain them to him. Without the benefit of Wittgenstein's explanations, most later scholars have preferred to treat the Notes solely as an interpretative aid in understanding the Tractatus (which draws on them for material), rather than as a philosophical work in their own right. This book demonstrates the philosophical and historical importance of the Notes. By teasing out the meaning of key passages, it shows that they contain many of the most important insights of the Tractatus. It discusses in detail how Wittgenstein arrived at these insights by thinking through ideas he obtained from Russell and Frege. And it uses a blend of biography and philosophy to illuminate the methods Wittgenstein used in his work. The book features the complete text of the Notes in a critical edition, with a detailed discussion of the circumstances in which they were compiled.

This book presents a philosophical introduction to set theory. Anyone wishing to work on the logical foundations of mathematics must understand set theory, which lies at its heart. The book offers an account of cardinal and ordinal arithmetic, and the various axiom candidates. It discusses in detail the project of set-theoretic reduction, which aims to interpret the rest of mathematics in terms of set theory. The key question here is how to deal with the paradoxes that bedevil set theory. The book offers a simple version of the most widely accepted response to the paradoxes, which classifies sets by means of a hierarchy of levels. The book interweaves a presentation of the technical material with a philosophical critique. It does not merely expound the theory dogmatically but at every stage discusses in detail the reasons that can be offered for believing it to be true.

This book is a critical examination of the astonishing progress made in the philosophical study of the properties of the natural numbers from the 1880s to the 1930s. It reassesses the brilliant innovations of Frege, Russell, Wittgenstein, and others, which transformed philosophy as well as the understanding of mathematics. The book argues that because the problem of arithmetic participates in the larger puzzle of the relationship between thought, language, experience, and the world, we can distinguish accounts that look to each of these to supply the content we require: those that involve the structure of our experience of the world; those that explicitly involve our grasp of a ‘third realm’ of abstract objects distinct from the concrete objects of the empirical world and the ideas of the thinker's private Gedankenwelt; those that appeal to something non-physical that is nevertheless an aspect of reality in harmony with which the physical aspect of the world is configured; and finally those that involve only our grasp of language.

This book argues that an adequate account of vagueness must involve degrees of truth. The basic idea of degrees of truth is that while some sentences are true and some are false, others possess intermediate truth values: they are truer than the false sentences, but not as true as the true ones. This idea is immediately appealing in the context of vagueness — yet it has fallen on hard times in the philosophical literature, with existing degree-theoretic treatments of vagueness facing apparently insuperable objections. The book seeks to turn the tide in favour of a degree-theoretic treatment of vagueness, by motivating and defending the basic idea that truth can come in degrees, by arguing that no theory of vagueness that does not countenance degrees of truth can be correct, and by developing a new degree-theoretic treatment of vagueness — fuzzy plurivaluationism — that solves the problems plaguing earlier degree theories.
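The idea of intermediate truth values can be made concrete with a toy model of a vague predicate. The thresholds and the linear interpolation below are illustrative assumptions of this sketch only, not the book's own treatment (fuzzy plurivaluationism is considerably more sophisticated):

```python
# A toy degree-valued interpretation of the vague predicate 'tall'.
# The cutoffs (170 cm plainly not tall, 190 cm plainly tall) and the
# linear ramp between them are hypothetical choices for illustration.

def degree_tall(height_cm, low=170.0, high=190.0):
    if height_cm <= low:
        return 0.0          # plainly false
    if height_cm >= high:
        return 1.0          # plainly true
    return (height_cm - low) / (high - low)  # borderline: intermediate degree

print(degree_tall(165))  # 0.0 — as false as can be
print(degree_tall(180))  # 0.5 — truer than the false sentences, not as true as the true ones
print(degree_tall(195))  # 1.0 — fully true
```

The borderline case at 180 cm receives the intermediate value 0.5, capturing the intuition that it is neither simply true nor simply false that such a person is tall.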

This essay is concerned with two central areas of metaphysics: modality—the theory of necessity, possibility and other related notions; and ontology—the general study of what kinds of entities there are. Its overarching purpose is to develop and defend two quite general theses—that questions about what kinds of things there are cannot be properly understood or adequately answered without recourse to considerations about possibility and necessity, and that, conversely, questions about the nature and basis of necessity and possibility cannot be satisfactorily tackled without drawing on what might be called the methodology of ontology—specifically, on ideas about what is required for the existence of entities of various kinds. Taken together, these two theses claim that ontology and modality are mutually dependent upon one another, neither more fundamental than the other. Claims about what kinds of things there are require distinctions among different types of thing, such as objects, properties, relations, etc. The essay defends a broadly Fregean approach, according to which such ontological distinctions are to be drawn on the basis of prior distinctions between different logical types of expression. The claim that facts about what kinds of things exist depend upon facts about what is possible makes little sense unless one accepts that at least some modal facts are fundamental, and not reducible to facts of some other, non-modal, sort. It is argued that facts about what is absolutely necessary or possible have this character, and that they have their source or basis, not in meanings or concepts nor in facts about alternative ‘worlds’, but in the natures or essences of things.

The term “fuzzy logic” (FL) is a generic one, which stands for a broad variety of logical systems. Their common ground is the rejection of the most fundamental principle of classical logic—the principle of bivalence—according to which each declarative sentence has exactly one of two possible truth values—true or false. Each logical system subsumed under FL allows for additional, intermediary truth values, which are interpreted as degrees of truth. These systems are distinguished from one another by the set of truth degrees employed, its algebraic structure, truth functions chosen for logical connectives, and other properties. The book examines from a historical perspective two areas of research on fuzzy logic known as fuzzy logic in the narrow sense (FLN) and fuzzy logic in the broad sense (FLB), which have distinct research agendas. The agenda of FLN is the development of propositional, predicate, and other fuzzy logic calculi. The agenda of FLB is to emulate commonsense human reasoning in natural language and other unique capabilities of human beings. In addition to FL, the book also examines mathematics based on FL. One chapter in the book is devoted to overviewing successful applications of FL and the associated mathematics in various areas of human affairs. The principal aim of the book is to assess the significance of FL and especially its significance for mathematics. For this purpose, the notions of paradigms and paradigm shifts in science, mathematics, and other areas are introduced and employed as useful metaphors.
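How such systems are "distinguished from one another by the truth functions chosen for logical connectives" can be sketched with three standard textbook choices of conjunction (t-norms); these are common examples, not a summary of the book's own calculi:

```python
# Degrees of truth on [0, 1] with three standard conjunction operators
# (t-norms) and the usual negation. Illustrative sketch only.

def godel_and(a, b):        # Gödel (minimum) t-norm
    return min(a, b)

def lukasiewicz_and(a, b):  # Łukasiewicz t-norm
    return max(0.0, a + b - 1.0)

def product_and(a, b):      # product t-norm
    return a * b

def neg(a):                 # standard negation, shared by all three systems
    return 1.0 - a

# With a = 0.7, 'a and not-a' receives different degrees in each system,
# showing how the choice of truth function shapes the resulting logic:
a = 0.7
for conj in (godel_and, lukasiewicz_and, product_and):
    print(conj.__name__, conj(a, neg(a)))
```

In classical logic 'a and not-a' is always false; here its degree depends on the t-norm chosen, vanishing only under the Łukasiewicz conjunction.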

In recent years there have been a number of books—both anthologies and monographs—that have focused on the liar paradox and, more generally, on the semantic paradoxes, either offering proposed treatments of those paradoxes or critically evaluating ones that occupy logical space. At the same time, there are a number of people who do great work in philosophy, who have various semantic, logical, metaphysical, and/or epistemological commitments that suggest that they should say something about the liar paradox, yet who have said very little, if anything, about that paradox or about the extant projects involving it. The purpose of this volume is to afford those philosophers the opportunity to address what might be described as reflections on the Liar.

This book brings together the outcome of ten years of research. It is based on a simple project, which was begun towards the end of the 1990s: information is a crucial concept, which deserves a thorough philosophical investigation. So the book lays down the conceptual foundations of a new area of research: the philosophy of information. It does so systematically, by pursuing three goals. The first is metatheoretical. The book describes what the philosophy of information is, its problems, and its method of levels of abstraction. These are the topics of the first part, which comprises chapters one, two and three. The second goal is introductory. In chapters four and five, the book explores the complex and diverse nature of several informational concepts and phenomena. The third goal is constructive. In the remaining ten chapters, the book answers some classic philosophical questions in information-theoretical terms. As a result, the book provides the first, unified and coherent research programme for the philosophy of information, understood as a new, independent area of research, concerned with (1) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilization, and sciences; and (2) the elaboration and application of information-theoretic and computational methodologies to philosophical problems.

Bayes' theorem is a tool for assessing how probable evidence makes some hypothesis. The papers in this book consider the worth and applicability of the theorem. The book sets out the philosophical issues: Elliott Sober argues that there are other criteria for assessing hypotheses; Colin Howson, Philip Dawid, and John Earman consider how the theorem can be used in statistical science, in weighing evidence in criminal trials, and in assessing evidence for the occurrence of miracles; and David Miller argues for the worth of the probability calculus as a tool for measuring propensities in nature rather than the strength of evidence. The book ends with the original paper containing the theorem, presented to the Royal Society in 1763.
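As a reminder of how the theorem assesses "how probable evidence makes some hypothesis", here is a minimal calculation with hypothetical numbers (a diagnostic test with an assumed 99% sensitivity and 95% specificity, applied to a hypothesis with a 1% prior probability); the figures are illustrative, not drawn from the book:

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).

def posterior(prior, likelihood, likelihood_given_not_h):
    # Total probability of the evidence across both hypotheses.
    evidence = likelihood * prior + likelihood_given_not_h * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical inputs: P(H) = 0.01, P(E|H) = 0.99, P(E|not-H) = 0.05.
p = posterior(prior=0.01, likelihood=0.99, likelihood_given_not_h=0.05)
print(round(p, 3))  # 0.167 — the evidence raises the hypothesis from 0.01 to about 1/6
```

Even strong evidence leaves the hypothesis more likely false than true when the prior is low, which is precisely the kind of consideration at stake in weighing courtroom evidence or testimony for miracles.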

The philosophy of mathematics articulated and defended in this book goes by the name of “structuralism”, and its slogan is that mathematics is the science of structure. The subject matter of arithmetic, for example, is the natural number structure, the pattern common to any countably infinite system of objects with a distinguished initial object and a successor relation that satisfies the induction principle. The essence of each natural number is its relation to the other natural numbers. One way to understand structuralism is to reify structures as ante rem universals. This would be a platonism concerning mathematical objects, which are the places within such structures. Alternatively, one can take an eliminative, in re approach, and understand talk of structures as shorthand for talk of systems of objects or, invoking modality, talk of possible systems of objects. Shapiro argues that although the realist, ante rem approach is the most perspicuous, in a sense, the various accounts are equivalent. Along the way, the ontological and epistemological aspects of the structuralist philosophies are assessed. One key aspect is to show how each philosophy deals with reference to mathematical objects. The view is tentatively extended to objects generally: to science and ordinary discourse.

Many mathematicians understand their work as an effort to describe the denizens and features of an abstract mathematical world or worlds. Most philosophers of mathematics consider views of this sort highly problematic, largely due to two stark difficulties laid out by Benacerraf: first, if mathematical things are abstract, and thus not to be found in space and time, how can we come to know anything about them? Second, how can mathematics be the study of certain particular things, when all that seems to matter mathematically are various structural features and relations? The goal of this book is to develop a philosophically defensible version of the mathematician's pre‐theoretic realism (sometimes called ‘Platonism’) about mathematical things. Beginning from an analysis of the strengths and weaknesses of Quine's and Gödel's versions of mathematical realism, I propose an alternative called ‘set theoretic realism’ and argue that it avoids both of Benacerraf's problems. In their place, I raise a new problem: given that some open questions of mathematics (like Cantor's Continuum Hypothesis) cannot be settled on the basis of the standard axioms, how are we rationally to evaluate new candidates for axiomatic status (such as Gödel's Axiom of Constructibility or various large cardinal axioms)? Set theoretic realism and its realistic cousins are not the only positions that face this important new challenge—various popular versions of nominalism and structuralism do as well—which suggests that it taps into a fundamental issue.

Since the 1960s, there has been a vigorous and ongoing debate about structuralism in English-speaking philosophy of mathematics. But structuralist ideas and methods go back further in time; that is, there is a rich prehistory to this debate, also in the German- and French-speaking literature. In the present collection of essays, this prehistory is explored in a twofold way: by reconsidering various mathematicians in the 19th and early 20th centuries (Grassmann, Dedekind, Pasch, Klein, Hilbert, Noether, Bourbaki, and Mac Lane) who contributed to structuralism in a methodological sense; and by reexamining a range of philosophical reflections on such contributions during the same period (also by Peirce, Poincaré, Russell, Cassirer, Bernays, Carnap, and Quine), which led to suggestions about logical, epistemological, and metaphysical aspects that remain relevant today. Overall, the collection makes evident that structuralism has deep roots in the history of modern mathematics, that mathematical and philosophical views about it have often been closely intertwined, and that the range of philosophical options available in this context is significantly richer than a mere focus on current debates may make one believe.

This book is both a history of philosophy of logic told from the Kantian viewpoint and a reconstruction of Kant’s theory of logic from a historical perspective. Kant’s theory represents a turning point in a history of philosophical debates over the following questions: (1) Is logic a science, instrument, standard of assessment, or mixture of these? (2) If logic is a science, what is the subject matter that differentiates it from other sciences, particularly metaphysics? (3) If logic is a necessary instrument to all philosophical inquiries, how is it so entitled? (4) If logic is both a science and an instrument, how are these two roles related? Kant’s answer to these questions centers on three distinctions: general versus particular logic, pure versus applied logic, pure general logic versus transcendental logic. The true meaning and significance of each distinction becomes clear, this book argues, only if we consider two factors. First, Kant was mindful of various historical views on how logic relates to other branches of philosophy (viz. metaphysics and physics) and to the workings of common human understanding. Second, he first coined “transcendental logic” while struggling to secure metaphysics as a proper “science,” and this conceptual innovation would in turn have profound implications for his mature theory of logic. Against this backdrop, the book reassesses the place of Kant’s theory in the history of philosophy of logic and highlights certain issues that are still debated today, such as normativity of logic and the challenges posed by logical pluralism.

This book tackles the logic of plural terms (‘Whitehead and Russell’, ‘the men who wrote Principia Mathematica’, ‘Henry VIII's wives’, ‘the real numbers’, ‘√−1’, ‘they’); plural predicates (‘surrounded the fort’, ‘are prime’, ‘are consistent’, ‘imply’); and plural quantification (‘some things’, ‘any things’). Current logic is singularist: it only allows terms to stand for at most one thing. By contrast, the foundational thesis of this book is that a particular term may legitimately stand for several things at once; in other words, there is such a thing as genuinely plural denotation. Plural logic is logic based on plural denotation. The book begins by making the case for taking plural phenomena seriously, and argues, by eliminating rival singularist strategies, that the only viable response is to adopt a plural logic. The subsequent development of the conceptual ground includes the distinction between distributive and collective predicates, the theory of plural descriptions, multivalued functions, and lists. A formal system of plural logic is then presented in three stages, before being applied to Cantorian set theory as an illustration. A system of higher-level plural logic is then outlined. It bears a striking similarity to set theory.

Core Logic has unusual philosophical, proof-theoretic, metalogical, computational, and revision-theoretic virtues. It is an elegant kernel lying deep within Classical Logic, a canon for constructive and relevant deduction furnishing faithful formalizations of informal constructive mathematical proofs. Its classicized extension provides likewise for nonconstructive mathematical reasoning. Confining one’s search to core proofs affords automated reasoners great gains in efficiency. All logico-semantical paradoxes involve only core reasoning. Core proofs are in normal form, and relevant in a highly exigent ‘vocabulary-sharing’ sense never attained before. Essential advances on the traditional Gentzenian treatment are that core natural deductions are isomorphic to their corresponding sequent proofs, and make do without the structural rules of Cut and Thinning. This ensures relevance of premises to conclusions of proofs, without loss of logical completeness. Every core proof converts any verifications of its premises into a verification of its conclusion. Core Logic makes one reassess the dogma of ‘unrestricted’ transitivity of deduction, because any core ‘restriction’ of transitivity ensures a more than compensatory payoff of epistemic gain: A core proof of A from X and one of B from {A}∪Y effectively determine a proof of B or of absurdity from some subset of X∪Y. The primitive introduction and elimination rules governing the logical operators in Core Logic are subtly different from Gentzen’s. They are obtained by smoothly extrapolating protean rules for determining truth values of sentences under interpretations. Core rules are inviolable: One needs all of them in order to revise beliefs rationally in light of new evidence.

This book aims to provide a solution to the semantic paradoxes. It argues for a unified solution to the paradoxes generated by the concepts of reference or denotation, predicate extension, and truth. The solution makes two main claims. The first is that our semantic expressions ‘denotes’, ‘extension’, and ‘true’ are context-sensitive. The second, inspired by a brief, tantalizing remark of Gödel’s, is that these expressions are significant everywhere except for certain singularities, in analogy with division by zero. A formal theory of singularities is presented and applied to a wide variety of versions of the definability paradoxes, Russell’s paradox, and the Liar paradox. The book argues that the singularity theory satisfies the following desiderata: it recognizes that the proper setting of the semantic paradoxes is natural language, not regimented formal languages; it minimizes any revision to our semantic concepts; it respects as far as possible Tarski’s intuition that natural languages are universal; it responds adequately to the threat of revenge paradoxes; and it preserves classical logic and semantics. The book examines the consequences of the singularity theory for deflationary views of our semantic concepts, and concludes that if we accept the singularity theory, we must reject deflationism.

What is a scientific theory? Is it a set of propositions? Or a family of models? Or is it some kind of abstract artefact? These options are examined in the context of a comparison between theories and artworks. On the one hand, theories are said to be like certain kinds of paintings, in that they play a representational role; on the other, they are compared to musical works, insofar as they can be multiply presented. I shall argue that such comparisons should be treated with care and that all of the above options face problems. Instead, I suggest, we should adopt a form of eliminativism towards theories, in the sense that a theory should not be regarded as any thing. Nevertheless, we can still talk about them and attribute certain qualities to them, where that talk is understood to be made true by certain practices. This shift to practices as truthmakers for theory talk then has certain implications for how we regard theories in the realism debate and in the context of the nature and role of representation in science.

Our conception of logical space is the set of distinctions we use to navigate the world. This book defends the idea that one’s conception of logical space is shaped by one’s acceptance or rejection of ‘just is’-statements: statements like ‘to be composed of water just is to be composed of H2O’, or ‘for the number of the dinosaurs to be zero just is for there to be no dinosaurs’. The resulting picture is used to articulate a conception of metaphysical possibility that does not depend on a reduction of the modal to the non-modal, and to develop a trivialist philosophy of mathematics, according to which the truths of pure mathematics have trivial truth-conditions.

The philosophy of modality investigates necessity and possibility, and related notions — are they objective features of mind-independent reality? If so, are they irreducible, or can modal facts be explained in other terms? This book presents new work on modality by established leaders in the field and by up-and-coming philosophers. Between them, the chapters address fundamental questions concerning realism and anti-realism about modality, the nature and basis of facts about what is possible and what is necessary, the nature of modal knowledge, modal logic and its relations to necessary existence and to counterfactual reasoning. The general introduction locates the individual contributions in the wider context of the contemporary discussion of the metaphysics and epistemology of modality.