The Philosophy of Computer Science

First published Tue Aug 20, 2013; substantive revision Thu Jan 19, 2017

The philosophy of computer science is concerned with those
ontological, methodological, and ethical issues that arise from within
the academic discipline of computer science as well as from the
practice of software development. Thus, the philosophy of computer
science shares the same philosophical goals as the philosophy of
mathematics and the many subfields of the philosophy of science, such
as the philosophy of biology or the philosophy of the social sciences.
The philosophy of computer science also considers the analysis of
computational artifacts, that is, human-made
computing systems, and it focuses on methods involved in the design,
specification, programming, verification, implementation, and testing
of those systems. The abstract nature of computer programs and the
resulting complexity of implemented artifacts, coupled with the
technological ambitions of computer science, ensures that many of the
conceptual questions of the philosophy of computer science have
analogues in the
philosophy of mathematics,
the philosophy of empirical sciences, and the
philosophy of technology.
Other issues are distinctive of the philosophy of computer science alone. We
shall concentrate on three tightly related groups of topics that form
the spine of the subject. First we discuss topics related to the
ontological analysis of computational artifacts, in Sections 1–5
below. Second, we discuss topics involved in the methodology and
epistemology of software development, in Sections 6–9 below.
Third, we discuss ethical issues arising from computer science
practice, in Section 10 below. Applications of computer science are
briefly considered in Section 11.



1. Computational Artifacts

Computational artifacts underpin our Facebook pages, control air
traffic around the world, and ensure that we will not be too surprised
when it snows. They have been applied in algebra, car manufacturing,
laser surgery, banking, gastronomy, astronomy, and astrology. Indeed,
it is hard to find an area of life that has not been fundamentally
changed and enhanced by their application. But what is it that is
applied? What are the things that give substance to such applications?
The trite answer is the entities that computer scientists construct,
the artifacts of computer science, computational artifacts,
if you will. Much of the philosophy of computer science is concerned
with their nature, specification, design, and construction.

1.1 Duality

Folklore has it that computational artifacts fall into two camps:
hardware and software. Presumably, software includes compilers and
natural language understanding systems, whereas laptops and tablets
are hardware. But how is this distinction drawn: How do we delineate
what we take to be software and what we take to be hardware?

A standard way identifies the distinction with the abstract-physical
one (see the entry on
abstract objects),
where hardware is taken to be physical and software to be abstract.
Unfortunately, this does not seem quite right. As Moor (1978) points
out, programs, which are normally seen as software, and therefore
under this characterization abstract, may also be physical devices. In
particular, programs were once identified with sequences of physical
lever pulls and pushes. There are different reactions to this
observation. Some have suggested there is no distinction. In
particular, Suber (1988) argues that hardware is a special case of
software, and Moor (1978) argues that the distinction is ontologically
insignificant. On the other hand, Duncan (2011) insists that there is
an important difference but it is one that can only be made within an
ontological framework that supports finer distinctions than the simple
abstract-physical one (e.g., B. Smith 2012). Irmak (2012) also thinks
that software and hardware are different: software is an abstract
artifact, but apparently not a standard one, because it has temporal
properties.

Whether or not the software-hardware distinction can be made
substantial, most writers agree that, although a program can be taken
as an abstract thing, it may also be cashed out as a sequence of
physical operations. Consequently, they (e.g., Colburn 2000; Moor
1978) insist that programs have a dual nature: they have both an
abstract guise and a physical one. Indeed, once this is conceded, it
would seem to apply to the majority of computational artifacts. On the
one hand, they seem to have an abstract guise that enables us to
reflect and reason about them independently of any physical
manifestation. This certainly applies to abstract data types (Cardelli
& Wegner 1985). For example, the list abstract data type consists
of the carrier type together with operations that support the
formation and manipulation of lists. Even if not made explicit, these
are determined by several axioms that fix their properties: e.g., if
one adds an element to the head of a list to form a new list, and then
removes the head, the old list is returned. Similarly, an abstract
stack is determined by axioms that govern push and
pop operations. Using such properties, one may reason about
lists and stacks in a mathematical way, independently of any concrete
implementation. And one needs to: one cannot design or program
without such reasoning; one cannot construct correct programs without
reasoning about what the programs are intended to do. If this is
right, computational artifacts have an abstract guise that is
separable from their physical realization or implementation. Indeed,
this requirement to entertain abstract devices to support reasoning
about physical ones is not unique to computer science.
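
To make this concrete, here is a minimal sketch in Haskell (the names are
ours, purely for illustration, and are not drawn from Cardelli & Wegner
1985) of an abstract stack together with the axiom that popping after a
push returns the pushed element and the original stack:

    -- A minimal abstract stack, given only to illustrate axiomatic reasoning
    -- about an abstract data type, independently of any physical realization.
    newtype Stack a = Stack [a] deriving (Eq, Show)

    empty :: Stack a
    empty = Stack []

    push :: a -> Stack a -> Stack a
    push x (Stack xs) = Stack (x : xs)

    -- pop returns Nothing on the empty stack, otherwise the top element
    -- together with the remaining stack.
    pop :: Stack a -> Maybe (a, Stack a)
    pop (Stack [])       = Nothing
    pop (Stack (x : xs)) = Just (x, Stack xs)

    -- The axiom discussed above: pushing an element and then popping
    -- returns that element and leaves the original stack.
    prop_pushPop :: Eq a => a -> Stack a -> Bool
    prop_pushPop x s = pop (push x s) == Just (x, s)

One may reason with such properties mathematically, exactly as described
above, without any mention of how the stack is stored in a machine.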

On the other hand, they must have a physical implementation that
enables them to be used as things in the physical world. This is
obviously true of machines, but it is equally so for programs:
Programmers write programs to control physical devices. A program or
abstract machine that has no physical realization is of little use as
a practical device for performing humanly intractable computations.
For instance, a program that monitors heart rate must be underpinned
by a physical device that actually performs the task. The computer
scientist Dijkstra puts it as follows.

A programmer designs algorithms, intended for mechanical execution,
intended to control existing or conceivable computer equipment.
(Dijkstra 1974: 1)

On the duality view, computer science is not an abstract mathematical
discipline that is independent of the physical world. To be used,
these things must have physical substance. And once this observation
is made, there is a clear link with a central notion in the philosophy
of technology (Kroes 2010; Franssen et al. 2010), to which we now
turn.

1.2 Technical Artifacts

Technical artifacts include all the common objects of everyday life
such as toilets, paper clips, tablets, and dog collars. They are
intentionally produced things. This is an essential part of being a
technical artifact. For example, a physical object that
accidentally carries out arithmetic is not by itself a calculator.
This teleological aspect distinguishes them from other physical
objects, and has led philosophers to argue that technical artifacts
have a dual nature fixed by two sets of properties (e.g., Kroes 2010;
Meijers 2001; Thomasson 2007; Vermaas & Houkes 2003): functional
properties and structural properties.

Functional properties say what the artifact does. For example, a
kettle is for boiling water, and a car is for transportation. On the
other hand, structural properties pertain to its physical makeup. They
include its weight, color, size, shape, chemical constitution, etc.
For example, we might say that our car is red and has white seats.

The notion of a technical artifact will help to conceptualize and
organize some of the central questions and issues in the philosophy of
computer science. We begin with a concept that underpins much of the
activity of the subject. Indeed, it is the initial expression of
functional properties.

2. Specification and Function

In computer science, the function of an artifact is initially
laid out in a (functional) specification (Sommerville 2016 [1982]; Vliet
2008). Indeed, on the way to a final device, a whole series of
specification-artifact pairs of varying degrees of abstractness come
into existence. The activities of specification, implementation and
correctness raise a collection of overlapping conceptual questions and
problems (B.C. Smith 1985; Turner 2011; Franssen et al. 2010).

2.1 Definition

Specifications are expressed in a variety of ways, including ordinary
vernacular. But the trend in computer science has been towards more
formal and precise forms of expression. Indeed, specialized languages
have been developed that range from those designed primarily for
program specification (e.g., VDM, Jones 1990 [1986]; Z, Woodcock & Davies
1996; B, Abrial 1996) and wide spectrum languages such as UML (Fowler
2003), to specialized ones that are aimed at architectural description
(e.g., Rapide, Luckham 1998; Darwin, Distributed Software Engineering
1997; Wright, Allen 1997). They differ with respect to their
underlying ontologies and their means of articulating
requirements.

Z is based upon predicate logic and set theory. It is largely employed
for the specification of suites of individual program modules or
simple devices. UML (Fowler 2003) has a very rich ontology and a wide
variety of expression mechanisms. For example, its class language
allows the specification of software patterns (Gamma et al. 1994). In
general, an architectural description language is used to precisely
specify the architecture of a software system (Bass et al. 2003 [1997]).
Typically, these languages employ an ontology that includes notions
such as components, connectors, interfaces
and configurations. In particular, architectural descriptions
written in Rapide, Darwin, or Wright are precise expressions in
formalisms that are defined using an underlying mathematical
semantics.

But what is the logical function of the expressions of these
languages? On the face of it, they are just expressions in a formal
language. However, when the underlying ontology is made explicit, each
of these languages reveals itself to be a formal ontology that may be
naturally cast as a type theory (Turner 2009a). Under this
interpretation, these expressions are stipulative definitions (Gupta
2012). As such, each defines a new abstract object within the formal
ontology of its system.

2.2 Definitions as Specifications

However, taken by itself a definition need not be a specification of
anything; it may just form part of a mathematical exploration. So when
does a definition act as a specification? Presumably, just in case the
definition is taken to point beyond itself to the construction of an
artifact. It is the intentional act of giving governance of the
definition over the properties of a device or system that turns a mere
definition into a specification. The definition then determines
whether or not the device or system has been built correctly. It
provides the criteria of correctness and malfunction. From this
perspective, the role of specification is a normative one. If one asks
whether the device works, it is the definition functioning as a
specification that tells us whether it does. Indeed, without it, the
question would be moot. At any level of abstraction (see
§8.1),
the logical role of specification is always the same: It provides a
criterion for correctness and malfunction. This is the perspective
argued for by Turner (2011). Indeed, this normative role is taken to
be part of any general theory of function (Kroes 2012).

It should go without saying that this is an idealization. A
specification is not fixed throughout the design and construction
process. It may have to be changed because a client changes her mind
about the requirements. Furthermore, it may turn out for a variety of
reasons that the artifact is impossible to build. The underlying
physical laws may rule it out, or there may be cost limitations that
prevent construction. Indeed, the underlying definition may be
logically absurd. In these cases, the current specification will have
to be given up. But the central normative role of specification
remains intact.

Unlike functional descriptions, specifications are taken to be
prescribed in advance of the artifact's construction; they guide the
implementer. This might be taken to suggest a more substantive role
for specification, i.e., to provide a method for the construction of
the artifact. However, the method by which we arrive at the artifact
is a separate issue from its specification. The latter dictates no
such method. There is no logical difference between a functional
specification and a functional description; logically both provide
a criterion of correctness.

2.3 Abstract Artifacts

Software is produced in a series of layers of decreasing levels of
abstraction, where in the early layers both specification and artifact
are abstract (Brooks 1995; Sommerville 2016 [1982]; Irmak 2012). For example,
a specification written in logical notation might be taken to be a
specification of a linguistic program. In turn, the linguistic
program, with its associated semantics, might be taken as the
specification of a physical device. In other words, we admit abstract
entities as artifacts. This is a characteristic feature of software
development (Vliet 2008), and it distinguishes software development
from technology in general. The introduction of abstract intermediate
artifacts is
essential (Brooks 1995; Sommerville 2016 [1982]). Without them logically
complex computational artifacts would be impossible to construct.

So what happens to the duality thesis? It still holds good, but now
the structural description does not necessarily provide physical
properties but another abstract device. For example, an abstract stack
can act as the specification of a more concrete one that is now given
a structural description in a programming language as an array. But
the array is itself not a physical thing; it is an abstract one. Its
structural description does not use physical properties but abstract
ones, i.e., axioms. Of course, eventually, the array will get
implemented in a physical store. However, from the perspective of the
implementer who is attempting to implement stacks in a programming
language with arrays as a data type, the artifact is the abstract
array of the programming language. Consequently, the duality thesis
must be generalized to allow for abstract artifacts.
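
As a sketch of this generalized duality (ours; the names are illustrative
and the example is not drawn from the cited literature), the abstract
stack can serve as the specification of a more concrete artifact whose
structural description is given in terms of an array and a top-of-stack
index:

    import Data.Array (Array, listArray, (!), (//))

    -- A bounded stack realized as an array together with a top index.
    -- This is still an abstract artifact: its structural description is
    -- given by the (axiomatically characterized) array type, not by
    -- physical properties.
    data ArrStack a = ArrStack
      { cells :: Array Int a   -- fixed-size storage
      , top   :: Int           -- index of the next free cell
      }

    emptyArr :: Int -> a -> ArrStack a
    emptyArr capacity fill =
      ArrStack (listArray (0, capacity - 1) (replicate capacity fill)) 0

    -- Overflow handling is omitted in this sketch.
    pushArr :: a -> ArrStack a -> ArrStack a
    pushArr x (ArrStack cs t) = ArrStack (cs // [(t, x)]) (t + 1)

    popArr :: ArrStack a -> Maybe (a, ArrStack a)
    popArr (ArrStack cs t)
      | t == 0    = Nothing
      | otherwise = Just (cs ! (t - 1), ArrStack cs (t - 1))

    -- Correctness runs from the abstract stack to the array, not the other
    -- way round: e.g., popping immediately after a push must return the
    -- pushed element and restore the top-of-stack index.
    prop_pushPopArr :: Eq a => a -> ArrStack a -> Bool
    prop_pushPopArr x s = case popArr (pushArr x s) of
      Just (y, s') -> y == x && top s' == top s
      Nothing      -> False

The abstract stack axioms provide the correctness criteria for this
artifact; eventually the array is itself implemented in a physical store,
pushing the same relationship down one further level.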

2.4 Theories of Function

Exactly how the physical and intentional conceptualizations of our
world are related remains a vexing problem to which the long history
of the mind-body problem in philosophy testifies. This situation also
affects our understanding of technical artifacts: a conceptual
framework that combines the physical and intentional (functional)
aspects of technical artifacts is still lacking. (Kroes & Meijers
2006: 2)

The literature on technical artifacts (e.g., Kroes 2010; Meijers 2001;
Thomasson 2007; Vermaas & Houkes 2003) contains two main theories
about how the two conceptualizations are related: causal-role theories
and intentional ones.

Causal-role theories insist that actual physical capacities determine
function. Cummins’s theory of functional analysis (Cummins 1975)
is an influential example of such a theory. The underlying intuition
is that, without the physical thing and its actual properties, there
can be no artifact. The main criticism of these theories concerns the
location of any correctness criteria. If all we have is the physical
device, we have no independent measure of correctness (Kroes 2010):
The function is fixed by what the device actually does.

Causal role theories… have the tendency to let functions
coincide with actual physical capacities: structure and function
become almost identical. The main drawback of this approach is that it
cannot account for the malfunctioning of technical artifacts: an
artifact that lacks the actual capacity for performing its intended
function by definition does not have that function. The intentions
associated with the artifact have become irrelevant for attributing a
function. (Kroes 2010: 3)

This criticism has the same flavor as that made by Kripke (1982) in
his discussion of rule following.

Intentional theories insist that it is agents who ascribe functions to
artifacts. Objects and their components possess functions only insofar
as they contribute to the realization of a goal. Good examples of this
approach are McLaughlin (2001) and Searle (1995).

But how exactly does the function get fixed by the desires of an
agent? One interpretation has it that the function is determined by
the mental states of the agents, i.e., the designers and users of
technical artifacts. In their crude form such theories have difficulty
accounting for how they impose any constraints upon the actual thing
that is the artifact.

If functions are seen primarily as patterns of mental states, on the
other hand, and exist, so to speak, in the heads of the designers and
users of artifacts only, then it becomes somewhat mysterious how a
function relates to the physical substrate in a particular artifact.
(Kroes 2010: 2)

For example, how can the mental states of an agent fix the function of
a device that is intended to perform addition? This question is posed
in a rather different context by Kripke.

Given … that everything in my mental history is compatible both
with the conclusion that I meant plus and with the conclusion that I
meant quus, it is clear that the skeptical challenge is not really an
epistemological one. It purports to show that nothing in my mental
history or past behavior—not even what an omniscient God would
know—could establish whether I meant plus or quus. But then it
appears to follow that there was no fact about me that constituted my
having meant plus rather than quus. (Kripke 1982: 21)

Of course, one might also insist that the artifact is actually in
accord with the specification, but this does not help if the
expression of the function is only located in the mental states of an
agent. This version of the intentional theory is really a special case
of a causal theory where the agent’s head is the
physical device in which the function is located.

However, there is an alternative interpretation of the intentional
approach. In his commentary on Wittgenstein’s notion of acting
intentionally (Wittgenstein 1953), David Pears suggests that anyone
who acts intentionally must know two things. Firstly, she must know
what activity she is engaged in. Secondly, she must know when she has
succeeded (Pears 2006). According to this perspective, establishing
correctness is an externally observable, rule-based activity. The
relation between the definition and the artifact is manifest in using
the definition as a canon of correctness for the device. I must be
able to justify my claim that the device works: if asked whether it
works, I must be able to show that it does by reference to the
abstract definition. The content of the function is laid out in
the abstract definition, but the intention to take it as a
specification is manifest in using it as one
(§2.2).

3. Implementation

Broadly speaking, an implementation is a realization of a
specification. Examples include the implementation of a UML
specification in Java, the implementation of an abstract algorithm as
a program in C, the implementation of an abstract data type in
Miranda, or the implementation of a whole programming language.
Moreover, implementation is often an indirect process that involves
many stages before physical bedrock is reached; at each stage there is
a specification-artifact pairing and a notion of implementation. But
what is an implementation? Is there just one notion or many?

3.1 What Is Implementation?

The most detailed philosophical study of implementation is given by
Rapaport (1999, 2005). He argues that implementation involves two
domains: a syntactic one (the abstraction) and a semantic one (the
implementation). Indeed, he suggests that a full explication of the
notion requires a third hidden term, a medium of implementation: I
is an implementation of A in medium M. Here I is the
semantic component, A is the abstraction, and M is the medium
of implementation. He allows for the target medium to be abstract or
physical. This is in line with the claim that artifacts may be
abstract or concrete.

Superficially, this seems right. In all the examples cited, there is a
medium of implementation in which the actual thing that is the
implementation is carved out. Perhaps the clearest example is the
implementation of a programming language. Here, the syntactic domain
is the actual language and the semantic one its interpretation on an
abstract machine: the medium of interpretation. He suggests that we
implement an algorithm when we express it in a computer programming
language, and we implement an abstract data type when we express it as
a concrete one. Examples that he does not mention might include the
UML definition of design patterns implemented in Java (Gamma et al.
1994).

He further argues that there is no intrinsic difference between which
of the domains is semantic and which is syntactic. This is determined
by the asymmetry of the implementation mapping. For example, a
physical computer process that implements a program plays the role of
the semantics to the linguistic program, while the same linguistic
program can play the role of semantic domain to an algorithm. This
asymmetry is parallel to that of the specification-artifact
connection. On the face of it, there is little to cause any
dissension. It is a straightforward description of the actual use of
the term implementation. However, there is an additional conceptual
claim that is less clear.

3.2 Implementation as Semantic Interpretation

Apparently, the semantic domain, as its name suggests, is always taken
to be a semantic representation of the syntactic
one; it closes a semantic gap between the abstraction and the
implementation in that the implementation fills in details. This is a
referential view of semantics in that the syntactic domain refers to
another domain that provides its meaning. Indeed, there is a strong
tradition in computer science that takes referential or denotational
semantics as fundamental (Stoy 1977; Milne & Strachey 1976; Gordon
1979). We shall examine this claim later when we consider the
semantics of programming languages in more detail
(§4).
For the moment, we are only concerned with the central role of any
kind of semantics.

One view of semantics insists that it must be normative. Although the
exact form of the normative constraint (Glüer & Wikforss
2015; Miller & Wright
2002) is debated, there is a good deal of agreement on a minimal
requirement: a semantic account must fix what it is to use an
expression correctly.

The fact that the expression means something implies that there is a
whole set of normative truths about my behavior with that expression;
namely, that my use of it is correct in application to certain objects
and not in application to others…. The normativity of meaning
turns out to be, in other words, simply a new name for the familiar
fact that, regardless of whether one thinks of meaning in
truth-theoretic or assertion-theoretic terms, meaningful expressions
possess conditions of correct use. Kripke’s insight was to
realize that this observation may be converted into a condition of
adequacy on theories of the determination of meaning: any proposed
candidate for the property in virtue of which an expression has
meaning, must be such as to ground the “normativity” of
meaning—it ought to be possible to read off from any alleged
meaning constituting property of a word, what is the correct use of
that word. (Boghossian 1989: 513)

On the assumption that this minimal requirement has to be satisfied by
any adequate semantic theory, is implementation always, or even ever,
semantic interpretation? Are these two notions at odds with each
other?

One standard instance of implementation concerns the interpretation of
one language in another. Here the abstraction and the semantic domain
are both languages. Unfortunately, this does not provide a criterion
of correctness unless we have already fixed the semantics of the
target language. While translating between languages is taken to be
implementation, indeed a paradigm case, it is not, on the present
criterion, semantic interpretation. It only satisfies the correctness
criterion when the target language has an independently given notion
of correctness. This may be achieved in an informal or in a
mathematical way. But it must not end in another uninterpreted
language. So this paradigm case of implementation does not appear to
satisfy the normative constraints required for semantic
interpretation. On the other hand, Rapaport (1995) argues that
providing a recursive definition of implementation requires a base
case, that is, the process must end in an uninterpreted language.
However, such a language can be interpreted in itself, mapping each
symbol either to itself or to different symbols of that language.

Next consider the case where the abstraction is a language and the
semantic medium is set theory. This would be the case with
denotational semantics (Stoy 1977). This does provide a notion of
correctness. Our shared and agreed understanding of set theory
provides this. Unfortunately, it would not normally be taken as an
implementation. Certainly, it would not, if an implementation is
something that is eventually physically realizable.

Now consider the case where the syntactic component is an abstract
stack and the semantic one is an array. Here we must ask what it means
to say that the implementation is correct. Does the medium of the
array fix the correct use of stacks? It would seem not: The array does
not provide the criteria for deciding whether we have the correct
axioms for stacks or whether we have used them correctly in a
particular application. Rather, the stack is providing the correctness
criteria for the implementation that is the array. Instead, the axioms
provide the fundamental meanings of the constructs. While the array is
an implementation of the stack, it does not provide it with a notion
of correctness: The cart and the horse have been interchanged.

Finally, suppose the semantic domain is a physical machine and the
syntactic one is an abstract one. The suggestion is that the physical
machine provides a semantic interpretation of the abstract one. But
again, a semantic interpretation must provide us with a notion of
correctness and malfunction, and there are compelling arguments
against this that are closely related to the causal theories of
function
(§2.4).
This issue will be more carefully examined in section
(§4)
where we consider programming language semantics.

Given that a semantic account of a language must supply correctness
criteria, and given that the term semantics is to have some bite, these
are serious obstacles for the view that implementation is semantic
interpretation. There are several phenomena here, all rolled into one. If
these objections are along the right lines, then the relationship
between the source and target is not semantic interpretation. Of
course, one may counter all this by arguing against the correctness
requirement for semantic theory.

3.3 Specification and Implementation

An alternative analysis of implementation is implicit in Turner (2014,
2012). Consider the case where the data type of finite sets is
implemented in the data types of lists. Each of these structures is
governed by a few simple axioms. The implementation represents finite
sets as lists, the union operation on sets as list concatenation, and
equality between sets as extensional equality on lists etc. This is a
mathematical relationship where the axioms for sets act as a
specification of the artifact, which in this case is implemented in
the medium of lists. It would appear that the logical connection
between the two is that of specification and artifact. The mapping
does not have to be direct, i.e., there does not have to be a simple
operation-to-operation correspondence, but the list properties of the
implemented operations must satisfy the given set axioms. In standard
mathematical terms, the list medium must provide a mathematical model
(in the sense of model theory, W. Hodges 2013) of the set axioms. The
case where one language is implemented in another is similar, and
fleshed out by the semantic definitions of the two languages.
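
The following minimal Haskell sketch (ours, not taken from Turner 2012,
2014; all names are illustrative) shows the shape of such an
implementation: sets are represented as lists, union as concatenation,
and set equality as extensional equality on lists.

    -- Finite sets implemented in the medium of lists.
    newtype FinSet a = FinSet [a]

    emptySet :: FinSet a
    emptySet = FinSet []

    insert :: a -> FinSet a -> FinSet a
    insert x (FinSet xs) = FinSet (x : xs)

    -- Union on sets is represented by concatenation on lists.
    union :: FinSet a -> FinSet a -> FinSet a
    union (FinSet xs) (FinSet ys) = FinSet (xs ++ ys)

    member :: Eq a => a -> FinSet a -> Bool
    member x (FinSet xs) = x `elem` xs

    -- Equality between sets is extensional equality on lists: the same
    -- members, regardless of order or repetition. The set axioms (e.g.,
    -- union s s == s and union s t == union t s) must hold up to this
    -- equality, which is what makes the lists a model of the set axioms.
    instance Eq a => Eq (FinSet a) where
      FinSet xs == FinSet ys = all (`elem` ys) xs && all (`elem` xs) ys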

Finally, consider the case where the medium of implementation is a
physical device, e.g., an abstract stack is implemented as a physical
one. Once again the abstract stack must provide the correctness
criteria for the physical device. This is what happens in practice. We
check that the physical operations satisfy the abstract demands given
by the axioms for stacks. There are issues here that have to do with
the adequacy of this notion of correctness. We shall discuss these
when we more carefully consider the computer science notion of
correctness
(§7.4).

If this analysis is along the right lines, implementation is best
described as a relation between specification and artifact.
Implementation is not semantic interpretation; indeed, it requires an
independent semantic account in order to formulate a notion of
implementation correctness. So, what is taken to be semantic
interpretation in computer science?

4. Semantics

How is a semantic account of a programming language to be given? What
are the main conceptual issues that surround the semantic enterprise?
There are many different semantic candidates in the literature (Gordon
1979; Gunter 1992; Fernández 2004; Milne & Strachey 1976).
One of the most important distinctions centers upon the difference
between operational and denotational semantics (Turner 2007; White
2003).

4.1 Two Kinds of Semantic Theory

Operational semantics began life with Landin (1964). In its logical
guise (Fernández 2004) it provides a mechanism of evaluation
where, in its simplest form, the evaluation relation is represented as
follows.

  P ⇓ c

This expresses the idea that the program P
converges to the canonical form given by c. The classical
case of such a reduction process occurs in the lambda calculus where
reduction is given by the reduction rules of the calculus, and
canonical forms are its normal forms, i.e., terms to which none of the
reduction rules apply. The following is a simple example:

  z(λx.y)

This is usually called big step semantics. It is normally
given in terms of rules that provide the evaluation of a complex
program in terms of the evaluation of its parts. For example, a simple
rule for sequencing (∘) would take the form

  P ⇓ c    Q ⇓ d
  ─────────────────
  P ∘ Q ⇓ c ∘ d

These canonical or normal forms are other terms in the programming
language which cannot be further reduced by the given rules. But they
are terms of the language. For this reason, this operational approach
is often said to be unsatisfactory. According to this criticism, at
some point in the interpretation process, the semantics for a formal
language must be mathematical.
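
A toy example may help fix ideas. The following Haskell sketch (ours; the
object language and names are illustrative) defines a big-step evaluation
function for a tiny expression language in which the canonical forms are
numerals, mirroring rules of the kind just displayed:

    -- A tiny "programming language": arithmetic expressions.
    data Term = Num Int | Plus Term Term | Times Term Term

    -- bigStep t returns the canonical form c such that t ⇓ c; the clauses
    -- correspond to big-step rules that evaluate a complex term in terms
    -- of the evaluation of its parts.
    bigStep :: Term -> Term
    bigStep (Num n)     = Num n
    bigStep (Plus t u)  = case (bigStep t, bigStep u) of
                            (Num m, Num n) -> Num (m + n)
                            _              -> error "stuck"
    bigStep (Times t u) = case (bigStep t, bigStep u) of
                            (Num m, Num n) -> Num (m * n)
                            _              -> error "stuck"

Note that the canonical forms here are themselves terms of the object
language, which is exactly the feature targeted by the criticism quoted
below.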

We can apparently get quite a long way expounding the properties of a
language with purely syntactic rules and transformations… One
such language is the Lambda Calculus and, as we shall see, it can be
presented solely as a formal system with syntactic conversion rules
… But we must remember that when working like this all we are
doing is manipulating symbols-we have no idea at all of what we are
talking about. To solve any real problem, we must give some semantic
interpretation. We must say, for example, “these symbols
represent the integers”. (Stoy 1977: 9)

In contrast, operational semantics is taken to be syntactic.
In particular, even if one of them is in canonical form, the relation
P ⇓ c relates syntactic objects. This does not get at what
we are talking about. Unless the constants of the language have
themselves an independently given mathematical meaning, at no point in
this process do we reach semantic bedrock: we are just reducing one
syntactic object to another, and this does not yield a normative
semantics. This leads to the demand for a more mathematical
approach.

Apparently, programming languages refer to (or are notations for)
abstract mathematical objects, not syntactic ones (Strachey 2000;
McGettrick 1980; Stoy 1977). In particular, denotational semantics
provides, for each syntactic object P, a mathematical
one. Moreover, it generally does this in a compositional way: Complex
programs have their denotations fixed in terms of the denotations of
their syntactic parts. These mathematical objects might be set
theoretic, category theoretic, or type theoretic. But whichever method
is chosen, programs are taken to refer to abstract mathematical
things. However, this position relies on a clear distinction between
syntactic and mathematical objects.

4.2 Programming Languages as Axiomatic Theories

Mathematical theories such as set theory and category theory are
axiomatic theories. And it is this that makes them mathematical. This
is implicit in the modern axiomatic treatment of mathematics
encouraged by Bourbaki (1968) and championed by Hilbert (1931).

It is worth pointing out that the axiomatic account, as long as it is
precise and supports mathematical reasoning, does not need to be
formal. If one accepts this as a necessary condition for mathematical
status, does it rule out operational accounts? Prima facie it
would seem so. Apparently, programs are reduced to canonical constants
with no axiomatic definitions. But Turner (2009b, 2010) argues this is
to look in the wrong place for the axiomatization: the latter resides
not in the interpreting constants but in the rules of evaluation,
i.e., in the theory of reduction given by the axiomatic relation
⇓.

Given that both denotational and operational semantics define matters
axiomatically, it should not matter which we take to define the
language as a formal mathematical theory. Unfortunately, they
don’t always agree: The notion of equality provided by the
operational account, although preserved by the denotational one, is
often more fine grained. This has led to very special forms of
denotational semantics based upon games (Abramsky & McCusker 1995;
Abramsky et al. 1994). However, it is clear that practitioners take
the operational account as fundamental, and this is witnessed by the
fact that they seek to devise denotational accounts that are in
agreement with the operational ones.

Not only is there no metaphysical difference between the set theoretic
account and the operational one, but the latter is taken to be the
definitive one. This view of programming languages is the perspective
of theoretical computer science: Programming languages, via their
operational definitions, are mathematical theories of computation.

However, programming languages are very combinatorial in nature. They
are working tools, not elegant mathematical theories; it is very hard
to explore them mathematically. Does this prevent them from being
mathematical theories? There has been very little discussion of this
issue in the literature; Turner (2010) and Strachey (2000) are
exceptions. On the face of it, Strachey sees them as mathematical
objects pure and simple. Turner is a little more cautious and argues
that actual programming languages, while often too complex to be
explored as mathematical theories, contain a core theory of
computation that may be conservatively extended to the full
language.

4.3 The Implementation of Programming Languages

However, Turner (2014) further argues that programming languages, even
at their core, are not just mathematical objects. He argues that they
are best conceptualized as technical artifacts. While their axiomatic
definition provides their function, they also require an
implementation. In the language of technical artifacts, a structural
description of the language must say how this is to be achieved: It
must spell out how the constructs of the language are to be
implemented. To illustrate the simplest case, consider the assignment
instruction.

  x := E

A physical implementation might take the
following form.

  • Physically compute the value of E.
  • Place the (physical token for) the value of E in the physical
    location named x, replacing any existing token of a value.

This is a description of how assignment is to be physically realized.
It is a physical description of the process of evaluation. Of course,
a complete description will spell out more, but presumably not what
the actual machine is made of; one assumes that this would be part of
the structural description of the underlying computer, the medium of
implementation. The task of the structural description is only to
describe the process of implementation on a family of similarly
structured physical machines. Building on this, we stipulate how the
complex constructs of the language are to be implemented. For example,
to execute commands in sequence we could add a physical stack that
arranges them for processing in sequence. Of course, matters are
seldom this straightforward. Constructs such as iteration and
recursion require more sophisticated treatment. Indeed, interpretation
and compilation may involve many layers and processes. However, in the
end there must be some interpretation into the medium of a physical
machine. Turner (2014) concludes that a programming language is a
complex package of syntax and semantics (function) together with the
implementation as structure.
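
As a minimal sketch of the idea (ours, not Turner’s own formulation; the
names Store, Cmd, and exec are illustrative), assignment and sequencing
can be described over an explicit store that stands in for the memory of
the implementing machine:

    import qualified Data.Map as Map

    -- The store: a mapping from variable names to values, standing in for
    -- the physical locations of the implementing machine.
    type Store = Map.Map String Int

    data Expr = Lit Int | Var String | Add Expr Expr

    data Cmd
      = Assign String Expr   -- x := E
      | Seq Cmd Cmd          -- execute commands in sequence

    eval :: Store -> Expr -> Int
    eval _ (Lit n)   = n
    eval s (Var x)   = Map.findWithDefault 0 x s
    eval s (Add a b) = eval s a + eval s b

    -- Executing "x := E": compute the value of E and place it in the
    -- location named x, replacing any existing value; sequencing simply
    -- threads the resulting store into the next command.
    exec :: Store -> Cmd -> Store
    exec s (Assign x e) = Map.insert x (eval s e) s
    exec s (Seq c1 c2)  = exec (exec s c1) c2

An actual structural description would go on to say how the store and
these operations are realized on a family of physical machines, but the
division of labor is the same: the semantics fixes the function, and the
implementation supplies the structure.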

Some have suggested that a physical implementation actually defines
the semantics of the language. Indeed, this is a common perspective in
the philosophy of computer science literature. We have already seen
that Rapaport (1999) sees implementation as a semantic interpretation.
Fetzer (1988) observes that programs have a different semantic
significance from theorems. In particular, he asserts:

…programs are supposed to possess a semantic significance that
theorems seem to lack. For the sequences of lines that compose a
program are intended to stand for operations and procedures that can
be performed by a machine, whereas the sequences of lines that
constitute a proof do not. (Fetzer 1988: 1059)

This seems to say that the physical properties of the implementation
contribute to the meaning of programs written in the language. Colburn
(2000) is more explicit when he writes that the simple assignment
statement A := 13 × 74 is semantically ambiguous between
something like the abstract account we have given, and the physical
one given as:

physical memory location A receives the value of physically
computing 13 times 74. (Colburn 2000: 134)

The phrase “physically computing” seems to imply that what
the physical machine actually does is semantically significant, i.e.,
what it actually does determines or contributes to the meaning of
assignment. Is this to be taken to imply that to fix what assignment
means we have to carry out a physical computation? However, if an
actual physical machine is taken to contribute in any way to the
meaning of the constructs of the language, then their meaning is
dependent upon the contingencies of the physical device. In
particular, the meaning of the simple assignment statement may well
vary with the physical state of the device and with contingencies that
have nothing to do with the semantics of the language, e.g., power cuts.
Under this interpretation, multiplication does not mean multiplication
but rather what the physical machine actually does when it simulates
multiplication. This criticism parallels that for causal theories of
function
(§2.4).

5. The Ontology of Programs

The nature of programs has been the subject of a good amount of
philosophical and legal reflection. What kinds of things are they? Are
they abstract (perhaps mathematical or symbolic) objects or concrete
physical things? Indeed, the legal literature even contains a
suggestion that programs constitute a new kind of (legal) entity
(§10.1).

The exact nature of computer programs is difficult to determine. On
the one hand, they are related to technological matters. On the other
hand, they can hardly be compared to the usual type of inventions.
They involve neither processes of a physical nature, nor physical
products, but rather methods of organization and administration. They
are thus reminiscent of literary works even though they are addressed
to machines. Neither industrial property law nor copyright law in
their traditional roles seems to be the appropriate instrument for the
protection of programs, because both protections were designed for and
used to protect very different types of creations. The unique nature
of the computer program has led to broad support for the creation of
sui generis legislation. (Loewenheim 1989: 1)

This highlights the curious legal status of programs. Indeed, it
raises tricky ontological questions about the nature of programs and
software: they appear to be abstract, even mathematical objects with a
complex structure, and yet they are aimed at physical devices. In this
section, we examine some of the philosophical issues that have arisen
regarding the nature of programs and software.

5.1 Programs as Mathematical Objects

What is the content of the claim that programs are mathematical
objects? In the legal literature, the debate seems to center on the
notion that programs are symbolic objects that can be formally
manipulated (Groklaw 2011, 2012—see
Other Internet Resources).
Indeed, there is a branch of theoretical computer science called
formal language theory that treats grammars as objects of mathematical
study (Hopcroft & Ullman 1969). While this does give some
substance to the claim, it is not the most important sense in which
programs are mathematical. That sense pertains to their semantics, where
programming languages are taken to be axiomatic theories
(§4.2).
This perspective locates programs as elements in a theory of
computation (Turner 2007, 2010).

5.2 Programs as Technical Artifacts

While agreeing that programs have an abstract guise, much of the
philosophical literature (e.g., Colburn 2000; Moor 1978) has it that
they also possess a concrete physical manifestation that facilitates
their use as the cause of computations in physical machines.
For example, Moor observes:

It is important to remember that computer programs can be understood
on the physical level as well as the symbolic level. The programming
of early digital computers was commonly done by plugging in wires and
throwing switches. Some analogue computers are still programmed in
this way. The resulting programs are clearly as physical and as much a
part of the computer system as any other part. Today digital machines
usually store a program internally to speed up the execution of the
program. A program in such a form is certainly physical and part of
the computer system. (Moor 1978: 215)

The following is of more recent origin, and more explicitly
articulates the duality thesis in its claim that software has both
abstract and physical guises.

Many philosophers and computer scientists share the intuition that
software has a dual nature (Moor 1978; Colburn 2000). It appears that
software is both an algorithm, a set of instructions, and a concrete
object or a physical causal process. (Irmak 2012: 3)

5.3 Abstract and Concrete

Anyone persuaded by the abstract-physical duality for programs is
under an obligation to say something about the relationship between
these two forms of existence. This is the major philosophical concern
and parallels the question for technical artifacts in general.

One immediate suggestion is that programs, as textual objects,
cause mechanical processes. The idea seems to be that somehow
the textual object physically causes the mechanical process. Colburn
(2000, 1999) denies that the symbolic text itself has any causal
effect; it is its physical manifestation, the thing on the disk, which
has such an effect. For him, software is a concrete
abstraction
that has a medium of description (the text, the
abstraction) and a medium of execution (e.g., a concrete
implementation in semiconductors). The duality is unpacked in a way
that is parallel to that found in the philosophy of mind (see the
entry on
dualism),
where the physical device is taken as a semantic interpretation of
the abstract one. This is close to the perspective of Rapaport (1999).
However, we have already alluded to problems with this approach
(§3.3).

A slightly different account can be found in Fetzer (1988). He
suggests that abstract programs are something like scientific
theories: A program is to be seen as a theory of its physical
implementation—programs as causal models. In
particular, the simple assignment statement and its semantics is a
theory about a physical store and how it behaves. If this is right,
and a program turns out not to be an accurate description of the
physical device that is its implementation, the program must be
changed: If the theory that is enshrined in the program does not fit
the physical device, it should be changed. But this does not seem to
be what happens in practice. While the program may have to be changed,
this is not instigated by any lack of accord with its physical
realization, but by an independent abstract semantics for assignment.
If this is correct, the abstract semantics appears not to be a theory
of its concrete implementation.

The alternative picture has it that the abstract program (determined
by its semantics) provides the function of the artifact, and the
physical artifact, or rather its description, provides its structure.
It is the function of the program, expressed in its semantics, that
fixes the physical implementation and provides the criteria of
correctness and malfunction. Programs as computational artifacts have
both an abstract aspect that somehow fixes what they do and a physical
aspect that enables them to cause physical things to happen.

5.4 Programs and Specifications

What is the difference between programming and specification? One
suggestion is that a specification tells us what it is to do without
actually saying how to do it. For instance, the following is a
specification written in VDM (Jones 1990 [1986]).

SQRTP (x: real, y: real)

  • Pre: x ≥ 0
  • Post: y * y = x and y ≥ 0

This is a specification of a square root function with the
precondition that the input is non-negative. It is a functional
description in that it says what it must do without saying how it is
to be achieved. One way to unpack this what/how
difference is in terms of the descriptive-imperative distinction.
Programs are imperative and say how to achieve the goal, whereas
specifications are declarative and only describe the input/output
behavior of the intended program. Certainly, in the imperative
programming paradigm, this seems to capture a substantive difference.
But it is not appropriate for all. For example, logic and functional
programming languages (Thompson 2011) are not obviously governed by
it. The problem is that programming languages have evolved to a point
where this way of describing the distinction is not marked by the
style or paradigm of the programming language. Indeed, in practice, a
program written in Haskell (Thompson 2011) could act as a
specification for a program written in C (Huss 1997,
Other Internet Resources).
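
For instance (a sketch of ours, not the example given by Huss 1997), a
declarative Haskell definition of the integer square root can serve as
the property specification against which an efficient program written in
C is judged:

    -- A declarative "what" description: the integer square root of x is
    -- the largest y with y * y <= x. Inefficient, but it says nothing
    -- about how the result is to be computed by a practical program.
    isqrtSpec :: Integer -> Integer
    isqrtSpec x
      | x < 0     = error "precondition violated: x must be non-negative"
      | otherwise = last [y | y <- [0 .. x], y * y <= x]

    -- The correctness criterion for any candidate implementation `impl`
    -- (for example, an efficient C routine called through a foreign
    -- function interface): it must agree with the specification on every
    -- valid input.
    correctAgainstSpec :: (Integer -> Integer) -> Integer -> Bool
    correctAgainstSpec impl x = impl x == isqrtSpec x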

A more fundamental difference concerns the direction of governance,
i.e., which is the normative partner in the relationship and which is
the submissive one. In the case of the specification of the square
root function, the artifact is the linguistic program. When the
program is taken as the specification, the artifact is the next level
of code, and so on down to a concrete implementation. This is in
accord with Rapaport (2005) and his notion of the asymmetry of
implementation.

6. Verification

One of the crucial parts of the software development process is
verification: After computational artifacts have been specified,
instantiated into some high-level programming language, and
implemented in hardware, developers are involved in the activities of
evaluating whether those artifacts are correct with respect to the
provided program specifications. Correctness evaluation methods can be
roughly sorted into two main groups: formal verification and testing.
Formal verification (Monin 2003) involves a
mathematical proof of correctness; software testing (Ammann &
Offutt 2008) instead involves running the implemented program and
observing whether its executions comply with the specifications
laid down in advance for the program's behavior. In many
practical cases, formal methods and testing are used together for
verification purposes (see for instance Callahan et al. 1996).

6.1 Models and Theories

Formal verification methods include the construction of
representations of the piece of software to be verified
against some set of program specifications. In theorem
proving
(see Van Leeuwen 1990), programs are represented in terms
of axiomatic systems and a set of rules of inference for
programs’ transition conditions; a proof of correctness is
provided by deriving opportunely formalized specifications from those
set of axioms. In model checking (Baier & Katoen 2008), a
program is represented in terms of some state transition system, the
program’s property specifications are represented in terms of
temporal logic formulas (Kröger & Merz 2008), and a proof of
correctness is achieved by a depth-first search algorithm that checks
whether those temporal logic formulas hold of the state transition
system.

Axiomatic systems and state transition systems used to evaluate
whether the executions of the represented computational artifacts
conform or do not conform with the behaviors prescribed by their
specifications can be understood as theories of the
represented systems in that they are used to predict and explain the
future behaviors of those systems. In particular, state transition
systems in model checking can be compared, on a methodological basis,
with scientific models in empirical sciences (Angius & Tamburrini
2011). For instance, Kripke Structures are in compliance with
Suppes’ (1960) definition of scientific models as set-theoretic
structures establishing proper mapping relations with models of data
collected by means of experiments on the target empirical system (see
also the entry on
models in science).
A Kripke Structure M = (S, S₀, R, L) is a set-theoretic
model composed of a non-empty set of states S, together with a
non-empty set of initial states S₀, a total state transition
relation R ⊆ S × S, and a function L: S → 2^AP
labeling each state in S with subsets of a set
of atomic propositions AP.
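
The definition can be rendered directly in code. The following Haskell
sketch (ours; it is not drawn from Baier & Katoen 2008, and a real model
checker handles full temporal logic rather than the simple invariant
shown) represents a Kripke Structure and checks, by a depth-first search
over the transition relation, that an atomic proposition holds in every
reachable state:

    import qualified Data.Set as Set

    -- A Kripke Structure M = (S, S0, R, L) with states as Ints and atomic
    -- propositions as Strings.
    data Kripke = Kripke
      { states  :: [Int]
      , initial :: [Int]                 -- S0
      , trans   :: Int -> [Int]          -- R, given as a successor function
      , label   :: Int -> Set.Set String -- L : S -> 2^AP
      }

    -- Check the invariant "p holds in every reachable state" (the temporal
    -- logic formula AG p) by depth-first search from the initial states.
    checkInvariant :: Kripke -> String -> Bool
    checkInvariant m p = go Set.empty (initial m)
      where
        go _ [] = True
        go visited (s : rest)
          | s `Set.member` visited         = go visited rest
          | not (p `Set.member` label m s) = False
          | otherwise = go (Set.insert s visited) (trans m s ++ rest)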

Kripke Structures and other state transition systems utilized in
formal verification methods are often called system specifications.
They are distinguished from common specifications, also called
property specifications. The latter specify some required behavioral
properties the artifact to be encoded must instantiate, while the
former specify (in principle) all potential executions of an already
encoded program, thus allowing for algorithmic checks on its traces
(Clarke et al. 1999). In order to achieve this goal, system
specifications are to be considered as abductive structures
hypothesizing the set of potential executions of a target
computational artifact on the basis of the program’s code and
the allowed state transitions (Angius 2013b). Indeed, once some
temporal logic formula has been checked to hold or not to hold of the
modeled Kripke Structure, the represented program is empirically
tested against the behavioral property corresponding to the checked
formula to evaluate whether the model-hypothesis is an adequate
representation of the target artifact. Accordingly, property
specifications and system specifications differ also in their
intentional stance (Turner 2011): Property specifications are
requirements on the program to be encoded, while system specifications
are (hypothetical) descriptions of the encoded program. The
descriptive and abductive character of state transition systems in
model checking is an additional and essential feature putting state
transition systems on a par with scientific models.

6.2 Testing and Experiments

The so-called “agile methods” in software development make
extensive use of software testing to evaluate the dependability of the
implemented computational artifacts. Testing is the more
“empirical” process of launching a program and observing
its executions to evaluate whether they comply or do not comply with
the supplied property specifications. Philosophers and
philosophically-minded computer scientists have analyzed software
testing techniques in the light of traditional methodological
approaches to scientific discovery (Snelting 1998; Gagliardi 2007;
Northover et al. 2008; Angius 2014) and questioned whether software
tests can be acknowledged as scientific experiments
evaluating the correctness of programs (Schiaffonati & Verdicchio
2014; Schiaffonati 2015; Tedre 2015).

Dijkstra’s well-known dictum “Program testing can be used
to show the presence of bugs, but never to show their absence”
(Dijkstra 1970: 7) introduces Popper’s (1959) principle of
falsifiability into computer science (Snelting 1998). Testing a
program against a property specification laid down in advance may,
over a given interval of time, exhibit some failures; but if no
failure is observed while the program runs, one cannot conclude that
the program is correct. An incorrect execution might be observed at
the very next test of the system. The reason is that testers can only
launch the program with a finite subset of the potential
program’s input set and for a finite interval of time;
accordingly, not all potential executions of the artifact to be tested
can be empirically observed. For this reason, the aim of software
testing is to detect programs’ faults, not to assure their absence
(Ammann & Offutt 2008: 11). A program is falsifiable in that tests
can reveal its faults (Northover et al. 2008). Given a
computational artifact and a property specification, a test is akin to
a scientific experiment which, by observing the system’s
behaviors, tries to falsify the hypothesis that the program is correct
with respect to the specification of interest.

However, one should be careful to note that other methodological and
epistemological traits characterizing scientific experiments are not
shared by software tests. A first methodological distinction can be
recognized in that a falsifying test leads to the revision of the
artifact, not of the hypothesis, as in the case of testing scientific
hypotheses. This is due to the difference in the intentional stance of
specifications and empirical hypotheses in science (Turner 2011).
Specifications are requirements whose violation demands program
revisions until the program becomes a correct instantiation of the
specifications.

Accordingly, the notion of scientific experiments, as it has been
traditionally examined by the philosophy of empirical sciences, needs
to be somehow “stretched” in order to be applied to
software testing activities (Schiaffonati 2015).
Theory-driven experiments, characterizing most of
experimental sciences, find no counterpart in actual computer science
practice. Indeed, if one excludes the cases wherein testing is
combined with formal methods, most experiments performed by software
engineers are rather explorative. An experiment is
explorative when it is aimed at “exploring”

the realm of possibilities pertaining to the functioning of an
artefact and its interaction with the environment in the absence of a
proper theory or theoretical background. (Schiaffonati 2015: 662)

Software testers often lack theoretical control over the experiments
they perform; rather, it is exploration of the behaviors of the
artifacts as they interact with users and environments that provides
testers with theoretical generalizations about the observed behaviors.
Explorative experiments in computer science are also characterized by
the fact that programs are often tested in a real-like environment
wherein testers play the role of users. However, it is an essential
feature of theory-driven experiments that experimenters do not take
part in the experiment to be carried out.

As a result, some software testing activities are closer to the
experimental activities one finds in the empirical sciences, while
others define a new kind of experiment belonging to the software
development process itself. Five types of experiment can be
distinguished in the process of specifying, implementing, and
evaluating computing artifacts (Tedre 2015). Feasibility experiments
are performed to evaluate whether an artifact of interest performs the
functions specified by users and stakeholders; trial experiments are
more specific experiments carried out to evaluate isolated
capabilities of the system given some set of initial conditions; field
experiments are performed in real environments rather than in
simulated ones; comparison experiments test similar artifacts,
instantiating the same function in different ways, to evaluate which
instantiation better performs the desired function in both real-like
and real environments; finally, controlled experiments are used to
appraise hypotheses advanced about the behaviors of the tested
artifact. Only controlled experiments are on a par with scientific
theory-driven experiments, in that they are carried out on the basis
of theoretical hypotheses under evaluation.

6.3 Explanation

A software test is considered successful when miscomputations are
detected (assuming that no computational artifact is 100% correct).
The next step is to find out what caused the execution to be
incorrect rather than correct, that is, to trace back the fault (more
familiarly known as a “bug”), before proceeding to the
debugging phase and then testing the system again. In other words, an
explanation of the observed miscomputation is to be
advanced.

Effort has been devoted to analyzing explanations in computer science
(Piccinini 2007; Piccinini & Craver 2011; Piccinini 2015; Angius
& Tamburrini forthcoming) in relation to the different models of
explanation elaborated in the philosophy of science. In particular,
computational explanations can be understood as a specific kind of
mechanistic explanation (Glennan 1996; Machamer et al. 2000;
Bechtel & Abrahamsen 2005), insofar as computing processes can be
analyzed as mechanisms (Piccinini 2007, 2015; see also the entry on
computation in physical systems).
A mechanism can be defined in terms of “entities and activities
organized such that they are productive of regular changes from start
or set-up to finish or termination condition” (Machamer et al.
2000: 3), in other words, as a set of components, their functional
capabilities, and their organization enabling them to bring about an
empirical phenomenon. And a mechanistic explanation of such a
phenomenon turns out to be the description of the mechanism that
brings about that phenomenon, that is, the description of the involved
components and functional organization. A computing mechanism is
defined as a mechanism whose functional organization brings about
computational processes. A computational process is to be understood
here, in general terms, as a manipulation of strings, leading from
input strings to output strings by means of operations on intermediate
strings.

Consider a processor executing an instruction. The involved process
can be understood as a mechanism whose components are state and
combinatory elements in the processor instantiating the functions
prescribed by the relevant hardware specifications (specifications for
registers, for the Arithmetic Logic Unit etc.), organized in such a
way that they are capable of carrying out the observed execution.
Accordingly, providing the description of such a mechanism or, in
other words, describing the functional organization of hardware
components, counts as advancing a mechanistic explanation of the
observed computation, such as the explanation of an operational
malfunction.
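
As a toy illustration of this idea (the registers, the instruction
format, and the ALU function below are invented for the example and
are not drawn from any particular processor), describing the following
components and their organization amounts to a miniature mechanistic
explanation of how an ADD instruction is carried out:

    # Toy computing mechanism: components (registers, an ALU) and their
    # organization bring about the execution of a single instruction.
    registers = {'r0': 2, 'r1': 3}

    def alu(op, a, b):
        # Combinatory component: computes the prescribed function of its inputs.
        return a + b if op == 'ADD' else a - b

    def execute(instruction):
        # Functional organization: fetch operands, apply the ALU, store the result.
        op, dst, src1, src2 = instruction
        registers[dst] = alu(op, registers[src1], registers[src2])

    execute(('ADD', 'r0', 'r0', 'r1'))
    print(registers['r0'])   # 5: the description above is a (toy) mechanism schema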

For every type of miscomputation defined in
§7.5,
a corresponding mechanistic explanation can be defined at the appropriate
level of abstraction and with respect to the set of specifications
characterizing that level of abstraction. Indeed, abstract
descriptions of mechanisms still supply one with a mechanistic
explanation in the form of a mechanism schema, defined as
“a truncated abstract description of a mechanism that can be
filled with descriptions of known component parts and
activities” (Machamer et al. 2000: 15). Consider, for instance,
the very common case in which a machine miscomputes by executing a
program containing syntax errors, called slips in
§7.5.
The computing machine is unable to correctly implement the functional
requirements provided by the program specifications. However, for
explanatory purposes, it would be redundant to explain the slip at the
hardware level of abstraction by advancing a detailed description of
the hardware components and their functional organization. A
satisfactory explanation may instead consist in showing that the
program’s code is not a correct instantiation of the provided program
specifications (Angius & Tamburrini forthcoming). In such cases, in
order to explain a miscomputation mechanistically, it may be
sufficient to provide a description of the incorrect program,
abstracting from the rest of the computing mechanism (Piccinini &
Craver 2011). Abstraction is a virtue not only in software development
and specification, but also in the explanation of computational
artifacts’ behaviors.

7. Correctness

One of the earliest philosophical disputes in computer science centers
upon the nature of program correctness. The overall dispute was set in
motion by two papers (De Millo et al. 1979; Fetzer 1988) and was
carried on in the discussion forum of the ACM (e.g., Ashenhurst 1989;
Technical Correspondence 1989). The pivotal issue derives from the
duality of programs, and what exactly is being claimed to be correct
relative to what. Presumably, if a program is taken to be a
mathematical thing, then it has only mathematical properties. But seen
as a technical artifact it has physical ones.

7.1 Mathematical Correctness

On the face of it, Hoare seems to be committed to what we shall call
the mathematical perspective, i.e., the view that correctness is a
mathematical affair: establishing that a program is correct
relative to a specification involves only a mathematical proof.

Computer programming is an exact science in that all the properties of
a program and all the consequences of executing it in any given
environment can, in principle, be found out from the text of the
program itself by means of purely deductive reasoning. (Hoare 1969:
576)

Consider our specification of a square root function. What does it
mean for a program \(P\) to satisfy it? Presumably, relative to its
abstract semantics, every program \(P\) carves out a relationship
\(R_P\) between its input and output, its extension. The correctness
condition insists that this relation satisfies the above
specification, i.e.,

  • \((C)\quad \forall x{:}\textit{Real}.\ \forall y{:}\textit{Real}\cdot
    x \ge 0 \rightarrow (R_P(x, y) \rightarrow y \ast y = x
    \textrm{ and } y \ge 0)\)

This demands that the abstract program, determined by the semantic
interpretation of its language, satisfies the specification. The
statement \(C\) is a mathematical assertion between two abstract
objects and so, in principle, the correctness may be established
mathematically. A mathematical relationship of this kind is surely
what Hoare has in mind, and in terms of the abstract guise of the
program, there is little to disagree with. However, there are several
concerns here. One has to do with the complexity of modern software
(the complexity challenge), and the other with the nature of
physical correctness (the empirical challenge).
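
By way of illustration, the following sketch (the routine sqrt_p, the
tolerance eps, and the sampled inputs are all assumptions made for the
example) implements a candidate square root program and checks the
correctness relation only approximately and only on a handful of
inputs; establishing the condition \((C)\) for all inputs would
instead require a mathematical proof of the kind Hoare envisages.

    # Hypothetical program P: a Newton-style square-root routine.
    def sqrt_p(x):
        guess = x if x > 1 else 1.0
        for _ in range(50):                 # a fixed number of refinement steps
            guess = 0.5 * (guess + x / guess)
        return guess

    # An approximate, empirical stand-in for the correctness condition (C):
    # floating-point values cannot satisfy y * y = x exactly, so a tolerance is used.
    def satisfies_spec(x, y, eps=1e-9):
        return y >= 0 and abs(y * y - x) <= eps * max(1.0, x)

    for x in [0.0, 1.0, 2.0, 10.0, 12345.678]:
        assert satisfies_spec(x, sqrt_p(x)), f"specification violated at {x}"

Passing such checks is empirical evidence at best; it is not the
mathematical relationship \((C)\) between two abstract objects.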

7.2 The Complexity Challenge

Programmers are always surrounded by complexity; we cannot avoid it.
Our applications are complex because we are ambitious to use our
computers in ever more sophisticated ways. Programming is complex
because of the large number of conflicting objectives for each of our
programming projects. If our basic tool, the language in which we
design and code our programs, is also complicated, the language itself
becomes part of the problem rather than part of its solution. (Hoare
1981: 10)

Within the appropriate mathematical framework, proving the correctness
of any linguistic program, relative to its specification, is
theoretically possible. However, real software is complex, and in such
cases proving correctness might be practically infeasible. One might
attempt to gain some ground by advocating that classical correctness
proofs should be carried out by a theorem prover, or at least one
should be employed somewhere in the process. However, the latter must
itself be proven correct. While this may reduce the correctness
problem to that of a single program, it still means that we are left
with the correctness problem for a large program. Moreover, in itself
this does not completely solve the problem. For both theoretical and
practical reasons, in practice, human involvement is not completely
eliminated. In most cases, proofs are constructed by hand with the aid
of interactive proof systems. Even so, a rigorous proof of correctness
is rarely forthcoming. One might only require that individual
correctness proofs be checked by a computer rather than a human. But
of course the proof-checker is itself in need of checking. Arkoudas
and Bringsjord (2007) argue that since there is only one correctness
proof that needs to be checked, namely that of the proof checker
itself, the possibility of mistakes is significantly reduced.

This is very much a practical issue. However, there is a deeper
conceptual one. Are proofs of program correctness genuine mathematical
proofs, i.e., are such proofs on a par with standard mathematical
ones? De Millo et al. (1979) claim that correctness proofs are unlike
proofs in mathematics. The latter are conceptually interesting,
compelling, and attract the attention of other mathematicians who want
to study and build upon them. This argument parallels the graspability
arguments made in the philosophy of mathematics. Proofs that are long,
cumbersome, and uninteresting cannot be the bearers of the kind of
certainty that is attributed to standard mathematical proofs. The
nature of the knowledge obtained from correctness proofs is said to be
different to the knowledge that may be gleaned from standard proofs in
mathematics. In order to be taken in, proofs must be graspable.
Indeed, Wittgenstein would have it that proofs that are not graspable
cannot act as norms, and so are not mathematical proofs (Wittgenstein
1956).

Mathematical proofs such as the proof of Gödel’s
incompleteness theorem are also long and complicated. But they can be
grasped. What renders such complicated proofs transparent,
interesting, and graspable involves the use of modularity techniques
(e.g., lemmas), and the use of abstraction in the act of mathematical
creation. The introduction of new concepts enables a proof to be
constructed gradually, thereby making the proofs surveyable.
Mathematics progresses by inventing new mathematical concepts that
facilitate the construction of proofs that would be far more complex
and even impossible without them. Mathematics is not just about proof;
it also involves the abstraction and creation of new concepts and
notation. In contrast, formal correctness proofs do not seem to
involve the creation of new concepts and notations. While computer
science does involve abstraction, it does not do so in quite the same way.

One way of addressing the complexity problem is to change the nature
of the game. The classical notion of correctness links the formal
specification of a program to its formal semantic representation. It is
at one end of the mathematical spectrum. However, chains of
specification-artifact pairings, positioned at varying degrees of
abstraction, are governed by different notions of correctness. For
example, in the object-oriented approach, the connection between a UML
specification and a Java program is little more than type checking.
The correctness criteria involve structural similarities and
identities (Gamma et al. 1994). Here, we do not demand that one
infinite mathematical relation is extensionally governed by another.
At higher levels of abstraction, we may have only connections of
structure. These are still mathematical relationships. However, such
methods, while they involve less work, and may even be automatically
verified, establish much less.
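
A rough analogue of this weaker, structural notion of correctness can
be sketched as follows (the Stack protocol and the classes are
invented for illustration, and Python stands in here for the UML/Java
setting mentioned above): the implementation below has exactly the
right method names and types, so it passes a structural check, yet it
is behaviorally wrong.

    from typing import Protocol

    # A structural "specification": only the names and types of operations are fixed.
    class Stack(Protocol):
        def push(self, item: int) -> None: ...
        def pop(self) -> int: ...

    # Structurally conformant (right signatures) but behaviorally incorrect: not LIFO.
    class BrokenStack:
        def __init__(self) -> None:
            self._items: list[int] = []
        def push(self, item: int) -> None:
            self._items.append(item)
        def pop(self) -> int:
            return self._items.pop(0)      # removes the oldest item, not the newest

    def client(s: Stack) -> int:
        s.push(1)
        s.push(2)
        return s.pop()                     # a correct LIFO stack would return 2

    print(client(BrokenStack()))           # prints 1: the structural check establishes much less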

7.3 The Empirical Challenge

The notion of program verification appears to trade upon an
equivocation. Algorithms, as logical structures, are appropriate
subjects for deductive verification. Programs, as causal models of
those structures, are not. The success of program verification as a
generally applicable and completely reliable method for guaranteeing
program performance is not even a theoretical possibility. (Fetzer
1988: 1)

In fact, this issue is alluded to by Hoare in the very text that
Fetzer employs to characterize Hoare’s mathematical stance on
correctness.

When the correctness of a program, its compiler, and the hardware of
the computer have all been established with mathematical certainty, it
will be possible to place great reliance on the results of the
program, and predict their properties with a confidence limited only
by the reliability of the electronics. (Hoare 1969: 579)

All parties seemed to agree that computational systems are at bottom
physical systems, and that some unpredictable behavior may arise from
their causal connections. Indeed, even when theorem provers and proof
checkers are used, the results still only yield empirical knowledge. A
proof checker is a program running on a physical machine. It is a
program that has been implemented and its results depend upon a
physical computation. Consequently, at some level, we shall need to
show that some physical machine operations meet their specification.
Testing and verification seem only to yield empirical evidence.
Indeed, the complexity of program proving has led programmers to take
physical testing to be evidence that the abstract program meets its
specification. Here, the assumption is that the underlying
implementation is correct. But prima facie, it is only
empirical evidence.

In apparent contrast, Burge (1998) argues that knowledge of such
computer proofs can be taken as a priori knowledge. According
to Burge, a priori knowledge does not depend for its
justification on any sensory experience. However, he allows that a
priori knowledge may depend for its possibility on sensory
experience; e.g., knowledge that red is a color may be a priori
even though having this knowledge requires having sensory
experience of red in order to have the concepts required to even
formulate the idea. If correct, this closes the gap between a
priori and a posteriori claims about computer-assisted
correctness proofs, but only by redrawing the boundary between a
priori and a posteriori knowledge so that some empirical
assertions can fall into the former category. For more discussion on
the nature of the use of computers in mathematical proofs, see Hales
2008; Harrison 2008; Tymoczko 1979, 1980.

Unfortunately, practice often does not even get this far. Generally,
software engineers do not construct classical correctness proofs by
hand or even automatically. Testing of software against its
specification on suites of test cases is the best that is normally
achieved. Of course, this never yields correctness in the mathematical
sense. Test cases can never be exhaustive (Dijkstra 1974).
Furthermore, there is a hidden assumption that the underlying
implementation is correct: at best, these empirical methods tell us
something about the whole system. Indeed, the size of the state space
of a system may be so large and complex that even direct testing is
infeasible. In practice, the construction of mathematical models that
approximate the behavior of complex systems is the best we can do.

The whole correctness debate carried out in the forum of the ACM
(e.g., Ashenhurst 1989; Technical Correspondence 1989) is put into
some perspective when programs are considered as technical artifacts.
But this leaves one further topic: When we have reached physical
structure, what notion of correctness operates?

7.4 Physical Correctness

What is it for a physical device to meet its specification? What is it
for it to be a correct physical implementation? The starting
point for much contemporary analysis is often referred to as the
simple mapping account.

According to the simple mapping account, a physical system \(S\)
performs as a correct implementation of an abstract specification
\(C\) just in case (i) there is a mapping from the states ascribed to
\(S\) by a physical description to the states defined by the abstract
specification \(C\), such that (ii) the state transitions between the
physical states mirror the state transitions between the abstract
states. Clause (ii) requires that for any abstract state transition of
the form \(s_1 \rightarrow s_2\), if the system is in the physical
state that maps onto \(s_1\), it then goes into the physical state
that maps onto \(s_2\).

To illustrate what the simple mapping account amounts to, we consider
the example of our abstract machine
(§2.1)
where we employ an instance of the machine that has only two
locations, \(l\) and \(r\), and two possible values, 0 and 1.
Consequently, we have only four possible states: (0, 0), (0, 1), (1,
1), and (1, 0). The computation table for the update operation may be
easily computed by hand, and takes the form of a table with
input-output pairings. For example, Update\((r,1)\) sends the
state (0, 0) to the state (0, 1). The simple mapping account only demands
that the physical system can be mapped onto the abstract one in such a
way that the abstract state transitions are duplicated in the physical
version.
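
A small sketch may make the example concrete (it assumes, for
illustration, that Update(loc, v) sets location loc to the value v,
which matches the example transition given in the text): enumerating
the four abstract states and applying Update((r, 1)) yields the
input-output table that, on the simple mapping account, a physical
system need only mirror in order to count as an implementation.

    from itertools import product

    # Assumed reading of the update operation: Update(loc, v) sets location loc to v.
    def update(state, loc, v):
        l, r = state
        return (v, r) if loc == 'l' else (l, v)

    # The four abstract states of the two-location, two-value machine.
    states = list(product([0, 1], repeat=2))

    # The computation table for Update((r, 1)): input-output pairings.
    for s in states:
        print(s, '->', update(s, 'r', 1))   # e.g. (0, 0) -> (0, 1)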

Unfortunately, such a device is easy to come by: Almost anything with
enough things to play the role of the physical states will satisfy
this quite weak demand of what it is to be an implementation. For
example, any collection of colored stones arranged as the update table
will be taken to implement the table. The simple mapping account only
demands extensional agreement. It is a de facto demand. This leads to
a form of pancomputationalism where almost any physical system
implements any computation.

The danger of pancomputationalism has driven some authors (D.J.
Chalmers 1996; Egan 1992; Sprevak 2012) to attempt to provide an
account of implementation that somehow restricts the class of possible
interpretations. In particular, certain authors (D.J. Chalmers 1996;
Copeland 1996) seek to impose causal constraints on such
interpretations. One suggestion is that we replace the material
conditional (if the system is in the physical state \(S_1\)
…) by a counterfactual one. In contrast, the semantic account
insists that a computation must be associated with a semantic aspect
which specifies what the computation is to achieve (Sprevak 2012). For
example, a physical device could be interpreted as an AND gate or an
OR gate. It would seem to depend upon what we take to be the
definition of the device. Without such a definition, there is no way
of fixing what the artifact is. The syntactic account demands that
only physical
states that qualify as syntactic may be mapped onto computational
descriptions, thereby qualifying as computational states. If a state
lacks syntactic structure, it is not computational. Of course, what
remains to be seen is what counts as a syntactic state. A good
overview can be found in Piccinini 2015 (see also the entry on
computation in physical systems).

Turner (2012) argues that abstract structure and physical structure
are linked, not just by being in agreement, but also by the intention
to take the former as having normative governance over the latter. On
this account, computations are technical artifacts whose function is
fixed by an abstract specification. This relationship is neither that
of theory to physical object nor that of syntactic thing to semantic
interpretation.

But there is an ambiguity here that is reflected in the debate between
those who argue for semantic interpretation (Sprevak 2012), and those
who argue against it (Piccinini 2008). Consider programs. What is the
function of a program? Is it fixed by its semantic interpretation, or
is it fixed by its specification? The ambiguity here concerns the
function of a program as part of a programming language or its role as
part of a larger system. As a program in a language, it is fixed by
the semantics of the language as a whole. However, to use a program as
part of a larger system, one only needs to know what it does. The
function of the program, as part of a larger system, is given by its
specification. When a computation is picked out by a specification,
exactly how the program achieves its specification is irrelevant to
the system designer. The specification acts as an interface,
and the level of abstraction employed by the system designer is
central.

7.5 Miscomputations

It follows from what has been said so far that correctness of
implemented programs does not automatically establish the
well-functioning of a computational artifact. Turing (1950) already
distinguished between errors of functioning and errors of
conclusion. The former are caused by a faulty implementation that
is unable to execute the instructions of some high-level language
is unable to execute the instructions of some high-level language
program. Errors of conclusion characterize correct abstract machines
that nonetheless fail to carry out the tasks they were supposed to
accomplish. This may happen in those cases in which the specifications
a program is correctly instantiating do not properly express
users’ requirements on such a program. In both cases, machines
implementing correct programs can still be said to miscompute.

Turing’s distinction between errors of functioning and errors of
conclusion has been expanded into a complete taxonomy of
miscomputations (Fresco & Primiero 2013). The provided
classification is established on the basis of the many different
levels of abstraction one may identify in the software development
process. The functional specification level refers to the functional
requirements a computational artifact should fulfill and which are
advanced by users, companies, software architects, or other general
stakeholders expressing constraints on the allowed behaviors of the
system to be realized. At the design specification level, those
requirements are more formally expressed in terms of a system design
description detailing the system’s states and the conditions
allowing for transitions among those states. A specification at the
design specification level is, in its turn, instantiated in a proper
algorithm, usually expressed in some high-level programming language, at the
algorithm design level. At the algorithm implementation level,
algorithms can be implemented either in software, by means of assembly
language and machine code instructions, or directly in hardware, the
latter being the case for many special purpose machines. Finally, the
algorithm execution level refers to runtime executions.

Errors can be conceptual, material, and
performable. Conceptual errors violate validity conditions
requiring consistency for specifications expressed in propositional
conjunctive normal form; material errors violate the correctness
requirements of programs with respect to the set of their
specifications; and performable errors arise when physical constraints
are breached by some faulty implementing hardware.

Performable errors clearly emerge only at the algorithm execution
level, and they correspond to Turing’s (1950) errors of
functioning, also called operational malfunctions. Conceptual
and material errors may arise at any level of abstraction from
functional specification level down to the algorithm implementation
level. Conceptual errors engender mistakes, while material
errors can induce failures. For instance, a mistake at the
functional specification level consists of an inconsistent set of
requirements, or at the algorithm implementation level it may
correspond to an invalid hardware design (such as in the choice of the
logic gates for the truth-functional connectives). Failures
occurring at the design specification level may be due to a design
that is incomplete with respect to the set of functional
requirements expressed at the functional specification level, while a
failure at the algorithm design level occurs in those frequent cases
in which a program is found not to fulfill its specifications. Beyond
mistakes, failures, and operational malfunctions, slips are a
source of miscomputations at the algorithm implementation level. Slips
may be conceptual or material errors due to, respectively, a syntactic
or a semantic flaw in the software implementation of algorithms.
Conceptual slips appear in all those cases in which the syntactical
rules of the programming languages are violated; material slips
involve the violation of the semantic rules of programming languages,
such as when a variable is used but not initialized.
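
The distinction can be illustrated with two small (invented) Python
fragments, bearing in mind that in an interpreted language a material
slip surfaces only at runtime, whereas a statically checked language
might reject it before execution:

    # Conceptual slip: a violation of the language's syntactic rules.
    # The following fragment does not even parse (the colon is missing):
    #
    #     def add(x, y)
    #         return x + y

    # Material slip: syntactically well formed, but a semantic rule is violated,
    # here by using the variable 'acc' before it has been initialized.
    def total(values):
        for v in values:
            acc = acc + v          # error: 'acc' is used but never initialized
        return acc

    try:
        total([1, 2, 3])
    except NameError as error:     # UnboundLocalError is a kind of NameError
        print("material slip detected at runtime:", error)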

Abstract machines […] are incapable of errors of functioning.
In this sense we can truly say that “machines can never make
mistakes”. Errors of conclusion can only arise when some meaning
is attached to the output signals from the machines. (Turing 1950:
449)

On the basis of Turing’s remark, a distinction can be made
between dysfunctions and misfunctions of technical
artifacts (Floridi, Fresco, & Primiero 2015). Software can only
misfunction but cannot ever dysfunction. An artifact token
dysfunctions when it is not able to perform the task(s) it was
designed for; and an artifact token misfunctions in case it is able to
perform the required task(s) but is prone to manifest some undesired
side-effects.

Software development is characterized by more levels of abstraction
than one can find in any other artifact’s production cycle.
The production of typical artifacts involves only the functional
specification and design specification levels; after design,
technical artifacts are physically implemented. As seen above,
software development is also characterized by the algorithm
implementation level, that is, the designed algorithm has to be
instantiated in some high-level language program before hardware
implementation. An artifact token can dysfunction in case the physical
implementation fails to satisfy functional specifications or design
specifications. Dysfunctions only apply to single tokens since a token
dysfunctions in that it does not behave as the other tokens of the
same type do with respect to the implemented functions. For this
reason, dysfunctions do not apply to functional specification level
and design specification level. By contrast, both artifact types
and tokens can misfunction, since misfunctions do not depend on
comparisons with tokens of the same type being able to perform some
implemented function or not. Misfunction of tokens usually depends on
the dysfunction of some other component, while misfunction of types is
often due to poor design.

A software token cannot dysfunction, because all tokens of a given
type implement functions specified at functional specification level
and design specification level in the very same way. This is due to
the fact that those functions are implemented at algorithm
implementation level before being performed at the algorithm execution
level; in case of correct implementation, all tokens will behave
correctly at the algorithm execution
level (provided that no operational malfunction occurs).
For the very same reason, software tokens cannot misfunction, since
they are identical implementations of the same design and specifications
at the algorithm implementation level. Only software types can misfunction
in case of poor design; misfunctioning software types are able to
correctly perform their functions but may also produce some undesired
side-effect.

8. Abstraction

Abstraction facilitates computer science. Without it we would not have
progressed from the programming of numerical algorithms to the
software sophistication of air traffic control systems, interactive
proof development frameworks, and computer games. It is manifested in
the rich type structure of contemporary programming and specification
languages, and underpins the design of these languages with their
built-in mechanisms of abstraction. It has driven the invention of
notions such as polymorphism, data abstraction, classes, schema,
design patterns, and inheritance. But what is the nature of
abstraction in computer science? Is there just one form of it? Is it
the same notion that we find in mathematics?

8.1 Abstraction in Computer Science

Computer science abstraction takes many different forms. We shall not
attempt to describe these in any systematic way here. However, Goguen
(Goguen & Burstall 1985) describes some of this variety, of which
the following examples are instances.

One kind involves the idea of repeated code: A program text, possibly
with a parameter, is given a name (procedural abstraction).
In Skemp’s terms, the procedure brings a new concept into
existence, where the similarity of structure is the common code.
Formally, this is the abstraction of the lambda calculus (see the
entry on the
lambda calculus).
The parameter might even be a type, and this leads to the various
mechanisms of polymorphism, which may be formalized in mathematical
theories such as the second order lambda calculus (Hankin 2004).
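
A minimal sketch of both ideas (the function twice and its arguments
are invented for illustration): a repeated pattern of code is named
and parameterized, and, since the parameter can in effect range over
types, the same abstraction is reused at different types.

    from typing import Callable, TypeVar

    T = TypeVar('T')

    # Procedural abstraction: a recurring pattern of code is given a name and parameters.
    def twice(f: Callable[[T], T], x: T) -> T:
        return f(f(x))

    # Parametric polymorphism: the same abstraction applied at different types.
    print(twice(lambda n: n + 1, 0))         # 2       (int -> int)
    print(twice(lambda s: s + "!", "hi"))    # 'hi!!'  (str -> str)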

Recursion is an early example of operation or mechanism abstraction:
It abstracts away from the mechanisms of the underlying machine. In
turn, this facilitates the solution of complex problems without having
to be aware of the operations of the machine. For example, recursion
is implemented in devices such as stacks, but in principle the user of
recursion does not need to know this.
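
For instance (a deliberately trivial sketch), a recursive definition
can be written without any mention of the stack that implements it:

    # The definition mirrors the mathematical one; the machine's call stack,
    # which makes the recursion possible, remains entirely implicit.
    def factorial(n: int) -> int:
        return 1 if n == 0 else n * factorial(n - 1)

    print(factorial(5))   # 120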

The type structure of a programming or specification language
determines the ontology of the language: the kinds of entity that we
have at our disposal for representation and problem solving. To a
large extent, types determine the level of abstraction of the
language. A rich set of type constructors provides an expressive
system of representation. Abstract and recursive types are common
examples.

In object-oriented design, patterns (Gamma et al. 1994) are abstracted
from the common structures that are found in software systems. Here,
abstraction is the means of interfacing: It dissociates the
implementation of an object from its specification. For example,
abstract classes act as interfaces by providing nothing but the type
structure of their methods.
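
A small sketch of this use of abstraction (Shape, Square, and report
are invented names): the abstract class fixes only the type structure
of its methods, and clients are written against that interface rather
than against any particular implementation.

    from abc import ABC, abstractmethod

    # The abstract class acts as an interface: only method signatures are exposed.
    class Shape(ABC):
        @abstractmethod
        def area(self) -> float: ...

    # A concrete implementation hidden behind the interface.
    class Square(Shape):
        def __init__(self, side: float) -> None:
            self.side = side
        def area(self) -> float:
            return self.side * self.side

    def report(shape: Shape) -> None:
        # The client depends only on the abstract interface, not on Square.
        print(shape.area())

    report(Square(3.0))   # 9.0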

In addition, in mathematics (Mitchelmore & White 2004), computer
science, and philosophy (Floridi 2008) there are levels of
abstraction. Abstractions in mathematics are piled upon each other in
a never-ending search for more and more abstract concepts. Likewise,
computer science deals with the design and construction of artifacts
through a complex process involving sequences of artifacts of
decreasing levels of abstractness, until one arrives at the actual
physical device.

8.2 Information Hiding

In mathematics, once the abstraction is established, the physical
device is left behind. On this account, the abstraction is
self-contained: An abstract mathematical object takes its meaning only
from the system within which it is defined. The only constraint is
that the new objects be related to each other in a consistent system
that can be operated on without reference to their previous meaning.
Self-containment is paramount. There are no leaks.

Some argue that, in this respect at least, abstraction in computer
science is fundamentally different to abstraction in mathematics
(Colburn & Shute 2007). They claim that computational abstraction
must leave behind an implementation trace. Information is hidden but
not destroyed. Any details that are ignored at one level of
abstraction (e.g., programmers need not worry about the precise
location in memory associated with a particular variable) must not be
ignored by one of the lower levels of abstraction (e.g., the virtual
machine handles all memory allocations). At all levels, computational
artifacts crucially depend upon the existence of an implementation.
For example, even though classes hide the implementation details of
their methods, except for abstract ones, they must have
implementations. This is in keeping with the view that computational
artifacts have both function and structure: Computational abstractions
have both an abstract guise and an implementation.

However, matters are not quite so clean cut. While it is true that
abstraction in mathematics generates objects whose meaning is defined
by their relationships, the same is so in computer science. Abstract
notions could not have a normative function unless they had such
independent meanings. Moreover, certain forms of
constructive mathematics
resemble computer science in that there has to be an implementation
trace: one must always be able to recover implementation information
from proofs by reading between the lines. Of course, this is
not the case for classical mathematics.

Moreover, many would argue that mathematical abstractions do not
completely leave behind their physical roots.

One aspect of the usefulness of mathematics is the facility with which
calculations can be made: You do not need to exchange coins to
calculate your shopping bill, and you can simulate a rocket journey
without ever firing one. Increasingly powerful mathematical theories
(not to mention the computer) have led to steady gains in efficiency
and reliability. But a calculational facility would be useless if the
results did not predict reality. Predictions are successful to the
extent that mathematical models appropriate aspects of reality and
whether they are appropriate can be validated by experience.
(Mitchelmore & White 2004: 330)

How is it that the axiomatic method has been so successful in this
way? The answer is, in large part, because the axioms do indeed
capture meaningful and correct patterns. … There is nothing to
prevent anyone from writing down some arbitrary list of postulates and
proceeding to prove theorems from them. But the chance of those
theorems having any practical application [is] slim indeed. …
Many fundamental mathematical objects (especially the more elementary
ones, such as numbers and their operations) clearly model reality.
Later developments (such as combinatorics and differential equations)
are built on these fundamental ideas and so also reflect reality even
if indirectly. Hence all mathematics has some link back to reality.
(Devlin 1994: 54–55)

It would appear that the difference between abstraction in computer
science and abstraction in mathematics is not so sharp. However, there
appears to be an important conceptual difference. If Turner (2011) is
right, in computer science, the abstract partner is the dominant one
in the relationship: It determines correctness. In the case of
(applied) mathematics, things are reversed: The mathematics is there
to model the world, and it must model it accurately. In computer
science, the relationship between the abstraction and its source is
the specification-artifact relationship; in mathematics, it is
between, on the one hand, model or theory, and, on the other hand,
reality. When things go wrong the blame is laid at a different place:
with the artifact in computer science but with the model in
mathematics.

9. The Epistemological Status of Computer Science

The problem of defining the epistemological status of computer science
arose as soon as computer science became an independent discipline,
distinct from mathematics, between the 1960s and the 1970s (Tedre
2011). Since the 1970s it has been clear that computer science has to
be considered partially as a mathematical discipline, partially as a
scientific discipline, and partially as an engineering discipline,
insofar as it makes use of mathematical, empirical, and engineering
methods (Tedre & Sutinen 2008). Nonetheless, a debate took place
concerning whether computer science should mostly be
considered a mathematical discipline, a branch of engineering, or
a scientific discipline.

9.1 Computer Science as a Mathematical Discipline

Each epistemological characterization of computer science is based on
ontological, methodological, and epistemological commitments, that is,
on assumptions about the nature of computational artifacts, the
methods involved in the software development process, and the kind of
reasoning thereby involved, whether deductive, inductive, or a
combination of them (Eden 2007).

Proponents of the view that computer science is a mathematical
discipline assume that programs are mathematical entities about which
one can pursue purely deductive reasoning using the formal methods of
theoretical
computer science. As examined in
§4.2
and
§5.1,
Dijkstra (1974) and Hoare (1986) were very explicit in stating that
programs’ instructions can be understood as mathematical
sentences and that a formal semantics for programming languages can be
given in terms of an axiomatic system (Hoare 1969). Provided that
program specifications are advanced in a formal language, and
that a program’s code is represented in the same formal
language, formal semantics provide a means by which to prove
correctness. Accordingly, knowledge about behaviors of computational
artifacts is acquired by the deductive reasoning involved in
mathematical proofs of correctness.
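
To give the flavor of the axiomatic approach (the particular program
fragment is invented for illustration), Hoare-style reasoning attaches
preconditions and postconditions to program statements; the assignment
axiom, for instance, licenses triples such as

    \[
    \{Q[E/x]\}\;\; x := E \;\;\{Q\}
    \qquad\text{e.g.}\qquad
    \{x + 1 \ge 1\}\;\; y := x + 1 \;\;\{y \ge 1\},
    \]

and the rule of consequence then yields
\(\{x \ge 0\}\ y := x + 1\ \{y \ge 1\}\), since \(x \ge 0\) implies
\(x + 1 \ge 1\). Correctness proofs of whole programs are built by
composing such triples.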

The basis of such rationalist optimism (Eden 2007)
about what can be known about computing systems is that they are
artifacts, that is, human-made systems, and that, as such, one
can predict their behaviors with certainty (Knuth 1974b).

The original motivation for a mathematical analysis of computation
came from mathematical logic. Its origins are to be found in
Hilbert’s question concerning the decidability of predicate
calculus (Hilbert & Ackermann 1928): could there be an
algorithm, a procedure, for deciding of an arbitrary sentence of the
logic whether it is provable (The Entscheidungsproblem)? In
order to address this question, a rigorous model of the informal
concept of an effective or mechanical method in logic and mathematics
was required. Providing this is first and foremost a mathematical
endeavor: one has to develop a mathematical analogue of the informal
notion.

Although a central concern of theoretical computer science, the topics
of computability and complexity are covered in existing entries on the
“Church-Turing thesis”,
“computational complexity theory”, and
“recursive functions”.

9.2 Computer Science as an Engineering Discipline

In the 1970s, the growing complexity of programs, the increasing
number of applications of software systems in everyday contexts, and
the consequent boom in market demand caused a shift in the
interests of computer scientists, both academics and practitioners,
from proofs of programs’ correctness to methods for managing the
complexity of those systems and evaluating their reliability (Wegner
1976). Indeed, providing formal specifications of modular programs,
representing highly complex programs in the same formal language, and
providing inputs for systems that are often embedded and interacting
with users is practically impossible. It turned out that providing
mathematical proofs of correctness was mostly unfeasible. Computer
science research rather developed toward testing techniques able to
provide a statistical evaluation of correctness, often called
reliability (Littlewood & Strigini 2000), in terms of estimations
of distributions of errors in a program’s code.

Computer science evaluates the reliability of computing systems in the
same way that civil engineering does for bridges or that aerospace
engineering does for airplanes (De Millo et al. 1979). In particular,
whereas empirical sciences examine what exists, computer science
focuses on what can exist, that is, on how to produce
artifacts, and it should therefore be acknowledged as an
“engineering of mathematics” (Hartmanis 1981). Similarly,
whereas scientific inquiries are involved in discovering laws
concerning the studied phenomena, one cannot identify proper laws in
computer science practice, insofar as the latter is rather involved in
the production of the phenomena to be studied, that is, those
concerning computational artifacts (Brooks 1996).

9.3 Computer Science as a Scientific Discipline

Software testing and reliability measurement techniques are nonetheless
known for their inability to guarantee the absence of code
faults (Dijkstra 1970). In many cases, and especially in the
evaluation of the so-called safety-critical systems (such as
controllers of airplanes, rockets, nuclear plants etc.), both formal
methods and empirical testing are used to evaluate the correctness and
the dependability of computational artifacts. Computer science can
accordingly be understood as a scientific discipline in that it makes
use of both deductive and inductive probabilistic reasoning to examine
computational artifacts (Denning et al. 1981; Denning 2005, 2007;
Tichy 1998; Colburn 2000). Indeed, as examined in
§6,
verification and testing methods are often jointly involved in
advancing hypotheses on the behaviors of implemented computing
systems, and providing evidence (either algorithmically or
empirically) in support of those hypotheses.

The thesis that computer science is, from a methodological viewpoint,
on a par with empirical sciences traces back to Newell, Perlis, and
Simon’s 1967 letter to Science (Newell et al. 1967) and
dominated the 1980s (Wegner 1976). In their 1975 Turing Award
lecture, Newell and Simon argued:

Computer science is an empirical discipline. We would have called it
an experimental science, but like astronomy, economics, and geology,
some of its unique forms of observation and experience do not fit a
narrow stereotype of the experimental method. Nonetheless, they are
experiments. Each new machine that is built is an experiment. Actually
constructing the machine poses a question to nature; and we listen for
the answer by observing the machine in operation and analyzing it by
all analytical and measurement means available. (Newell & Simon
1976: 114)

Since Newell and Simon’s Turing award lecture, it has been clear
that computer science can be understood as an empirical science but of
a special sort, and this is related to the nature of experiments in
computing. Indeed, much current debate on the epistemological status
of computer science concerns the problem of defining what kind of
science it is (Tedre 2011) and, in particular, on the nature of
experiments in computer science (Schiaffonati & Verdicchio 2014),
on the nature, if any, of laws and theorems in computing (Hartmanis
1993; Rombach & Seelish 2008), and on the methodological relation
between computer science and software engineering (Gruner 2011).

10. Computer Ethics

Computer ethics is the analysis of the nature and social impact of
computer technology and the corresponding formulation and
justification of policies for the ethical use of such technology.
(Moor 1985: 266)

Computer ethics is a subfield of information ethics concerning
ethical, social, and political issues arising from the widespread
application of information technologies (for an analysis of both
computer ethics and information ethics see the entry on
computer and information ethics).
Computer ethics has its roots in Norbert Wiener’s book
Cybernetics (1948) and rapidly developed as an urgent and
prominent subfield of applied ethics (see Bynum 2008 for an overview
of the historical development of computer ethics). Interestingly, in
Wiener’s book God and Golem (1964) most of the
currently discussed topics of computer ethics were already put
forward, such as security, the responsibilities of programmers, and
information networks. Other issues include privacy, social
networks, and software ownership, to mention a few.

Computer ethics developed as an independent discipline, distinct from
both applied ethics and the philosophy of computer science. In this
section two topics in computer ethics are analyzed, since the
philosophy of computer science provides a rather different perspective
on them. In particular, the ontology of software systems affects the
debate about property rights over programs, and the methodology of
software development helps in clarifying and distinguishing the moral
responsibilities of developers.

10.1 Intellectual Property Rights on Computational Artifacts

One of the main and ongoing debates in computer ethics concerns the
ethical, social, and legal aspects of software ownership, and deals
with the problems of whether programmers and software companies can
exert intellectual property rights over computational artifacts, of
how such ownership can be protected, i.e., whether by copyright or
patent, whether and to what extent the copyright or patent system should
allow for reuse or copying of source code, and whether software should
be free and not copyrighted.

Three main arguments have been advanced holding that property should
be extended also to intellectual entities and should not be restricted
only to physical goods (Moore 2001, 2008). The
“personality-based” argument harks back to Hegel’s
Philosophy of Right to maintain that the products of physical or
intellectual labor are an actualization of the laborer’s
feelings, character, and abilities. Insofar as feelings, character,
and abilities are owned by the laborer, any externalization of them in
an intellectual product, be it a poem, a song, or a computer program,
is owned by the laborer (Moore 2008: 108–110). Critics of the
personality-based argument claim that the externalization of
authors’ feelings and abilities transfers use rights over
intellectual products but not property rights; intellectual products
can also be protected from modifications because these may injure the
author’s reputation (Hughes 1988).

The “rule-utilitarian” argument holds that protecting
computer programs with the copyright or patent system results in
increased production and innovation of new products and in a
corresponding gain in social utility (Moore 2008: 110–119).
Opponents of the rule-utilitarian approach to intellectual property
rights challenge the thesis that copyrighting or patenting software
fosters innovation and production. First, they argue, innovation can
be directly supported by government funding of research projects, at
both academic and industry levels, or by means of reward models
(Shavell & Ypersele 2001). Secondly, software copyrights and
patents often grant companies monopolies which, in order to be
preserved, impede rather than foster innovation.

Most of the debate concerning intellectual property rights on
software focuses on John Locke’s arguments for property provided
in the Second Treatise on Government (Locke 1690, see also the entry
on
John Locke).
Locke famously argued that in the state of nature, all natural goods
were in common; by mixing a common good with one’s own labor,
one could claim ownership on such goods. Locke’s philosophy is
at the basis of the liberalist tradition of western countries. One
main difference between material and intellectual objects is that the
latter can be duplicated, and this is especially so for software. Some
philosophers argue that Locke’s arguments justify intellectual
property rights on software; others maintain, on the contrary, that
Locke’s philosophy rather supports the free-software view. In
Locke’s philosophy, ownership of material goods is justified
through labor “where there is enough and as good left for
others” (Locke 1690: section 27), so that the owner benefits
from the acquisition with no loss for the others (Moore 2008:
119–128). Possession of intellectual entities is not exclusive,
as it is for material ones: An intellectual object, such as a
mathematical function or a program specification, can be owned by many
people at the same time, whereas, if one owns a car, the same car
cannot be owned by one’s neighbor. Accordingly, the possession
of, say, a high-level language program does not constitute a loss for
others, and Locke’s proviso that there be “enough and as
good left for others” is always satisfied.

On the other hand, according to Locke, ownership of material goods is
justifiable because material entities are finite and it is not
feasible for anybody to possess whatever she or he would like (Kimppa
2005). However, intellectual objects can be shared by many people
concurrently without any deprivation for any of them (Kinsella 2001).
Locke’s arguments for property are consistent with the Free
Software Foundation’s view on software copyright: software can be sold,
but, once purchased, the buyer owns the software and she can do with
it whatever she wants, including giving free copies of it or modifying
it (see Free Software Foundation 1996 in
Other Internet Resources).
Indeed, software, by being an intellectual good, can be shared
without any loss for any of the owners.

Problems arise when reasoning about how software property is to be
protected, that is, whether by copyright or patent law. In United
States legislation, copyright protects authors of original
works in the realm of literature, music, drama, the visual arts, and
architectural works that are expressed in a tangible form (written,
depicted, sculptured, built, etc.). Copyright confers on authors,
and on those who receive permission from them, the rights to
duplicate, reproduce, perform, sell or share copies, and create works
based upon the protected original work. Ideas, theories, procedures,
and methods are excluded from copyright protection. Patents
safeguard inventors, prohibiting others from selling, using, and
producing their invention. In particular, utility patents include
protection of processes, machines, and manufactures; design patents
cover new and original designs for manufactures; and plant patents
concern the production of new varieties of plants.

Copyrights give authors rights to copy a given text: Whereas
ideas are not copyrightable, ideas expressed in a text are. According
to some, copyright is the most appropriate tool to protect software
ownership (Mooers 1975). Whereas algorithms are abstract mathematical
ideas, which, as such, cannot be copyrighted, high-level language
programs are textual expressions of those algorithms, which can be
copyrighted. It can be objected that such a claim is too simplistic
and does not take into consideration the proper ontology of software
(Rapaport 2016, see
Other Internet Resources).
Indeed, computational artifacts can be examined at many levels of
abstraction, as a specification-implementation hierarchy in which each
layer is an implementation of the layer above it in the
hierarchy. The main problem here is understanding what is
copyrightable: functions, algorithms, programs, or machine
implementations of programs. For instance, algorithms themselves can
be considered as expressions of the functions they implement and,
consequently, as copyrightable. Another difficulty concerns copyright
infringement. If programs are to be considered protected expressions
of algorithms, copyright infringement only occurs in case of similar
program code. However, consider the case of two behaviorally
equivalent (or similar) systems obtained by implementing different
programs instantiating different algorithms. On
Mooers’ approach to software copyright, no infringement is to
be ascribed to such a case (Rapaport 2016: chapter 13, see
Other Internet Resources).

Similar problems arise with patents. Allen Newell (1986) opposed
software patents because available models of computation are
inadequate for defining what is patentable. He argued that algorithms
are not patentable for the same reason that mathematical statements or
physical laws are not. Only processes and computing machines carrying
out those processes are patentable. However, degrees of abstraction in
a hierarchy defining a given artifact are not always such that they
allow algorithms to be distinguished from programs and
implementations: This is, for instance, the case for algorithms that
are directly executable by special-purpose machines.

10.2 Moral Responsibility of Computing Practitioners

Computer science cannot be considered a morally neutral
discipline (Gotterbarn 1991, 2001). When miscomputations are displayed
by some computational artifact interacting within its environment,
developers often blame clients who were not able to supply developers
with adequate specifications, or they appeal to the fact that software
testing cannot assure the absence of errors, or, more generally, they
blame a program’s complexity. In each of these cases, computing
practitioners decline responsibility. In doing so, they fail to
recognize that, in the process of developing software, they are not
just instantiating specifications and implementing programs, but they
are additionally providing a service to society. A distinction can be
made between negative responsibility and positive
responsibility
(Ladd 1988). Negative responsibility is concerned with avoiding blame
and legal liability, and it characterizes software developers who
pursue the development of correct artifacts without
considering the potential effects and influences of the artifacts in
society. By contrast, positive responsibility considers the
consequences that the developed machine may have among users. A
correct computing system may still be harmful if some undesired
behaviors are not inhibited by the set of specifications provided by
clients, and a positively responsible programmer should feel obliged
to revisit those specifications with clients if she is aware of
such deficiencies.

Liability is not adequate to regulate the moral behaviors of computing
practitioners (Edgar 2003 [1997]: ch. 10). Indeed, blaming someone for
breaking the law requires a “causality condition” and a
“condition of intention”. The causality condition involves
identifying the person who caused some illegal event (such as the
murderer who pulled the trigger); the condition of intention demands
ascertaining the intentions of such a person (whether the person who
pulled the trigger intended to kill the victim or not). It is
difficult to satisfy both conditions in computing. No single person
can be blamed for causing a computing artifact to miscompute and harm
some people. It follows from the definition of miscomputation in
§7.5
that many people are involved in the causal chain that brings about a
harmful miscomputation, including clients, designers, programmers, and
engineers. It is also difficult to establish whether anyone, and if
so who, among them intended to develop the harmful artifact. In
particular, if a practitioner develops a system that is subsequently
used with evil intentions, the practitioner cannot be legally blamed;
however, they may be responsible if they were aware of the evil
potentialities of the artifact.

Moral responsibilities of computing professionals include
responsibilities to different groups of people (Loui & Miller
2008). Responsibilities to clients and users require
implementing artifacts that are not only correct and reliable, but are
also such that they do not have (or cannot be used to have)
undesirable effects on users. Responsibilities to employers
require not taking advantage of (personal, political, market-related)
secret information that employers may share with computing
professionals when assigning some given task. Responsibilities to
other professionals include the fulfillment of professional
standards when working in a team, as well as the respect of the
colleagues’ work. Finally, responsibilities to the
public require that all computational artifacts be aimed at
the well-being of society and that the construction of any potentially
dangerous artifact affecting the public welfare be impeded by
professionals even when required by employers (such as being required
to write a program that can extract private information from some data
system).

These and other moral responsibilities of computing professionals have
been codified in more than one software engineering “code of
ethics”. For instance, the “Software Engineering Code of
Ethics and Professional Practice” (see
Other Internet Resources),
developed by the ACM and the IEEE Computer Society, indicates eight
principles and clauses expressing how to fulfill those principles in
concrete situations (Gotterbarn, Miller, & Rogerson 1997):

  1. Public: computing professionals always “act only
    in ways consistent with the public safety, health, and welfare”;
  2. Client and Employer: professionals should be loyal and
    reliable with clients and employers;
  3. Product: computing professionals commit to providing
    high quality products, as much as possible free of errors;
  4. Judgment: practitioners “protect both the
    independence of their professional judgment and their
    reputation”;
  5. Management: leaders should “act fairly and
    encourage those who they lead to meet their own and collective
    obligations”;
  6. Profession: computing professionals should preserve and
    enhance the reputation of their profession;
  7. Colleagues: team work should be positively supported;
  8. Self: continuing education for computing professionals
    is required in order to constantly improve their own abilities.

Applying the code of ethics is not straightforward because, in many
concrete situations, one may find that principles trade off against
each other (Gotterbarn & Miller 2009). Common cases include the time
needed to test a given artifact so as to ensure the absence of errors,
which can conflict with the client’s or employer’s pressure to
meet market deadlines; or the more delicate case in which a
client’s or employer’s request for a given computing
system to be implemented is in conflict with the public’s
safety, health, or welfare. The eight principles are listed according
to a priority hierarchy so that the code can, in some cases, provide
guidelines to solve conflicts among competing moral principles. In
particular, “Public” being at the top of the list means
that computing practitioners are morally committed to refusing
clients’ and employers’ requests to realize artifacts that
may go against the public interest (Gotterbarn & Miller 2009).
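
The priority ordering can be made concrete with a small illustration.
The following is a minimal sketch in Python; it is not part of the
code of ethics itself, and the representation and function name are
invented for illustration. The eight principles are stored in priority
order, and when two of them pull in different directions, the one
appearing earlier in the list overrides the other.

    # Minimal sketch: the eight principles in priority order,
    # highest priority first (hypothetical representation).
    PRINCIPLES = [
        "Public",
        "Client and Employer",
        "Product",
        "Judgment",
        "Management",
        "Profession",
        "Colleagues",
        "Self",
    ]

    def overriding_principle(conflicting):
        """Return the highest-priority principle among those in conflict."""
        return min(conflicting, key=PRINCIPLES.index)

    # A client's request conflicts with public safety: the ordering
    # resolves the conflict in favor of the public.
    print(overriding_principle(["Client and Employer", "Public"]))  # Public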

A final issue worth mentioning here is the values in design
approach (Nissenbaum 1998). Computational artifacts should fulfill
moral values together with common functional requirements. Besides
correctness, reliability, and safety, computing systems should
instantiate moral values including justice, autonomy, liberty, trust,
privacy, security, friendship, freedom, comfort, and equality. For
instance, a system not satisfying equality is a biased program, that
is, an artifact that “systematically and unfairly
discriminate[s] against certain individuals or groups of individuals
in favor of others” (for instance, flight reservation systems
that list airlines in alphabetical order have been shown to
favor companies at the top of the list) (Friedman & Nissenbaum
1996: 332). Whereas everybody would agree that computing artifacts
should satisfy those moral values, the values in design approach holds
that those values should be treated on a par with functional
requirements in software development (Flanagan, Howe, & Nissenbaum
2008). This requires (i) identifying the set of moral values a given
artifact should fulfill, taking into consideration the socio-cultural
context where the artifact is going to be used; (ii) defining those
values so that they can be formalized in design specifications and
subsequently implemented; and (iii) verifying whether the implemented
artifact fulfills the specified values, by using
common software testing techniques, in particular internal testing
among developers, user testing in restricted environments, and
prototypes, interviews, and surveys.
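
As a rough illustration of steps (ii) and (iii), consider the biased
flight-listing example above. The following Python sketch is purely
hypothetical: the function list_flights, the sample data, and the
choice of ticket price as the relevant merit are all invented for
illustration. The test formalizes a fragment of the equality value as
a design specification (the display order should track the flights’
merits, not the airlines’ names) and then checks the implementation
against it.

    # Hypothetical implementation under test: it sorts flights by airline
    # name, so alphabetically earlier airlines always appear first.
    def list_flights(flights):
        return sorted(flights, key=lambda f: f["airline"])

    # A (toy) formalization of the equality value: the displayed order
    # should coincide with the order given by the relevant merit (here,
    # price), independently of the airlines' names.
    def respects_equality(flights):
        by_merit = [f["airline"] for f in sorted(flights, key=lambda f: f["price"])]
        displayed = [f["airline"] for f in list_flights(flights)]
        return displayed == by_merit

    sample = [
        {"airline": "Zephyr Air", "price": 90},
        {"airline": "Alpha Airlines", "price": 150},
    ]

    # The alphabetical listing puts the more expensive "Alpha Airlines"
    # first, so the check fails, exposing the positional bias.
    print(respects_equality(sample))  # False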

11. Applications of Computer Science

We have concentrated on the philosophical concerns of the core of the
discipline of computer science. We have said little to nothing about
the actual applications of the subject, applications many would argue
give the discipline its potency. Applications include not just
technological ones such as systems that run nuclear power stations and
that guide missiles to their targets, but scientific ones such as
those involved in computational biology,
cognitive science,
and the
computational theory of mind.
However, no matter how useful and impressive these applications are,
they have very specialized goals. Presumably, the goals of
computational biology are biological and those of cognitive science
are psychological. In contrast, the core of the philosophy of computer
science does not have the goals of any particular application. It is
concerned with the generic activity of programming a computer.

However, one application is so central that it is often taken to be
part of the core of the subject, and this is artificial intelligence.
In itself, it has contributed much to the development of the core,
including the design of programming languages such as Lisp and Prolog.
Moreover, it raises many philosophical concerns that have strong
connections with the philosophies of mind and cognitive science.
Indeed, the philosophical concerns of artificial intelligence have a
much older pedigree (Copeland 1993; Fetzer 1990). There is too much
material to include in this entry, which is devoted to the generic
activity of the discipline. Fortunately, there is already an entry
devoted to the role of
logic in artificial intelligence,
and the topic is to be covered by a future entry on the
philosophy of artificial intelligence.

Bibliography

  • Abramsky, Samson & Guy McCusker, 1995, “Games and Full
    Abstraction for the Lazy λ-Calculus”, in D. Kozen
    (ed.), Tenth Annual IEEE Symposium on Logic in Computer
    Science, IEEE Computer Society Press, pp. 234–43.
    doi:10.1109/LICS.1995.523259
  • Abramsky, Samson, Pasquale Malacaria, & Radha Jagadeesan,
    1994, “Full Abstraction for PCF”, in M. Hagiya & J.C.
    Mitchell (eds), Theoretical Aspects of Computer Software:
    International Symposium TACS ‘94, Sendai, Japan, April
    19–22, 1994
    , Springer-Verlag, 1–15.
  • Abrial, Jean-Raymond, 1996, The B-Book: Assigning Programs to
    Meanings
    , Cambridge: Cambridge University Press.
  • Alama, Jesse, 2015, “The Lambda Calculus”, The
    Stanford Encyclopedia of Philosophy
    (Spring 2015 Edition), Edward
    N. Zalta (ed.), URL =
    .
  • Allen, Robert J., 1997, A Formal Approach to Software
    Architecture
    , Ph.D. Thesis, Computer Science, Carnegie Mellon
    University. Issued as CMU Technical Report CMU-CS-97-144.
    Allen 1997 available on line
  • Ammann, Paul & Jeff Offutt, 2008, Introduction to Software
    Testing
    , Cambridge: Cambridge University Press.
  • Angius, Nicola, 2013a, “Abstraction and Idealization in the
    Formal Verification of Software”, Minds and Machines,
    23(2): 211–226. doi:10.1007/s11023-012-9289-8
  • –––, 2013b, “Model-Based Abductive
    Reasoning in Automated Software Testing”, Logic Journal of
    IGPL
    , 21(6): 931–942. doi:10.1093/jigpal/jzt006
  • –––, 2014, “The Problem of Justification
    of Empirical Hypotheses in Software Testing”, Philosophy
    & Technology
    , 27(3): 423–439.
    doi:10.1007/s13347-014-0159-6
  • Angius, Nicola & Guglielmo Tamburrini, 2011, “Scientific
    Theories of Computational Systems in Model Checking”, Minds
    and Machines
    , 21(2), 323–336.
    doi:10.1007/s11023-011-9231-5
  • –––, forthcoming, “Explaining Engineered
    Computing Systems’ Behaviour: the Role of Abstraction and
    Idealization”, Philosophy & Technology, first
    online 1 October 2016. doi:10.1007/s13347-016-0235-1
  • Arkoudas, Konstantine & Selmer Bringsjord, 2007,
    “Computers, Justification, and Mathematical Knowledge”,
    Minds and Machines, 17(2): 185–202.
    doi:10.1007/s11023-007-9063-5
  • Ashenhurst, Robert L. (ed.), 1989, “Letters in the ACM
    Forum”, Communications of the ACM, 32(3): 287.
    doi:10.1145/62065.315925
  • Baier, Christel & Joost-Pieter Katoen, 2008, Principles of
    Model Checking
    , Cambridge, MA: The MIT Press.
  • Bass, Len, Paul C. Clements, & Rick Kazman, 2003
    [1997], Software Architecture in Practice, second edition,
    Reading, MA: Addison-Wesley; first edition 1997; third edition,
    2012.
  • Bechtel, William & Adele Abrahamsen, 2005, “Explanation:
    A Mechanist Alternative”, Studies in History and Philosophy
    of Science Part C: Studies in History and Philosophy of Biological and
    Biomedical Sciences
    , 36(2): 421–441.
    doi:10.1016/j.shpsc.2005.03.010
  • Boghossian, Paul A., 1989, “The Rule-following
    Considerations”, Mind, 98(392): 507–549.
    doi:10.1093/mind/XCVIII.392.507
  • Bourbaki, Nicolas, 1968, Theory of Sets, Ettore Majorana
    International Science Series, Paris: Hermann.
  • Bridges, Douglas & Erik Palmgren, 2013, “Constructive Mathematics”,
    The Stanford Encyclopedia of Philosophy (Winter 2013
    Edition), Edward N. Zalta (ed.), URL =
    .
  • Brooks, Frederick P. Jr., 1995, The Mythical Man Month: Essays
    on Software Engineering, Anniversary Edition
    , Reading, MA:
    Addison-Wesley.
  • –––, 1996, “The Computer Scientist as
    Toolsmith II”, Communications of the ACM, 39(3),
    61–68. doi:10.1145/227234.227243
  • Burge, Tyler, 1998, “Computer Proof, Apriori Knowledge, and
    Other Minds”, Noûs, 32(S12): 1–37.
    doi:10.1111/0029-4624.32.s12.1
  • Bynum, Terrell Ward, 2008, “Milestones in the History of
    Information and Computer Ethics”, in Himma and Tavani 2008:
    25–48. doi:10.1002/9780470281819.ch2
  • Callahan, John, Francis Schneider, & Steve Easterbrook, 1996,
    “Automated Software Testing Using Model-Checking”, in
    Proceeding Spin Workshop, J.C. Gregoire, G.J. Holzmann and D.
    Peled (eds), New Brunswick, NJ: Rutgers University, pp.
    118–127.
  • Cardelli, Luca & Peter Wegner, 1985, “On Understanding
    Types, Data Abstraction, and Polymorphism”, Computing
    Surveys
    , 17(4): 471–522.
    [Cardelli and Wegner 1985 available online]
  • Chalmers, David J., 1996, “Does a Rock Implement Every
    Finite-State Automaton?” Synthese, 108(3):
    309–33.
    [D.J. Chalmers 1996 available online] doi:10.1007/BF00413692
  • Clarke, Edmund M. Jr., Orna Grumberg, & Doron A. Peled, 1999,
    Model Checking, Cambridge, MA: The MIT Press.
  • Colburn, Timothy R., 1999, “Software, Abstraction, and
    Ontology”, The Monist, 82(1): 3–19.
    doi:10.5840/monist19998215
  • –––, 2000, Philosophy and Computer
    Science
    , Armonk, NY: M.E. Sharp.
  • Colburn, Timothy & Gary Shute, 2007, “Abstraction in
    Computer Science”, Minds and Machines, 17(2):
    169–184. doi:10.1007/s11023-007-9061-7
  • Copeland, B. Jack, 1993, Artificial Intelligence: A
    Philosophical Introduction
    , John Wiley & Sons.
  • –––, 1996, “What is Computation?”
    Synthese, 108(3): 335–359. doi:10.1007/BF00413693
  • –––, 2015, “The Church-Turing
    Thesis”, The Stanford Encyclopedia of Philosophy (Summer
    2015 Edition), Edward N. Zalta (ed.), URL =
    .
  • Copeland, B. Jack & Oron Shagrir, 2007, “Physical
    Computation: How General are Gandy’s Principles for
    Mechanisms?” Minds and Machines, 17(2): 217–231.
    doi:10.1007/s11023-007-9058-2
  • –––, 2011, “Do Accelerating Turing
    Machines Compute the Uncomputable?” Minds and Machines,
    21(2): 221–239. doi:10.1007/s11023-011-9238-y
  • Cummins, Robert, 1975, “Functional Analysis”, The
    Journal of Philosophy
    , 72(20): 741–765. doi:10.2307/2024640
  • De Millo, Richard A., Richard J. Lipton, & Alan J. Perlis,
    1979, “Social Processes and Proofs of Theorems and
    Programs”, Communications of the ACM, 22(5):
    271–281. doi:10.1145/359104.359106
  • Denning, Peter J., 2005, “Is Computer Science
    Science?”, Communications of the ACM, 48(4):
    27–31. doi:10.1145/1053291.1053309
  • –––, 2007, “Computing is a Natural
    Science”, Communications of the ACM, 50(7):
    13–18. doi:10.1145/1272516.1272529
  • Denning, Peter J., Edward A. Feigenbaum, Paul Gilmore, Anthony C.
    Hearn, Robert W. Ritchie, & Joseph F. Traub, 1981, “A
    Discipline in Crisis”, Communications of the ACM,
    24(6): 370–374. doi:10.1145/358669.358682
  • Devlin, Keith, 1994, Mathematics: The Science of Patterns: The
    Search for Order in Life, Mind, and the Universe
    , New York: Henry
    Holt.
  • Dijkstra, Edsger W., 1970, Notes on Structured
    Programming
    , T.H.-Report 70-WSK-03, Mathematics Technological
    University Eindhoven, The Netherlands.
    [Dijkstra 1970 available online]
  • –––, 1974, “Programming as a Discipline of
    Mathematical Nature”, American Mathematical Monthly,
    81(6): 608–612.
    [Dijkstra 1974 available online]
  • Distributed Software Engineering, 1997, The Darwin
    Language
    , Department of Computing, Imperial College of Science,
    Technology and Medicine, London.
    [Darwin language 1997 available online]
  • Duncan, William, 2011, “Using Ontological Dependence to
    Distinguish between Hardware and Software”, Proceedings of
    the Society for the Study of Artificial Intelligence and Simulation of
    Behavior Conference: Computing and Philosophy
    , University of
    York, York, UK.
    [Duncan 2011 available online (zip file)]
  • Duhem, Pierre Maurice Marie, [1906], The Aim and Structure of
    Physical Theory
    (original: Théorie physique: son objet et
    sa structure), Princeton: Princeton University Press, 1954.
  • Dummett, Michael A.E., 2006, Thought and Reality, Oxford:
    Oxford University Press.
  • Eden, Amnon H., 2007, “Three Paradigms of Computer
    Science”, Minds and Machines, 17(2): 135–167.
    doi:10.1007/s11023-007-9060-8
  • Egan, Frances, 1992, “Individualism, Computation, and
    Perceptual Content”, Mind, 101(403): 443–59.
    doi:10.1093/mind/101.403.443
  • Edgar, Stacey L., 2003 [1997], Morality and Machines: Perspectives on
    Computer Ethics
    , Sudbury, MA: Jones & Bartlett Learning.
  • Fernández, Maribel, 2004, Programming Languages and
    Operational Semantics: An Introduction
    , London: King’s
    College Publications.
  • Fetzer, James H., 1988, “Program Verification: The Very
    Idea”, Communications of the ACM, 31(9):
    1048–1063. doi:10.1145/48529.48530
  • –––, 1990, Artificial Intelligence: Its
    Scope and Limits
    , Dordrecht: Springer Netherlands.
  • Feynman, Richard P., 1984–1986, Feynman Lectures on
    Computation
    , Cambridge, MA: Westview Press, 2000.
  • Flanagan, Mary, Daniel C. Howe, & Helen Nissenbaum, 2008,
    “Embodying Values in Technology: Theory and Practice”, in
    Information Technology and Moral Philosophy, Jeroen van den
    Hoven and John Weckert (eds), Cambridge: Cambridge University Press,
    322–353.
  • Floridi, Luciano, 2008, “The Method of Levels of
    Abstraction”, Minds and Machines, 18(3): 303–329.
    doi:10.1007/s11023-008-9113-7
  • Floridi, Luciano, Nir Fresco, & Giuseppe Primiero, 2015,
    “On Malfunctioning Software”, Synthese, 192(4):
    1199–1220. doi:10.1007/s11229-014-0610-3
  • Floyd, Robert W., 1979, “The Paradigms of
    Programming”, Communications of the ACM, 22(8):
    455–460. doi:10.1145/1283920.1283934
  • Fowler, Martin, 2003, UML Distilled: A Brief Guide to the
    Standard Object Modeling Language
    , 3rd edition,
    Reading, MA: Addison-Wesley.
  • Franssen, Maarten, Gert-Jan Lokhorst, & Ibo van de Poel,
    2013, “Philosophy of Technology”, The Stanford
    Encyclopedia of Philosophy
    (Winter 2013 Edition), Edward N. Zalta
    (ed.), URL =
    .
  • Frege, Gottlob, 1914, “Letter to Jourdain”, reprinted
    in Frege 1980: 78–80.
  • –––, 1980, Gottlob Frege: Philosophical and
    Mathematical Correspondence,
    edited by G. Gabriel, H. Hermes, F.
    Kambartel, C. Thiel, and A. Veraart, Oxford: Blackwell
    Publishers.
  • Fresco, Nir & Giuseppe Primiero, 2013,
    “Miscomputation”, Philosophy & Technology,
    26(3): 253–272. doi:10.1007/s13347-013-0112-0
  • Friedman, Batya & Helen Nissenbaum, 1996, “Bias in
    Computer Systems”, ACM Transactions on Information Systems
    (TOIS)
    , 14(3): 330–347. doi:10.1145/230538.230561
  • Frigg, Roman & Stephan Hartmann, 2012, “Models in
    Science”, The Stanford Encyclopedia of Philosophy (Fall
    2012 Edition), Edward N. Zalta (ed.), URL
    =.
  • Gagliardi, Francesco, 2007, “Epistemological Justification
    of Test Driven Development in Agile Processes”, Agile
    Processes in Software Engineering and Extreme Programming: Proceedings
    of the 8th International Conference, XP 2007, Como, Italy, June
    18–22, 2007
    , Berlin: Springer Berlin Heidelberg,
    253–256. doi:10.1007/978-3-540-73101-6_48
  • Gamma, Erich, Richard Helm, Ralph Johnson, & John Vlissides,
    1994, Design Patterns: Elements of Reusable Object-Oriented
    Software
    , Reading, MA: Addison-Wesley.
  • Glennan, Stuart S., 1996, “Mechanisms and the Nature of
    Causation”, Erkenntnis, 44(1): 49–71.
    doi:10.1007/BF00172853
  • Glüer, Kathrin & Åsa Wikforss, 2015,
    “The Normativity of Meaning and Content”,
    The Stanford Encyclopedia of Philosophy (Summer 2015
    Edition), Edward N. Zalta (ed.), URL =
    .
  • Goguen, Joseph A. & Rod M. Burstall, 1985,
    “Institutions: Abstract Model Theory for Computer
    Science”, Report CSLI-85-30, Center for the Study of Language and Information at Stanford University.
  • –––, 1992, “Institutions: Abstract Model
    Theory for Specification and Programming”, Journal of the
    ACM (JACM)
    , 39(1), 95–146. doi:10.1145/147508.147524
  • Gordon, Michael J.C., 1979, The Denotational Description of
    Programming Languages
    , New York: Springer-Verlag.
  • Gotterbarn, Donald, 1991, “Computer Ethics: Responsibility
    Regained”, National Forum: The Phi Beta Kappa Journal,
    71(3): 26–31.
  • –––, 2001, “Informatics and Professional
    Responsibility”, Science and Engineering Ethics, 7(2):
    221–230. doi:10.1007/s11948-001-0043-5
  • Gotterbarn, Donald, Keith Miller, & Simon Rogerson, 1997,
    “Software Engineering Code of Ethics”, Communications of
    the ACM, 40(11): 110–118. doi:10.1145/265684.265699
  • Gotterbarn, Donald & Keith W. Miller, 2009, “The Public
    is the Priority: Making Decisions Using the Software Engineering Code
    of Ethics”, IEEE Computer, 42(6): 66–73.
    doi:10.1109/MC.2009.204
  • Gruner, Stefan, 2011, “Problems for a Philosophy of Software
    Engineering”, Minds and Machines, 21(2): 275–299.
    doi:10.1007/s11023-011-9234-2
  • Gunter, Carl A., 1992, Semantics of Programming Languages:
    Structures and Techniques
    , Cambridge, MA: MIT Press.
  • Gupta, Anil, 2014, “Definitions”, The Stanford
    Encyclopedia of Philosophy
    (Fall 2014 Edition), Edward N. Zalta
    (ed.), URL =
    .
  • Hagar, Amit, 2007, “Quantum Algorithms: Philosophical
    Lessons”, Minds and Machines, 17(2): 233–247.
    doi:10.1007/s11023-007-9057-3
  • Hale, Bob, 1987, Abstract Objects, Oxford: Basil
    Blackwell.
  • Hales, Thomas C., 2008, “Formal Proof”, Notices of
    the American Mathematical Society
    , 55(11): 1370–1380.
  • Hankin, Chris, 2004, An Introduction to Lambda Calculi for
    Computer Scientists
    , London: King’s College
    Publications.
  • Harrison, John, 2008, “Formal Proof—Theory and
    Practice”, Notices of the American Mathematical
    Society
    , 55(11): 1395–1406.
  • Hartmanis, Juris, 1981, “Nature of Computer Science and Its
    Paradigms”, pp. 353–354 (in Section 1) of “Quo
    Vadimus: Computer Science in a Decade”, J.F. Traub
    (ed.), Communications of the ACM, 24(6):
    351–369. doi:10.1145/358669.358677
  • –––, 1993, “Some Observations About the
    Nature of Computer Science”, in International Conference on
    Foundations of Software Technology and Theoretical Computer
    Science
    , Springer Berlin Heidelberg, pp. 1–12.
    doi:10.1007/3-540-57529-4_39
  • Henson, Martin C., 1987, Elements of Functional
    Programming
    , Oxford: Blackwell.
  • Hilbert, David, 1931, “The Grounding of Elementary Number
    Theory”, reprinted in P. Mancosu (ed.), 1998, From Brouwer
    to Hilbert: the Debate on the Foundations of Mathematics in the
    1920s
    , New York: Oxford University Press, pp. 266–273.
  • Hilbert, David & Wilhelm Ackermann, 1928,
    Grundzüge Der Theoretischen Logik, translated
    as Principles of Mathematical Logic, Lewis M. Hammond, George
    G. Leckie, and F. Steinhardt (trans.), New York: Chelsea, 1950.
  • Himma, Kenneth Einar & Herman T. Tavani (eds.), 2008, The
    Handbook of Information and Computer Ethics
    , New Jersey: John
    Wiley & Sons.
  • Hoare, C.A.R., 1969, “An Axiomatic Basis for Computer
    Programming”, Communications of the ACM, 12(10):
    576–580. doi:10.1145/363235.363259
  • –––, 1973, “Notes on Data
    Structuring”, in O.-J. Dahl, E.W. Dijkstra, and C.A.R. Hoare
    (eds.), Structured Programming, London: Academic Press, pp.
    83–174.
  • –––, 1981, “The Emperor’s Old
    Clothes”, Communications of the ACM, 24(2):
    75–83. doi:10.1145/1283920.1283936
  • –––, 1985, Communicating Sequential
    Processes
    , Englewood Cliffs, NJ: Prentice Hall.
    [Hoare 1985 available online]
  • –––, 1986, The Mathematics of Programming:
    An Inaugural Lecture Delivered Before the University of Oxford on Oct.
    17, 1985
    , Oxford: Oxford University Press, Inc.
  • Hodges, Andrew, 2011, “Alan Turing”, The Stanford
    Encyclopedia of Philosophy
    (Summer 2011 Edition), Edward N. Zalta
    (ed.), URL =
    .
  • Hodges, Wilfrid, 2013, “Model Theory”, The
    Stanford Encyclopedia of Philosophy
    (Fall 2013 Edition), Edward
    N. Zalta (ed.), forthcoming URL =
    .
  • Hopcroft, John E. & Jeffrey D. Ullman, 1969, Formal
    Languages and their Relation to Automata
    , Reading, MA:
    Addison-Wesley.
  • Hughes, Justin, 1988, “The Philosophy of Intellectual
    Property”, Georgetown Law Journal, 77: 287.
  • Irmak, Nurbay, 2012, “Software is an Abstract
    Artifact”, Grazer Philosophische Studien, 86(1):
    55–72.
  • Johnson, Christopher W., 2006, “What are Emergent Properties
    and How Do They Affect the Engineering of Complex Systems”,
    Reliability Engineering and System Safety, 91(12):
    1475–1481.
    [Johnson 2006 available online]
  • Jones, Cliff B., 1990 [1986], Systematic Software
    Development Using VDM
    , second edition, Englewood Cliffs,
    NJ: Prentice Hall.
    [Jones 1990 available online]
  • Kimppa, Kai, 2005, “Intellectual Property Rights in
    Software—Justifiable from a Liberalist Position? Free Software
    Foundation’s Position in Comparison to John Locke’s
    Concept of Property”, in R.A. Spinello & H.T. Tavani (Eds.),
    Intellectual Property Rights in a Networked World: Theory and
    Practice
    , Hershey, PA: Idea, pp. 67–82.
  • Kinsella, N. Stephan, 2001, “Against Intellectual
    Property”, Journal of Libertarian Studies, 15(2):
    1–53.
  • Knuth, Donald E., 1974a, “Computer Programming as an
    Art”, Communications of the ACM, 17(12): 667–673.
    doi:10.1145/1283920.1283929
  • –––, 1974b, “Computer Science and Its
    Relation to Mathematics”, The American Mathematical
    Monthly
    , 81(4): 323–343.
  • –––, 1977, “Algorithms”,
    Scientific American, 236(4): 63–80.
  • Kripke, Saul, 1982, Wittgenstein on Rules and Private
    Language
    , Cambridge, MA: Harvard University Press.
  • Kroes, Peter, 2010, “Engineering and the Dual Nature of
    Technical Artefacts”, Cambridge Journal of Economics,
    34(1): 51–62. doi:10.1093/cje/bep019
  • –––, 2012, Technical Artefacts: Creations of
    Mind and Matter: A Philosophy of Engineering Design
    , Dordrecht:
    Springer.
  • Kroes, Peter & Anthonie Meijers, 2006, “The Dual Nature
    of Technical Artefacts”, Studies in History and Philosophy
    of Science
    , 37(1): 1–4.
    doi:10.1016/j.shpsa.2005.12.001
  • Kröger, Fred & Stephan Merz, 2008, Temporal Logics
    and State Systems
    , Berlin: Springer.
  • Littlewood, Bev & Lorenzo Strigini, 2000, “Software
    Reliability and Dependability: a Roadmap”, ICSE ’00
    Proceedings of the Conference on the Future of Software Engineering,
    175–188. doi:10.1145/336512.336551
  • Ladd, John, 1988, “Computers and Moral Responsibility: a
    Framework for An Ethical Analysis”, in Carol C. Gould, (ed.),
    The Information Web: Ethical & Social Implications of Computer
    Networking
    , Boulder, CO: Westview Press.
  • Landin, P.J., 1964, “The Mechanical Evaluation of
    Expressions”, The Computer Journal, 6(4):
    308–320. doi:10.1093/comjnl/6.4.308
  • Locke, John, 1690, The Second Treatise of Government.
    [Locke 1690 available online]
  • Loewenheim, Ulrich, 1989, “Legal Protection for Computer
    Programs in West Germany”, Berkeley Technology Law
    Journal
    , 4(2): 187–215.
    [Loewenheim 1989 available online] doi:10.15779/Z38Q67F
  • Long, Roderick T., 1995, “The Libertarian Case Against
    Intellectual Property Rights”, Formulations, Autumn,
    Free Nation Foundation.
  • Loui, Michael C. & Keith W. Miller, 2008, “Ethics and
    Professional Responsibility in Computing”, Wiley
    Encyclopedia of Computer Science and Engineering
    , Benjamin Wah
    (ed.), John Wiley & Sons.
    [Loui and Miller 2008 available online]
  • Luckham, David C., 1998, “Rapide: A Language and Toolset for
    Causal Event Modeling of Distributed System Architectures”, in
    Y. Masunaga, T. Katayama, and M. Tsukamoto (eds.), Worldwide
    Computing and its Applications, WWCA’98
    , Berlin: Springer,
    pp. 88–96. doi:10.1007/3-540-64216-1_42
  • Machamer, Peter K., Lindley Darden, & Carl F. Craver, 2000,
    “Thinking About Mechanisms”, Philosophy of
    Science
    , 67(1): 1–25. doi:10.1086/392759
  • Magee, Jeff, Naranker Dulay, Susan Eisenbach, & Jeff Kramer,
    1995, “Specifying Distributed Software Architectures”,
    Proceedings of 5th European Software Engineering Conference (ESEC
    95)
    , Berlin: Springer-Verlag, pp. 137–153.
  • Martin-Löf, Per, 1982, “Constructive Mathematics and
    Computer Programming”, in Logic, Methodology and Philosophy
    of Science
    (Volume VI: 1979), Amsterdam: North-Holland, pp.
    153–175.
  • McGettrick, Andrew, 1980, The Definition of Programming
    Languages
    , Cambridge: Cambridge University Press.
  • McLaughlin, Peter, 2001, What Functions Explain: Functional
    Explanation and Self-Reproducing Systems
    , Cambridge: Cambridge
    University Press.
  • Meijers, A.W.M., 2001, “The Relational Ontology of Technical
    Artifacts”, in P.A. Kroes and A.W.M. Meijers (eds), The
    Empirical Turn in the Philosophy of Technology
    , Amsterdam:
    Elsevier.
  • Mitchelmore, Michael & Paul White, 2004, “Abstraction in
    Mathematics and Mathematics Learning”, in M.J. Høines and
    A.B. Fuglestad (eds.), Proceedings of the 28th Conference of the
    International Group for the Psychology of Mathematics Education

    (Volume 3), Bergen: Programm Committee, pp. 329–336.
    [Mitchelmore and White 2004 available online]
  • Miller, Alexander & Crispin Wright (eds), 2002, Rule
    Following and Meaning
    , Montreal/Ithaca: McGill-Queen’s University
    Press.
  • Milne, Robert & Christopher Strachey, 1976, A Theory of
    Programming Language Semantics
    , London: Chapman and Hall.
  • Mitchell, John C., 2003, Concepts in Programming
    Languages
    , Cambridge: Cambridge University Press.
  • Monin, Jean François, 2003, Understanding Formal
    Methods
    , Michael G. Hinchey (ed.), London: Springer (this is
    Monin’s translation of his own Introduction aux Méthodes
    Formelles
    , Hermes, 1996, first edition; 2000, second edition),
    doi:10.1007/978-1-4471-0043-0
  • Mooers, Calvin N., 1975, “Computer Software and
    Copyright”, ACM Computing Surveys, 7(1): 45–72.
    doi:10.1145/356643.356647
  • Moor, James H., 1978, “Three Myths of Computer
    Science”, The British Journal for the Philosophy of
    Science
    , 29(3): 213–222.
  • –––, 1985, “What is Computer
    Ethics?”, Metaphilosophy, 16(4): 266–275.
  • Moore, Adam D., 2001, Intellectual Property and Information
    Control: Philosophic Foundations and Contemporary Issues
    , New
    Brunswick, NJ: Transaction Publishers.
  • –––, 2008, “Personality-Based,
    Rule-Utilitarian, and Lockean Justification of Intellectual
    Property” in Himma and Tavani 2008: 105–130.
    doi:10.1002/9780470281819.ch5
  • Newell, Allen, 1986, “Response: The Models Are
    Broken, the Models Are Broken”, University of Pittsburgh Law
    Review
    , 47: 1023–1031.
  • Newell, Allen & Herbert A. Simon, 1976, “Computer Science
    as Empirical Inquiry: Symbols and Search”, Communications of
    the ACM
    , 19(3): 113–126. doi:10.1145/1283920.1283930
  • Newell, Allen, Alan J. Perlis, & Herbert A. Simon, 1967,
    “Computer Science”, Science, 157(3795):
    1373–1374. doi:10.1126/science.157.3795.1373-b
  • Nissenbaum, Helen, 1998, “Values in the Design of Computer
    Systems”, Computers and Society, 28(1):
    38–39.
  • Northover, Mandy, Derrick G. Kourie, Andrew Boake, Stefan Gruner,
    & Alan Northover, 2008, “Towards a Philosophy of Software
    Development: 40 Years After the Birth of Software Engineering”,
    Journal for General Philosophy of Science, 39(1):
    85–113. doi:10.1007/s10838-008-9068-7
  • Pears, David Francis, 2006, Paradox and Platitude in
    Wittgenstein’s Philosophy
    , Oxford: Oxford University Press.
    doi:10.1093/acprof:oso/9780199247707.001.0001
  • Piccinini, Gualtiero, 2007, “Computing Mechanisms”,
    Philosophy of Science, 74(4): 501–526.
    doi:10.1086/522851
  • –––, 2008, “Computation without
    Representation”, Philosophical Studies, 137(2):
    206–241.
    [Piccinini 2008 available online] doi:10.1007/s11098-005-5385-4
  • –––, 2015, Physical Computation: A
    Mechanistic Account
    , Oxford: Oxford University Press.
    doi:10.1093/acprof:oso/9780199658855.001.0001
  • Piccinini, Gualtiero & Carl Craver, 2011, “Integrating
    Psychology and Neuroscience: Functional Analyses as Mechanism
    Sketches”, Synthese, 183(3), 283–311.
    doi:10.1007/s11229-011-9898-4
  • Popper, Karl R., 1959, The Logic of Scientific Discovery,
    London: Hutchinson.
  • Rapaport, William J., 1995, “Understanding Understanding:
    Syntactic Semantics and Computational Cognition”, in Tomberlin
    (ed.), Philosophical Perspectives, Vol. 9: AI, Connectionism,
    and Philosophical Psychology (Atascadero, CA: Ridgeview): 49–88.
    [Rapaport 1995 available online] doi:10.2307/2214212
  • –––, 1999, “Implementation Is Semantic
    Interpretation”, The Monist, 82(1): 109–30.
    [Rapaport 1999 available online]
  • –––, 2005, “Implementation as Semantic
    Interpretation: Further Thoughts”, Journal of
    Experimental & Theoretical Artificial Intelligence, 17(4):
    385–417.
    [Rapaport 2005 available online]
  • Rombach, Dieter & Frank Seelisch, 2008, “Formalisms in
    Software Engineering: Myths Versus Empirical Facts”, In
    Balancing Agility and Formalism in Software Engineering,
    Springer Berlin Heidelberg, pp. 13–25.
    doi:10.1007/978-3-540-85279-7_2
  • Schiaffonati, Viola, 2015, “Stretching the Traditional
    Notion of Experiment in Computing: Explorative Experiments”,
    Science and Engineering Ethics, 22(3): 1–19.
    doi:10.1007/s11948-015-9655-z
  • Schiaffonati, Viola & Mario Verdicchio, 2014, “Computing
    and Experiments”, Philosophy & Technology, 27(3):
    359–376. doi:10.1007/s13347-013-0126-7
  • Searle, John R., 1995, The Construction of Social
    Reality
    , New York: Free Press.
  • Shanker, S.G., 1987, “Wittgenstein versus Turing on the
    Nature of Church’s Thesis”, Notre Dame Journal of
    Formal Logic
    , 28(4): 615–649.
    [Shanker 1987 available online] doi:10.1305/ndjfl/1093637650
  • Shavell, Steven & Tanguy van Ypersele, 2001, “Rewards
    Versus Intellectual Property Rights”, Journal of Law and
    Economics
    , 44: 525–547
  • Skemp, Richard R., 1987, The Psychology of Learning
    Mathematics
    , Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Smith, Brian Cantwell, 1985, “The Limits of Correctness in
    Computers”, ACM SIGCAS Computers and Society,
    14–15(1–4): 18–26. doi:10.1145/379486.379512
  • Snelting, Gregor, 1998, “Paul Feyerabend and Software
    Technology”, Software Tools for Technology Transfer,
    2(1): 1–5. doi:10.1007/s100090050013
  • Sommerville, Ian, 2016 [1982], Software Engineering,
    Reading, MA: Addison-Wesley; first edition, 1982.
  • Sprevak, Mark, 2012, “Three Challenges to Chalmers on
    Computational Implementation”, Journal of Cognitive
    Science
    , 13(2): 107–143.
  • Stoy, Joseph E., 1977, Denotational Semantics: The
    Scott-Strachey Approach to Programming Language Semantics
    ,
    Cambridge, MA: MIT Press.
  • Strachey, Christopher, 2000, “Fundamental Concepts in
    Programming Languages”, Higher-Order and Symbolic
    Computation
    , 13(1–2): 11–49.
    doi:10.1023/A:1010000313106
  • Suber, Peter, 1988, “What Is Software?” Journal of
    Speculative Philosophy
    , 2(2): 89–119.
    [Suber 1988 available online]
  • Suppe, Frederick, 1989, The Semantic Conception of Theories
    and Scientific Realism
    , Chicago: University of Illinois
    Press.
  • Suppes, Patrick, 1960, “A Comparison of the Meaning and Uses
    of Models in Mathematics and the Empirical Sciences”,
    Synthese, 12(2): 287–301. doi:10.1007/BF00485107
  • –––, 1969, “Models of Data”, in
    Studies in the Methodology and Foundations of Science,
    Springer Netherlands, pp. 24–35.
  • Technical Correspondence, Corporate, 1989, Communications of
    the ACM
    , 32(3): 374–381. Letters from James C. Pleasant,
    Lawrence Paulson/Avra Cohn/Michael Gordon, William Bevier/Michael
    Smith/William Young, Thomas Clune, Stephen Savitzky, James Fetzer.
    doi:10.1145/62065.315927
  • Tedre, Matti, 2011, “Computing as a Science: A Survey of
    Competing Viewpoints”, Minds and Machines, 21(3):
    361–387. doi:10.1007/s11023-011-9240-4
  • –––, 2015, The Science of Computing: Shaping
    a Discipline
    , Boca Raton: CRC Press, Taylor and Francis
    Group.
  • Tedre, Matti & Ekki Sutinen, 2008, “Three Traditions of
    Computing: What Educators Should Know”, Computer Science
    Education
    , 18(3): 153–170.
    doi:10.1080/08993400802332332
  • Thomasson, Amie, 2007, “Artifacts and Human Concepts”,
    in Eric Margolis and Stephen Laurence (eds), Creations of the
    Mind: Essays on Artifacts and Their Representations
    , Oxford:
    Oxford University Press.
  • Thompson, Simon, 2011, Haskell: The Craft of Functional
    Programming
    , third edition, Reading, MA: Addison-Wesley. (First
    edition, 1996)
  • Tichy, Walter F., 1998, “Should Computer Scientists
    Experiment More?”, IEEE Computer, 31(5): 32–40.
    doi:10.1109/2.675631
  • Turing, A.M., 1936, “On Computable Numbers, with an
    Application to the Entscheidungsproblem”,
    Proceedings of the London Mathematical Society (Series 2),
    42: 230–65. doi:10.1112/plms/s2-42.1.230
  • –––, 1950, “Computing Machinery and
    Intelligence”, Mind, 59(236): 433–460.
    doi:10.1093/mind/LIX.236.433
  • Turner, Raymond, 2007, “Understanding Programming
    Languages”, Minds and Machines, 17(2): 203–216.
    doi:10.1007/s11023-007-9062-6
  • –––, 2009a, Computable Models, Berlin:
    Springer. doi:10.1007/978-1-84882-052-4
  • –––, 2009b, “The Meaning of Programming
    Languages”, APA Newsletters, 9(1): 2–7. (This
    APA Newsletter is available online; see the Other Internet
    Resources.)
  • –––, 2010, “Programming Languages as
    Mathematical Theories”, in J. Vallverdú (ed.),
    Thinking Machines and the Philosophy of Computer Science: Concepts
    and Principles
    , Hershey, PA: IGI Global, pp. 66–82.
  • –––, 2011, “Specification”,
    Minds and Machines, 21(2): 135–152.
    doi:10.1007/s11023-011-9239-x
  • –––, 2012, “Machines”, in H. Zenil
    (ed.), A Computable Universe: Understanding and Exploring Nature
    as Computation
    , London: World Scientific Publishing
    Company/Imperial College Press, pp. 63–76.
  • –––, 2014, “Programming Languages as
    Technical Artefacts”, Philosophy and Technology, 27(3):
    377–397; first published online 2013.
    doi:10.1007/s13347–012–0098-z
  • Tymoczko, Thomas, 1979, “The Four Color Problem and Its
    Philosophical Significance”, The Journal of Philosophy,
    76(2): 57–83. doi:10.2307/2025976
  • –––, 1980, “Computers, Proofs and
    Mathematicians: A Philosophical Investigation of the Four-Color
    Proof”, Mathematics Magazine, 53(3):
    131–138.
  • Van Fraassen, Bas C., 1980, The Scientific Image, Oxford:
    Oxford University Press. doi:10.1093/0198244274.001.0001
  • –––, 1989, Laws and Symmetry, Oxford:
    Oxford University Press. doi:10.1093/0198248601.001.0001
  • Van Leeuwen, Jan (ed.), 1990, Handbook of Theoretical Computer
    Science. Volume B: Formal Models and Semantics
    , Amsterdam:
    Elsevier and Cambridge, MA: MIT Press.
  • Vermaas, Pieter E. & Wybo Houkes, 2003, “Ascribing
    Functions to Technical Artifacts: A Challenge to Etiological Accounts
    of Function”, British Journal of the Philosophy of
    Science
    , 54: 261–289.
    [Vermaas and Houkes 2003 available online]
  • Vliet, Hans van, 2008, Software Engineering: Principles and
    Practice
    , 3rd edition, Hoboken, NJ: Wiley. (First edition,
    1993)
  • Wang, Hao, 1974, From Mathematics to Philosophy, London:
    Routledge, Kegan & Paul.
  • Wegner, Peter, 1976, “Research Paradigms in Computer
    Science”, in Proceedings of the 2nd international Conference
    on Software Engineering
    , Los Alamitos, CA: IEEE Computer Society
    Press, pp. 322–330.
  • White, Graham, 2003, “The Philosophy of Computer
    Languages”, in Luciano Floridi (ed.), The Blackwell Guide to
    the Philosophy of Computing and Information
    , Malden:
    Wiley-Blackwell, pp. 318–326.
    doi:10.1111/b.9780631229193.2003.00020.x
  • Wiener, Norbert, 1948, Cybernetics: Control and Communication
    in the Animal and the Machine
    , New York: Wiley & Sons.
  • –––, 1964, God and Golem, Inc.: A Comment on
    Certain Points Where Cybernetics Impinges on Religion
    , Cambridge,
    MA: MIT press.
  • Wittgenstein, Ludwig, 1953 [2001], Philosophical
    Investigations
    , translated by G.E.M. Anscombe, 3rd Edition,
    Oxford: Blackwell Publishing.
  • –––, 1956 [1978], Remarks on the Foundations
    of Mathematics
    , G.H. von Wright, R. Rhees, and G.E.M. Anscombe
    (eds.); translated by G.E.M. Anscombe, revised edition, Oxford: Basil
    Blackwell.
  • –––, 1939 [1975], Wittgenstein’s
    Lectures on the Foundations of Mathematics, Cambridge 1939
    , C.
    Diamond (ed.), Cambridge: Cambridge University Press.
  • Woodcock, Jim & Jim Davies, 1996, Using Z: Specification,
    Refinement, and Proof
    , Englewood Cliffs, NJ: Prentice Hall.
  • Wright, Crispin, 1983, Frege’s Conception of Numbers as
    Objects
    , Aberdeen: Aberdeen University Press.

Other Internet Resources

  • ACM (ed.), 2013,
    ACM Turing Award Lectures.
  • APA Newsletter on Philosophy and Computers, 9(1): Fall 2009.
  • Free Software Foundation, 1996, “Overview of the GNU
    Project”. Retrieved at:
    http://www.gnu.org/gnu/gnu-history.en.html
  • Groklaw, 2011,
    “Software is Mathematics—The Need for Due Diligence”,
    by Po1R.
  • Groklaw, 2012,
    “What Does ‘Software is Mathematics’ Mean?” Part 1
    and
    Part 2,
    by Po1R.
  • Huss, Eric, 1997,
    The C Library Reference Guide,
    Web Monkeys, University of Illinois at Urbana-Champaign.
  • Rapaport, William J. (ed.), 2010,
    “Can Programs be Copyrighted or Patented?”.
    Web site (last updated 21 March 2010), Philosophy of Computer
    Science, University at Buffalo, State University of New York.
  • Rapaport, William J., 2016, “Philosophy of Computer
    Science”. DRAFT © 2004–2016 by William J. Rapaport.
    Available at
    Philosophy of Computer Science,
    manuscript.
  • Software Engineering Code of Ethics and Professional Practice
  • Smith, Barry, 2012,
    “Logic and Formal Ontology”.
    A revised version of the paper which appeared in J. N. Mohanty and W.
    McKenna (eds), 1989, Husserl’s Phenomenology: A
    Textbook
    , Lanham: University Press of America.
  • Turner, Raymond and Amnon Eden, 2011, “The Philosophy of Computer
    Science”, Stanford Encyclopedia of Philosophy (Winter
    2011 Edition), Edward N. Zalta (ed.), URL =
    .
    [This was the previous entry on the philosophy of computer science in
    the Stanford Encyclopedia of Philosophy—see the
    version history.]
  • Center for Philosophy of Computer Science

Related Entries

artificial intelligence: logic and |
computability and complexity |
computation: in physical systems |
computational complexity theory |
computer and information ethics |
computing: and moral responsibility |
function: recursive |
information |
information: semantic conceptions of |
mathematics, philosophy of |
property: intellectual |
technology, philosophy of
