Naive realism

Part of a trilogy
 * the proposed methodology, and how it guides choices in epistemology and ontology design
 * how this becomes codified in BFO formalism such as 'quality'
 * how BFO gets codified in OWL-DL (i.e. ways we might quantify and parameterize over time in DL)

Introduction
Alan is urging me to either document the term "naive realism" or change it. Reading Wikipedia under Philosophical realism suggests that there is a range of definitions of naive realism, covering a spectrum from pejorative to dignified. What I have in mind is mainly the assertion of the existence of objects. Candidate alternatives:
 * moderate realism
 * naturalism or natural realism
 * methodological naturalism (my favorite so far, but a mouthful)

Alan wants to just say "realism", I think, but the position of philosophical realism is just a starting point for the ontology development method the Foundry is proposing, and saying that other approaches are not "realist" is sort of insulting. The important thing is to explain the rejection of explanations that are Platonic or smack of subjectivity or unnecessary psychology. In particular, that "concepts" and "meaning" are out, and even mathematical entities such as numbers and manifolds are suspect.

We need to emphasize method (what we think is likely to work best) rather than ontology (which is something our formal artifacts account for and not necessarily what we "believe").

Getting theories to converge
One method for the reuse and integration of 'knowledge artifacts' (data sets, databases, knowledge bases, ontologies) is to create a single theory (logical system) in which the artifacts can all be reexpressed - that is, the artifacts, which are each provided in some idiosyncratic original form, are interpreted in terms of the theory. Then the artifacts so 'canonicalized' become comparable and one may phrase questions in terms of the same theory.

(By "theory" I do not mean anything pretentious, but merely a logical language adequate for the task of reexpressing some set of artifacts. Generally this would be defined by a vocabulary, grammar, and inference rules.  For example, a theory could be defined by a so-called "ontology" expressed in OWL-DL, in which case the vocabulary would be URIs, the grammar would be OWL's grammar, and the inference rules would be those of DL as informed by the axioms of the ontology.)
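The canonicalization idea can be sketched in a few lines of Python. This is a toy illustration only: the two "artifacts", their vocabularies, and the shared theory's terms (`gene_symbol`, `taxon`, the key maps) are all hypothetical, and a real interpretation into an OWL-DL theory would of course involve far more than key renaming.

```python
# Two artifacts, each phrased in its own idiosyncratic vocabulary.
# (Hypothetical records, for illustration only.)
artifact_a = {"hgnc_symbol": "TP53", "organism": "human"}
artifact_b = {"gene": "TP53", "species": "Homo sapiens"}

# Per-artifact interpretation maps into the shared theory's vocabulary,
# plus a value-normalization map. (All names here are made up.)
KEY_MAP_A = {"hgnc_symbol": "gene_symbol", "organism": "taxon"}
KEY_MAP_B = {"gene": "gene_symbol", "species": "taxon"}
VALUE_MAP = {"human": "Homo sapiens"}

def canonicalize(record, key_map):
    """Reexpress a record in the shared theory's vocabulary."""
    return {key_map[k]: VALUE_MAP.get(v, v) for k, v in record.items()}

a = canonicalize(artifact_a, KEY_MAP_A)
b = canonicalize(artifact_b, KEY_MAP_B)
assert a == b  # once canonicalized, the artifacts become directly comparable
```

The point of the sketch is only that each artifact gets its own interpretation map, while questions are then phrased against the single shared vocabulary.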

This approach does not necessarily stack or scale nicely. If we design theory AB for integrating A and B, and someone else independently designs theory CD for integrating C and D, then we are not necessarily any better off when we go to do an integration of AB and CD - we may have to design yet another theory ABCD for the second-tier integration.

To address this problem, we might institute design principles for such logical systems that help improve the chances that the integration AB + CD will succeed with minimal effort. Almost any set of rules that channels theory design in a convergent direction, such that two parties conforming to the rules will tend to make the same or similar theories when posed with a theory design problem, will be helpful.

Naive realism
(Not necessarily the best term to use for this idea, but I have none better to propose.)

We are considering "naive realism" as a design principle for theories related to biology, biomedicine, and perhaps other disciplines (although perhaps not all). Theories expressed in first-order logic (of which OWL-DL is a fragment) involve expressions denoting individuals, classes, and relations. The naive realism approach is to restrict the use of individuals to entities that exist in time and/or space - entities with spatial and/or temporal extent. Any entity not so anchored must be reflected in the theory as a class or relation.

For example, in naive realism the color "red" would not be an individual, but rather a class. The members of the class would be particular reds of particular things existing in space and enduring through time. For example, the red of the northbound traffic light that was standing at Highland and Gray in Arlington MA in November 2009 would be an individual, while the red of similar traffic lights in Arlington would be a class that includes it.
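The individual/class distinction in the traffic-light example can be sketched in plain Python (a toy illustration, not BFO or OWL; the `QualityInstance` type and its fields are invented for this sketch). The individual is a particular red anchored in space and time; the class "red" is, extensionally, a collection of such particulars.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QualityInstance:
    """A particular quality anchored in space and time: an individual."""
    bearer: str    # the physical thing bearing the quality
    location: str  # where the bearer exists
    when: str      # when the quality is borne

# The individual: the red of one particular traffic light.
this_red = QualityInstance(
    bearer="northbound traffic light",
    location="Highland and Gray, Arlington MA",
    when="November 2009",
)

# The class: "red" is not itself an individual but (extensionally) a
# collection of particular reds of particular spatiotemporal things.
arlington_reds = {this_red}

assert this_red in arlington_reds
```

Under the naive-realism restriction, only `this_red` (and things like it) may appear as an individual in the theory; "red" itself appears only as the class.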

This may seem artificial but (TBD). The discipline forces one in the direction of testable assertions (relations and classes that are objective) and away from vague categories such as "concept". (cite Barry 'concept' paper)

An exception is made for information artifacts, which according to the above principle should be "qualities" (like red - they are borne by particular physical things).

(TBD: Relate to BFO; cite papers by Barry, Solbrig, etc. etc.)