Reductionism can be viewed as an essential theoretical component of modern natural sciences such as biology or genetics. Although reductionist approaches work very well as long as “natural” phenomena are studied, things become more difficult when human beings are involved as the object of study. Let us first see how this notion of reductionism could be briefly defined, and then try to apply this definition to software engineering.
Reductionism can be understood as the process through which the observed behaviour of a complex system is defined and explained as the product of the operation (or interaction) of its identified components. In neurobiology, for instance, a nice example is the way the notion of “addiction” is tackled from a neurobiological/behaviouristic point of view. A particular behaviour or phenomenon, such as the act of taking drugs, is observed and defined, and explanations for this phenomenon are sought on a psychological (cognitive, behaviouristic) level, and further down on a neurological and a molecular level. Technological advances such as fMRI or PET scanning now allow us to observe changes in the brain as they happen during the display of a particular behaviour, letting researchers establish causal relationships between a particular observed behaviour and what happens in the brain of the person being observed.
However, the problem with such an approach is that things can get quite complicated as soon as a human observes the behaviour of another human. Poststructuralist theory, for example, emphasises the multiplicity and the historical, cultural and social contingency of human perspectives: as the way we make sense of the world is shaped by culture and by our own set of values, the meaning of notions such as “addiction” becomes slippery.
So let us try to apply all this to software engineering. What happens if we now take interactive systems as our object of study? Briefly, an interactive system can be described as a combination of software and hardware, both of which can perform a number of basic operations. These operations are combined in a certain way so as to provide particular functionalities to the system’s users. Can these functionalities be described purely as the particular combination of these basic operations? As I’ve tried to argue in previous posts, it can be useful to consider the meaning of the behaviour of an interactive system as emerging in the framework of its interaction with its users. It is therefore not fixed in the particular combination of the system’s ‘logical’ elements, but is dynamically created through the actual use of the system by its users. This is where a rift opens between the meaning of the system for its designers, who focus on the combination of the logical elements composing the system, and its meaning for the user, which is dependent on his or her own system of representations. Can reductionism therefore provide a conceptual tool for better understanding the gap that exists between system designers and their users?
Related posts:
- Intertextuality and User interfaces as Relational Systems of Representations
- Interactionism as a Framework for Understanding User Interaction
- Research proposal : A Foucauldian analysis of the evolution of the discourse about software development methodologies
- A short and biased history of software development methodologies
- Software as Discourse
Dear Pascal:
Greetings from Taiwan. This is Jyhjeng Deng, who met you at the dinner table at the 12th ICIES conference in Funchal. I looked at your note in this blog and it is very interesting. I encourage you to explore this matter further, and I hope you will present even more insightful reflections on reductionism and software engineering.
Cheers,
J Deng
Hi Jyhjeng,
Thank you for your encouragement! Hopefully I will soon be able to explore these themes a bit more in the shape of ‘real’ academic papers!
Best Regards,
Pascal
In my experience, grounded theory methods are a reductionist approach to eliciting social construction. Their approach to ‘coding’ is based on identifying both ‘a priori’ and ‘substantive’ codes derived through interviews and/or focus groups. I’m not sure it’s therefore inductive; more semi-deductive. I’ve read several studies that have started out with a pluralist attempt only to conclude that a pragmatic and thematic framework is sufficient. The literature also includes several studies that avoid defining a ‘core category’ and derive a hypothetico-deductive conclusion that could easily have been the basis of a naive proposition or ‘common knowledge’. Grounded theory concept elicitation is dependent on social construction, and the analyst is forced to introduce his own personal constructs through ‘coding’ of the data. If tacit knowledge ‘is’ responsible for infecting the authenticity of the data, then that’s an inherent problem of the method in its use of ‘selective coding’ and of an abstraction process which attempts to normalise the data across several case studies. Your dialectic assessment appears balanced, and shows how cross-discipline research assists theory into practice. It may also serve to reduce the pretentious and ambiguous nature of monological forms of reasoning, which prefer to create ambiguity in meaning through discourse and linguistics, usually when the opportunity for conducting substantive research dries up!
The best way to reduce the gap that exists between system designers and users is test driven development. Quite simply, design and build a model, see how people use it, and then amend your original design / model; repeat until it works well.
I’m not sure reductionism or philosophical notions have much to do with it at all.
Indeed, you could just as easily have argued the complete opposite: “Holism and Software Engineering” and pretty much come up with the same load of nonsense.
Hi Dilbert (is that your real name?),
Thank you for your comment, which I found quite refreshing! Although I would prefer to define the approach you’re describing in a more general way as maybe ‘iterative software design’, I agree with you when you say that test-driven development is the best way to handle most software development projects. It is the approach I came to favour for the projects I’ve been involved in.
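For readers who haven’t come across the practice, the loop behind test-driven development can be sketched roughly as follows (a minimal illustration only; the `slugify` function and its expected behaviour are invented for the example, not taken from any real project):

```python
# Step 1: write a failing test that captures the expected behaviour.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces  ") == "spaces"

# Step 2: write just enough code to make the test pass.
def slugify(title: str) -> str:
    """Turn a page title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Step 3: run the test; when users' expectations change, amend the
# test first, then the model, and repeat.
test_slugify()
```

The point of the loop is exactly what you describe: the test encodes a provisional model of how users expect the system to behave, and each iteration amends that model against actual use.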
However, the issue I tried to address in this article (and more generally in this blog) is the following: how do we go about designing this ‘model’ you’re describing in your comment, and what are the things that can make this process particularly tricky? I think this is where the developer’s understanding of how the system should behave becomes central, and it relies on his or her interpretation of how users would, will and do use the system. I think that one of the mistakes that can be made in that regard is to see the system to be designed as a collection of ‘functionalities’, rather than as something often much more complicated involving objects, the relationships between them, and their representation and manipulation through the system.
Again, this is where I would agree with you when you say that an iterative design approach is the best way to arrive at an appropriate system. But the question is: how can we make sure that we design a ‘good enough’ initial model, and how do we evaluate whether it evolves in line with users’ and domain experts’ expectations? I’ve set up this blog to try and answer such questions by making use of concepts borrowed from certain traditions in the social sciences (or in the philosophy of science), as I think this particular body of knowledge contains a lot of interesting concepts that could be applied to software design: in short, I’m interested in exploring the notion of ‘software as discourse’ and the implications of such an approach in terms of design methodology.
I would encourage you to read more of the articles featured in this blog; maybe you’ll find stuff that doesn’t look so much like a ‘load of nonsense’ to you. I look forward to your reply!
Best Regards,
Pascal
I think the simple answer to “how do we develop a good enough initial model?” is through experience and education. For example, if we hold all other variables constant (e.g., intelligence, common sense, creativity, background, etc.), a person educated to design a web page would probably tend to do better than someone with no training. However, a person with a lot of experience using web pages might perform better than someone who is only educated. A person with both education and a lot of experience will probably do best.
If, instead, you ask how does one build a model with no education or experience…
My answer would be…you wouldn’t…as you would have no basis for even understanding the problem. Even those who build things ‘for the first time’ as it were… the great philosophers and scientists… did get their ideas from something, someone, or somewhere. And many of their initial ideas were rubbish. Again, only iterative design worked…even for them.
You can’t get something from nothing as they say.