May 2003 Archives

The open-source Internet?

| Permalink

In some of my previous entries I’ve suggested that actor-network theory and methodology can be used as a mode of explanation in elaborating the interplay between social structures and information (and IT in general). The factor ‘openness’ emerges as the main ingredient in this elaboration when using actor-network theory to explain how actors in a given topology can affect other actors while, at the same time, being affected by them.

The explanatory power of the actor-network methodology relies on the fact that, in the same topology, both human and non-human actors (elements, structures, processes, etc.) are treated as equally able to affect and influence each other. The effect is carried via the links between the various actors as they attempt to inscribe their attributes and properties into other actors with congruent properties and attributes (see: Translation).

So, is the Internet open-source?
Or, a more appropriate question: is it possible to produce an open communication medium such as the Internet without open-source software?

Basing this argument on actor-network theory and methodology and the openness factor: had the software used to build the Internet been closed-source software hidden from outside scrutiny, the resulting product, the Internet (whether we see the Internet as a mass medium, a publishing phenomenon, a set of communication tools, etc.), would not have been as open as we see it today. Why?

To use the actor-network language and the openness factor, closed-source software is almost totally closed in both aspects: its content and its communication. With closed content (i.e. the code) it is much harder to build compatible and interoperable software tools, and much harder to get people to use it. Modification of closed-source software is limited to a very small group of people whose agenda is driven by the bottom line: profit. This suggests that the not-so-open content, and the not-so-open communication about that content, is indeed a stagnating force in the exchange of ideas, thoughts and opinions, and in innovation in general.

The open content and open communication concepts (with their attributes and properties) are indeed positively responsible for the openness of the Internet. Whether open-source software is directly responsible for the openness of the Internet, or whether both open-source software and the Internet’s openness are results of the open-source philosophy, is not very important.

In any case, the open content and open communication concepts have inscribed their properties and attributes onto the openness of the Internet (to varying degrees, depending on the various forms and flavors in which the Internet is used) and also onto open-source software.

Having defined structuration as the "process by which social structures (whatever their source) are produced and reproduced in social life" (p. 128), DeSanctis presents Adaptive Structuration Theory (AST) as a mechanism for examining the change process in a given organization by looking at the types of structures provided by advanced technologies (inherent structures) and the structures that actually emerge in human action as people interact with these technologies (DeSanctis, p. 121). AST appears to be an appropriate and natural fit for analyzing the utilization and appropriation of new technologies in social environments. DeSanctis develops AST in relation to information technology, stating that "AST provides a model that describes the interplay between advanced information technology, social structures, and human interactions" (p. 125). However, the theory can assert itself in a broader scope, as it lays down some interesting propositions that could be applied to other technologies, perhaps safely extending its scope to innovations in general. The multitude of innovations in human societies are not independent and isolated; rather, all innovations are interleaved in one way or another with information exchange. AST could be used to analyze the advent of various innovations such as the printing press, electricity, the telegraph, mass transportation, radio, the telephone, TV, the Internet, etc., and to show how the structures of these innovations penetrated the respective societies, influencing them, and how the social structures of those societies in turn influenced and modified the innovations' original intent. I will come back to this point later.

I concur with DeSanctis that the decision-making and institutional schools are not appropriate explanations and modes of analysis if taken independently of each other. Technology's or society's impacts can't be unidirectional and isolated from their surroundings: "P2. Use of AIT structures may vary depending on the task, environment, and other contingencies that offer alternative sources of social structures" (DeSanctis, p. 128). The actual process of innovation is based on social interaction. As such, new technologies come to light due to changes necessitated by organizational and institutional forces, and by society at large.

It is an indisputable fact that managers must communicate with their employees, peers and superiors in order to motivate and lead their employees, to learn about their operating environments and make successful decisions, as well as to act in their roles as figureheads, monitors, spokespersons, disseminators of information and facilitators (Trevino 1987, p. 71). In doing so, managers make conscious and unconscious decisions in choosing the ‘appropriate’ medium (face-to-face, telephone, video conferencing, audio conferencing, electronic mail, letter, memo, special reports, fliers and bulletins) for the communication task at hand. Trevino, basing her argument on the symbolic interactionist perspective and the rich-lean media scale, implies that message equivocality, contextual determinants, and the symbolic cues conveyed by the medium itself above and beyond the literal message determine a manager’s media choice for a particular communication task (p. 74). Alavi (2001) suggests that media choice directly influences social presence and task participation, and that this influence is lower under established-group conditions than in zero-history groups (p. 375).

Many aspects of media choice for mediated communication in workplace environments have evolved and developed since Trevino’s article was published in 1987. Similarly, the situational perception of various media has also changed, and the media have been accepted by employees in organizations expected to utilize them for work-related activities. A suggested course of study could analyze the acceptance and the shifting social presence of these media over time. I would like to argue that as people become more familiar with a medium, the same medium (with the same technical characteristics) could be used for more equivocal interactions, even to the point where a medium perceived to be lean and a medium perceived to be rich can be used interchangeably for the same equivocal message. This argument seems to be partially supported by Alavi when she suggests that a particular medium has different impacts under established-group vs. zero-history-group conditions.

In defining the exosomatic memory concept, Newby quotes Brookes as saying that "An exosomatic memory system is a computerized system that operates as an extension to human memory. Ideally, use of an exosomatic memory system would be transparent, so that finding information would seem the same as remembering it to the human user" (Newby, p.1028).

Brookes's profound statement seems to have strongly influenced Newby into encapsulating the analysis in his article in terms of similarity and consistency between the cognitive space and the information space: "In exosomatic memory systems, the information space of systems will be consistent with the cognitive space of their human users" (Newby, p. 1026). Such emphasis on similarity and consistency seems to come from the fact that Brookes talks of exosomatic memory systems as extensions of the human memory. In explanation of the value and the use of information systems, Newby suggests that "to improve the utility of the information systems, we would like to identify representation schemes for data sets that are consistent with human perception of those data sets" (p.1030). Is it necessary for the information space to be similar and/or consistent with the cognitive space?

The factor 'openness'

| Permalink | 4 TrackBacks

In properties and attributes: links, actors, topologies it has been suggested that the properties and attributes can be intrinsic and external.

The intrinsic properties are those that are built in as part of the process of constructing an actor. For human actors, these would be the properties and attributes (physical and mental) that do not change with the context and environment, i.e. context-independent. The external properties and attributes are those that constantly change due to the surroundings and environment, i.e. context-dependent. Thus, the intrinsic and external properties and attributes are not necessarily the same for all humans. However, some could be common, depending on the contextual situatedness of the human actor.

For non-human actors, let's take as an example an information system used in a given organizational setting. The information system comes predefined with certain functionality. Some of that functionality (usually referred to as the core functionality) is not readily modifiable; it is this functionality that defines the system, its spirit: if it is changed, then the nature of the information system has changed. Other functionalities of the information system are intentionally modifiable, to 'fit' the changing needs of the group/department/task that will use this particular information system. Modifications to these functionalities do not change the core nature of the system.
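The core/modifiable split described above can be sketched in code. This is only a minimal illustration; the class and function names below are hypothetical, not taken from any actual system:

```python
class InformationSystem:
    """Toy model of an information-system actor: core functionality is fixed,
    while other functionality is intentionally modifiable."""

    # Core functionality: changing it would change the 'spirit' of the system.
    CORE_FUNCTIONS = frozenset({"store_records", "retrieve_records"})

    def __init__(self):
        # Intentionally modifiable functionality, adapted to 'fit' local needs.
        self.adaptable = {"report_format": "summary", "approval_steps": 1}

    def modify(self, name, value):
        """Allow modification only of non-core functionality."""
        if name in self.CORE_FUNCTIONS:
            raise ValueError(f"'{name}' is core functionality and defines the system")
        self.adaptable[name] = value


system = InformationSystem()
system.modify("approval_steps", 3)      # fine: adapts the system to a department's task
# system.modify("store_records", None)  # would raise ValueError: the core is fixed
```

The design choice mirrors the text: attempts to change the core raise an error, while peripheral functionality remains open to appropriation.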

Further, in Translation in actor-network it is stated that "An issue of congruence and correspondence arises from the above discussion, for we can't compare an apple to an orange. In addition, no matter how actors are linked to one another, some actors just don’t get affected by the actors in the corresponding network topologies." So, for a translation to occur, i.e. for the properties and attributes of one actor to be transferred and inscribed into another, there must be some congruent properties and attributes.

Looking at the intrinsic and external properties and attributes and their ability to change, content (the "what" that is changed) and communication (the "means" by which the change is instigated) emerge as the congruent properties and attributes between different actors (and certainly between human and non-human actors) that are prone to being modified, and able to modify other actors, through links in a given topology.

From the above it appears that an actor, with its links in a relevant topology, can perform upon other actors and links, and be performed upon by other actors and links, within the relevant and pertinent topology. The external content properties and attributes are those prone to being modified via the link (which in turn could also be performed upon and perform upon others). The basic property and attribute of the links is their communicative openness: one-way or two-way.

Also, the modifiable content, depending on the intrinsic and external properties, manifests itself in various degrees of openness. Similarly, the communication links vary in the degree of their communicative properties, via which the properties and attributes of the actors are transferred and translated into other actors through inscription.

Cramton’s article identifies and analyzes a multitude of problems constituting failures in the process of establishing and maintaining mutual knowledge (failure to communicate and retain contextual information, unevenly distributed information, difficulty communicating and understanding the salience of information, differences in speed of access to information, and difficulty interpreting the meaning of silence), as well as a few mechanisms for establishing and maintaining mutual knowledge (direct knowledge, interactional dynamics, and category membership). Both the problems constituting failures and the mechanisms for establishing mutual knowledge have helped me explain the behavior of members of project teams (dispersed and collocated) in which I have been involved in the past, and they appear to be good candidates for analyzing my involvement in future projects in the workplace.

The definition of mutual knowledge as “the knowledge that communicating parties share in common and know they share” (Cramton 2001, p. 346) is an appropriate assumption based on various cultural, anthropological and communication studies, as well as on our everyday experience of exchanging information with others while keeping in mind the contextual and situational background that helps us understand and interpret each other. Only with a common/shared understanding, where interpretation and the meaning-making process are compatible, can we understand each other and actually communicate. In organizational settings, the failure to establish and maintain mutual knowledge has negative effects on a dispersed team’s decision quality, productivity and relationships (p. 349).

Social Shaping of ICTs and Evaluation

| Permalink | 2 TrackBacks

Kling’s article addresses interesting issues relating to how computing has affected social structures, both institutional (corporate and non-corporate) and public, and also how the underlying social structures have influenced computing. The article ought to be read in light of the fact that it was published in 1980 and that it is a meta-analysis: it examines various studies and research that analyzed computing and computers from 1950 to 1979. Besides, we need to be mindful that the notion of computing and computers prior to 1980 was somewhat different than the way we perceive it today. Considering that there were 200,000 computers in use in the US (Kling, p. 63), and a rough estimate of 200 million people living in the US, that gives us roughly one computer per thousand people.

In addition, the pervasiveness of computing technology pre-1980 was very low compared to today. At that time, computers were mostly expensive central mainframes used by corporations, institutions and government agencies, with terminal access only, and used strictly for business. The concept of the personal computer as we know it today was only an idea for the future. So the actual ‘use’ of computers was perhaps even more limited than the one-computer-per-thousand-people ratio suggests. Many users were only secondary users of computer functions/services, usually via an intermediary, such as police officers in the field checking police records via dispatchers during their work hours. Further, computer technology pre-1980 was primarily used as a data-processing aid for cranking out reports, statistical analysis, and efficient and accurate reporting. This mechanical viewpoint of computers reinforces the idea that computers are like any other resource at a manager’s disposal, to be used for the goals of the institution and corporation, whether for innovation, work, life, decision making or organizational power.

“The new institutionalism in organization theory tends to focus on a broad but finite slice of sociology’s institutional cornucopia: organizational structures and processes that are industry-wide, national or international in scope” (Powell et al., p. 9)

“Institutionalized arrangements are reproduced because individuals often cannot even conceive of appropriate alternatives (or because they regard as unrealistic the alternatives they can imagine). Institutions do not just constrain options: they establish the very criteria by which people discover their preferences. In other words, some of the most important sunk costs are cognitive” (Powell et al., p. 11)

Starting from the premises of new institutionalism with its scope, constraints and criteria establishment, Orlikowski and Barley (2001) proceed to elaborate that information technology (IT) research and organization studies (OS) have much more in common than what has been already presented in scholarly communication and practice in both areas of study.

Considering that IT research is mostly practical in nature, dealing with the design, deployment, and use of artifacts that represent tangible solutions to real-world problems (Orlikowski et al., p. 146), that OS is theoretical, as it develops and tests parsimonious explanations for broad classes of phenomena (p. 147), and that "organization studies (OS) and information technology (IT) are disciplines dedicated respectively to studying the social and technical aspects of organization" (p. 146), they posit that the differences between IT research and OS are epistemological in nature rather than in subject matter, treating the issues of organization at different levels, emphasizing the particular and the general respectively: "There can be no general knowing that is not somehow grounded in particulars and no particular explanation without some general perspective. Particulars are important for theory building, and theory is important for making sense of the specific" (p. 147)

In Open source on hold in Oregon, the Business Software Alliance, a software industry representative, claims that the Oregon state legislation encouraging state institutions to consider open source will "squelch software innovation, does not take into account hidden costs such as maintenance of open-source software and might actually harm the high-tech industry in Oregon."

The claim that open source will squelch innovation has been used by proprietary software makers without any viable argument or any research. If nothing else, history has shown that open source has been a source of major innovations surrounding the Internet and beyond.

In addition, it is puzzling at best why the proprietary software makers are against legislation that merely encourages the consideration of open source and does not in any way mandate it.

The other point made in this article by the proprietary software makers is that such legislation "might actually harm the high-tech industry in Oregon."

Well, the proprietary software makers should realize that the competition in the real world of software development now includes a factor that was previously absent. Why not let the 'market forces' decide whether open source should be used by the various government institutions?

Among other concerns, it is precisely the issue of cost (relevant to market forces) that the Oregon bill is addressing: "Rep. Phil Barnhart, the bill's author, claimed the law is necessary to help agencies cut costs, to enable better interoperability among IT systems and to increase opportunities for Oregon's high-tech companies and workers."

It appears that the proprietary software makers’ lobbying efforts to block the use of open-source software are themselves hurting innovation in software development, by trying to remove real competition from the ‘market’.

Actor-Network Theory and Managing Knowledge

| Permalink | 1 TrackBack

Whether one utilizes and appropriates Actor-Network Theory, paradoxically, not as a theory but as a methodological approach to ethnomethodology, or treats ANT as an actual theory in the true sense of a parsimonious theory, with the classical philosophical understanding and the ability to predict (i.e. establish cause-effect relationships among) phenomena around us, two properties are common and fundamentally critical to any color, flavor or form ANT might have emerged and evolved into: inscription and translation, with their ability to act at a distance.

The distinction between Actor-Network Theory and ANT is not only semantic in nature, since “ANT” is not just an acronym for Actor-Network Theory. In going from Actor-Network Theory to ANT, the concepts, ideas and thoughts of the original inscription of Actor-Network Theory performed, and were performed upon, in the web of scholarly discourse, thus translating themselves into self-sustaining quasi-theories. Had actor-network theory not been reduced to ANT, perhaps it could not have become as pervasive as it has; but this did not happen without it being translated, transformed and performed. This distinction is evident from Law’s and Latour’s statements in Law and Hassard (1999). In expressing his wishful thinking to recall ANT back to its origins, Latour, one of the original authors who laid down the principles of what has become ANT, states:

The translation process enables an actor/entity (simple or complex) to inscribe its properties and attributes onto other actors in the pertinent topologies. This suggests that there is a movement of some sort from one actor to another. Certainly, in any given topology, not all actors are able to inscribe their properties and attributes equally into other actors. Some properties and attributes are more prevalent in any given topology. What determines the strength of the attributes and properties?

As with all things in our lives, some things are more prone to change than others. For example, when a new information system is brought into an organization, the appropriation process might modify the information system to a great degree to fit the organization's needs. At other times, the organizational structure or tasks might change as a result of the appropriation of a system that does not allow much modification to its predefined functionality.

It appears that the properties and attributes of actors can be grouped into at least two groups: 1) intrinsic properties and attributes, those that are not modifiable as a result of links to other actors; 2) external properties and attributes, those that have been acquired and appropriated through the modification/translation process and are further modifiable.

Translation in actor-network

| Permalink | 6 TrackBacks

Actors, and the links with which they are connected to each other to construct/produce a topology with a given boundary, are the basic building blocks of the actor-network mode of explanation. (re: Why actor-network?)

So, how do the actors in a particular topology influence each other? This is done through their links. Actor-network theory suggests that a process of translation takes place, a process that explains how and why some actors take on the attributes and properties of the actors they are connected to. Thus, certain properties of one actor are transferred to other actors through their mutual links. The question then arises as to what/which properties and attributes of an actor can be transferred onto another actor and initiate a process of translation in the actor it is connected to. Further, what is the role of the properties and attributes of the links in the process of translation/transfer? Which properties and attributes of the links are important to this process?

An issue of congruence and correspondence arises from the above discussion, for we can't compare an apple to an orange. In addition, no matter how actors are linked to one another, some actors just don’t get affected by the actors in the corresponding network topologies.
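The congruence requirement can be roughly illustrated in code as a function that inscribes values only for the attributes two actors actually share. This is a sketch under my own simplifying assumptions (actors as plain dictionaries; the apple/orange attributes are invented for the example):

```python
def translate(source: dict, target: dict) -> dict:
    """Inscribe the source actor's values into the target actor,
    but only for congruent (shared) attributes; all others are untouched."""
    congruent = source.keys() & target.keys()  # attributes both actors possess
    return {key: (source[key] if key in congruent else value)
            for key, value in target.items()}


apple = {"color": "red", "crunch": "high"}
orange = {"color": "orange", "peel": "thick"}

# Only 'color' is congruent; 'crunch' cannot be inscribed into the orange,
# and 'peel' is unaffected by the translation.
result = translate(apple, orange)  # {'color': 'red', 'peel': 'thick'}
```

The sketch captures the point of the paragraph above: no matter how two actors are linked, an attribute with no congruent counterpart in the target simply does not get transferred.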

why actor-network?

| Permalink | 5 TrackBacks

In Social constructionism vs. technological determinism it has been suggested that the actor-network theory and its methodological framework may provide the language and the mode of explanation to elaborate in a common framework the interplay between human and non-human entities.

Most importantly, the major contribution of actor-network theory seems to be the fact that it treats the human and non-human elements (or actors, as the various elements in a given topology are named in the actor-network language) alike, as being able to influence each other.

For example, a network topology representing a department in a given organization may consist of various human and non-human actors such as employees, manager(s), inter- and intra-departmental structures, communication channels, forms of communication, information and communication systems, meetings, tasks, routines, etc. All of these actors are connected to each other via links (single or multiple).

So, what next? Well, if actors are linked to each other they can potentially influence each other. For example, given the departmental structure, the manager has a direct link/communication with the employees and in many cases affects how the employees do their job. At the same time the employees may affect how the manager does his/her job regarding a particular project. However, the influence that the manager can exert on the employees perhaps is stronger than the influence any particular employee might be able to exert on his/her manager. Here we see an example of the actor 'structure' as a moderating actor in the communication/link between the manager and the employees.

Another example would be the use of a particular information system for performing certain project-related tasks. If a particular system is already being used for given tasks, some limiting capabilities of the system when used for a similar task will affect how the task is performed by the employees. When cost becomes an issue (we can't always have the systems changed the way we want), the functionalities of a particular system might even define the departmental structure and the scope of the task. Here we see an example of an information technology actor/artifact having a say in how tasks are performed.

If actors in a given topology can affect each other, what then are the properties and attributes of the actors and the links that can further help us elaborate and explain the nature of a particular topology?

By Mentor Cana, PhD
more info at LinkedIn
email: mcana {[at]} kmentor {[dot]} com

About this Archive

This page is an archive of entries from May 2003 listed from newest to oldest.

April 2003 is the previous archive.

June 2003 is the next archive.

Find recent content on the main index or look in the archives to find all content.