Recently in Social Construction Category

Done with my dissertation

| Permalink

This past Tuesday I passed my dissertation defense and presented my public dissertation this afternoon. Yes, I'm all done! :) Hopefully now I will have more time to keep writing here.

Dissertation title:
Open Access Repositories in the Cultural Configuration of Disciplines:
Applying Actor-Network Theory to Knowledge Production by Astronomers and Philosophers of Science

How to smash a home computer

| Permalink

How to smash a home computer:

This is just funny! It is also quite revealing: despite the problems with IT, it shows that human actions and social contexts are the main culprits behind data loss.

presenting at ASIS&T 2004

| Permalink

Whoever is reading this, just to let you know that I will be presenting at the ASIS&T 2004 Annual Meeting, "Managing and Enhancing Information: Cultures and Conflicts" (ASIST AM 04), in Providence, RI, on November 16, 2004, 5:30-7:00pm.

As a part of a panel titled Diffusion of Knowledge in the Field of Digital Library Development: How is the Field Shaped by Visionaries, Engineers, and Pragmatists?, I’ll be “theorizing on the implication of open source software in the development of digital libraries”.

Will you be there?

Panel Abstract:
“Digital library development is a field moving from diversity and experimentation to isomorphism and homogenization. As yet characterized by a high degree of uncertainty and new entrants in the field, who serve as sources of innovation and variation, they are seeking to overcome the liability of newness by imitating established practices. The intention of this panel is to use this general framework, to comment on the channels for diffusion of knowledge, especially technology, in the area of digital library development. It will examine how different communities of practice are involved in shaping the process and networks for diffusion of knowledge within and among these communities, and aspects of digital library development in an emerging area of institutional operation in the existing library institutions and the specialty of digital librarianship. Within a general framework of the sociology of culture, the panelists will focus on the following broader issues including the engagement of scholarly networks and the cultures of computer science and library and information science fields in the development process and innovation in the field; involvement of the marketplace; institutional resistance and change; the emerging standards and standards work; the channels of transmission from theory to application; and, what 'commons' exist for the practitioners and those engaged with the theoretical and technology development field. The panelists will reflect on these processes through an empirical study of the diffusion of knowledge, theorizing on the implication of open source software in the development of digital libraries, and the standardization of institutional processes through the effect of metadata and Open Archive Initiative adoption.

The panel is sponsored by SIG/HFIS and SIG/DL”

technology doesn't make moral choices, humans do

| Permalink

From Judges leave technology's moral choices to humans:

The court's decision doesn't condone the theft of copyrighted material. That is wrong and will always remain so. Peer-to-peer networks have other uses, however, particularly for the many lesser-known bands, artists and filmmakers that embrace file-sharing for its distribution power.

The court's ruling rightfully recognizes that technology doesn't make moral choices, humans do.

Fewer students major in tech reports on the declining number of students entering and graduating from IT-related degree programs, including information science/studies.

"In the University of Pittsburgh's information science program, which combines the study of information technology and how people use it, the number of students majoring has dropped to 200 for this school year, said Bob Perkoski, IS undergraduate program director. Last year, 229 students were majoring in IS and the year before, 260, Mr. Perkoski said."

It is interesting to see graduate numbers declining in the field of information science/studies even as the utilization of information technology around us keeps increasing. This isn't to say that information science/studies professionals are the only graduates/experts who can elucidate the interplay of IT and IS and the social structures within which they are embedded. However, who else is better positioned to study and explicate these relations? Computer science/engineering graduates have traditionally concentrated more on the technology than on its social significance and implications. On the other side, the social sciences do not place enough emphasis on technology as an important determining actor in the complex web of socio-technological interconnections.

Nevertheless, the decline might not have any immediate effects in real life, since in practice it is rarely recognized that information science/studies graduates are the best positioned to deal with the interplay of IT/IS and the relevant social structures.

paper superior to digital technology for archiving

| Permalink

From "Digital Information Will Never Survive by Accident":

"Beagrie: In the right conditions papyrus or paper can survive by accident or through benign neglect for centuries or in the case of the Dead Sea Scrolls for thousands of years. It takes hundreds of years for languages and handwriting to evolve to the point where only a few specialists can read them.
In contrast, digital information will never survive and remain accessible by accident: it requires ongoing active management. The information and the ability to read it can be lost in a few years. Storage media such as paper tape, floppy disks, CD-ROM, DVD evolve and fall out of use rapidly. Digital storage media have relatively short archival life-spans compared to other media. As the volumes, heterogeneity, and complexity of digital information grows this requirement for active management becomes more challenging and more critical to a wider range of organisations."

I already have a problem reading/opening some papers/files that I wrote during my undergrad studies using WordStar (or something similar) in a school computer lab.

the social construction of Unix, C, and Linux

| Permalink

From Unix's founding fathers:

"It is that interplay between the technical and the social that gives both C and Unix their legendary status. Programmers love them because they are powerful, and they are powerful because programmers love them. David Gelernter, a computer scientist at Yale, perhaps put it best when he said, “Beauty is more important in computing than anywhere else in technology because software is so complicated. Beauty is the ultimate defence against complexity.” Dr Ritchie's creations are indeed beautiful examples of that most modern of art forms."

My emphasis in bold; couldn't have said it better. After all, we knew that coders and programmers are not "lone scientists". :)

Alan Kay's food for thought regarding personal computing

| Permalink

Alan Kay's food for thought as reported in A PC Pioneer Decries the State of Computing, regarding personal computing:

But I was struck most by how much he thinks we haven't yet done. "We're running on fumes technologically today," he says. "The sad truth is that 20 years or so of commercialization have almost completely missed the point of what personal computing is about."

But what about all those great things he invented? Aren't we getting any mileage from all that? Not nearly enough, Kay believes. For him, computers should be tools for creativity and learning, and they are falling short. At Xerox PARC the aim of much of Kay's research was to develop systems to aid in education. But business, instead, has been the primary user of personal computers since their invention. And business, he says, "is basically not interested in creative uses for computers."

Note the emphasis that computers could/should have been used more for creative processes and learning. The potential is there; however, the social construction of computing technologies has mostly been led by commercial goals. Thus, the interplay of computing technology and social structures has mostly served commercial interests, and less so the potential for creativity, invention, and innovation.

The question arises, then: how do we get to more creative uses of technology for learning and novel ways of innovating? Perhaps through open source computing, where computing tools geared more towards learning can act as stimuli for creative innovation. But then, anything creative that can make money is imprisoned within the commercial realm and loses its potential for learning and creativity. A way needs to be found such that creativity is left to bloom within its own realm, free from commercialization. Proprietary software (being a closed environment) is responsible for slowing down innovation and creativity. I would say: the way forward is open computing …

The Role of Children in the Design of New Technology

| Permalink

The Role of Children in the Design of New Technology

Children play games, chat with friends, tell stories, study history or math, and today this can all be done supported by new technologies. From the Internet to multimedia authoring tools, technology is changing the way children live and learn. As these new technologies become ever more critical to our children’s lives, we need to be sure these technologies support children in ways that make sense for them as young learners, explorers, and avid technology users. This may seem of obvious importance, because for almost 20 years the HCI community has pursued new ways to understand users of technology. However, with children as users, it has been difficult to bring them into the design process. Children go to school for most of their days; there are existing power structures, biases, and assumptions between adults and children to get beyond; and children, especially young ones have difficulty in verbalizing their thoughts. For all of these reasons, a child’s role in the design of new technology has historically been minimized. Based upon a survey of the literature and my own research experiences with children, this paper defines a framework for understanding the various roles children can have in the design process, and how these roles can impact technologies that are created.
(Full Paper in PDF)

E-voting: Nightmare or actual democracy?

| Permalink

The public domain discourse surrounding e-voting is very perplexing. Similarly to other articles, E-voting: Nightmare or nirvana? questions the security of e-voting systems and their viability for use in real elections.

"Once the province of a small group of election officials and equipment sellers, e-voting has exploded into the popular consciousness because of a spreading controversy over security and verifiability. Thanks to a concerted effort by opponents and to the missteps of voting machine vendor Diebold Election Systems, most of the news has been bad."

I have said this before in a previous entry (secure enough for consumerism, not good enough for voting?!) and here it is again: How is it that we can't trust e-voting security because voting would be done over the Internet, when the same Internet is used for millions of dollars in daily transactions between consumers and companies and business-to-business? The same Internet is secure enough for commerce and can be trusted with billions of dollars. Yet, it is not secure enough for voting?

Second, the missteps by Diebold Election Systems, which produces e-voting machines, are curable by the use of open source e-voting systems that are already in use in other places around the world.

Yes, there are potential problems with e-voting systems. These are the same issues that trouble all new technologies in the appropriation phase by the users. However, to claim that these issues are worse than those that troubled and still trouble e-commerce systems is absurd.

Social Issues Surround Social Software

| Permalink | 2 TrackBacks

From Social Issues Surround Social Software:

"While the answer may be elusive, panelists at the Supernova 2004 conference here agreed that the social dynamics around the use of burgeoning collaboration tools such as online social networking services, Weblogs and wikis are often as important as, if not more important than, the technologies themselves."

I would like to make one correction to the above quote: it isn't that social dynamics (and social structures) are often as important; they are always as important, if not more important. And this isn't true only for social software and collaboration tools; it is true for all types of interactive information and communication systems, and technology in general. Technology meant to aid people's tasks is meant to be used by people in various contexts. As such, the technology by itself cannot deliver the sought-after results. It is the interaction between the technology and the human factors in given social structures and contexts, including the properties of the task, that hopefully results in the desired outcomes.

Once and for all we need to get over the irrational idea that social structures, human actions, and tasks can be bent to fit the technology. Yes, they can, but don't expect the desired results...

socio-technological definition of "digital library"

| Permalink

When discussing the subject of digital libraries (DLs), often the very definition and meaning of the phrase "digital library" is questioned. This is expected due to the historical, practical and theoretical development of digital libraries as technologies (computer and information systems) as well as social structures.

Below I provide two definitions, by Borgman (1999) and Lesk (1997), that have been widely used by practitioners and researchers. Needless to say, both definitions embody the technical and the social nature of digital libraries.

Borgman (1999) attempts to explicate the meaning and interpretation of the phrase "digital library" through the analysis of various definitions of "digital libraries" coined by the research and practice communities claiming to be somehow related to digital libraries, and to assess and identify possible influences of those definitions in the relevant communities. Borgman identifies two distinct senses in which "digital library" has been used (p. 227). The technological definition, stating that "digital libraries are a set of electronic resources and associated technical capabilities for creating, searching and using information" (p. 234), is contrasted by the social view, stating that "digital libraries are constructed, collected and organized, by (and for) a community of users, and their functional capabilities support the information needs and uses of that community" (p. 234).

Another workable and widely used definition is provided by Lesk (1997): "Digital libraries are organized collections of digital information. They combine the structuring and gathering of information, which libraries and archives have always done, with the digital representation that computers have made possible" (p. XIX).

References:
Borgman, C. L. (1999). What are digital libraries? Competing visions. Information Processing & Management, 35 (3), 227-243.

Lesk, M. (1997). Practical digital libraries: Books, bytes and bucks. San Francisco, CA: Morgan Kaufmann.

At last, there is a realization that information and communication technologies do not necessarily help the 'disadvantaged and vulnerable groups' by way of some magic. Given that the tools of economic development in most cases reflect the social structures within which they function, thus 'favoring' the people in 'power', a concerted effort is needed to ensure that people less likely to 'magically' benefit from such advances do indeed reap the benefits.

The 'Technologies of a Digital World' conference/Expo seems to be an effort in the right direction. At least they are emphasizing that something other than 'magic' needs to be done.

"Technology is an enabler as well as a catalyst to ensure companies operate profitably and governments operate more efficiently in the global environment. But technology should also be the medium for people from all walks of life to harness the new opportunities offered by ICT, and act as fundamental elements for creating new skills and shaping mindsets to churn the engine of the knowledge-economy."
The Expo and Seminar, first of its kind to be held in Brunei, carries the theme, 'Technologies of a Digital World' and is centred on the development of technologies suited to the disadvantaged and vulnerable groups and the development of affordable technologies to facilitate people's access to ICT.

actor-network theory or ANT ?

| Permalink | 2 Comments

One of the major issues with the actor-network methodology is that there are no ready-to-use steps/procedures on how to go about operationalizing the various actor-network related concepts. Many of the concepts are dispersed amongst the writings of Latour, Callon, Law, Bijker, Akrich, Hassard, and a few other authors. One of the most informative sources is the book "Actor Network Theory and After" by Law & Hassard.

As actor-network theory and methodology got translated into ANT (interestingly enough, a theory and methodology becoming the subject of its own theorization through the concepts of translation and inscription), many researchers have made their own particular attempts at operationalizing the concepts relevant to their lines of inquiry.

The point I'm trying to make is that we have bits and pieces of attempts to operationalize various actor-network related concepts; however, we lack an overall framework. The answer to why this is so is pretty much provided in the above-mentioned book, in the chapter "On recalling ANT" (by Latour), stating that actor-network was only meant to be a way of doing ethnomethodology and not a theory (p. 19). So, when people talk of ANT, it usually means the theorizing of actor-network in various forms and flavors, while actor-network is more of a way of doing ethnomethodology.

Latour makes the argument that the acronym ANT is not simply an acronym. Rather, it is a result of the process of translation by way of which actor-network theory and methodology became ANT (with various flavors). So, the process of translation produced multiple ANTs, each stressing different concepts related to the actor-network methodology/theory.

As a result, it would seem that ANT has different meanings pertinent to the context and the line of inquiry it is used in and applied to. The process of translation is given as the reason.

Latour explains this very clearly in the chapter "On recalling ANT".

SEs meaning mediation; suppressing controversy

| Permalink

The idea that search engines (SEs) suppress controversy is indeed real. As argued in Do Web search engines suppress controversy?, the suppression is not intentional; however, Google's bottom line means good results delivered quickly, not necessarily an attempt to cover all sides of the story/issue about which an information seeker is trying to find information.

I've tried to explain this sort of mediating power/role of SEs in an earlier blog entry: search engines' meaning mediation power.

secure enough for consumerism, not good enough for voting?!

| Permalink

In the past year or so we have seen various attempts at online voting, only to see them scrapped because they are not secure enough. Pentagon Drops Plan To Test Internet Voting is the latest report on such an initiative, stating that "The Pentagon has decided to drop a $22 million pilot plan to test Internet voting for 100,000 American military personnel and civilians living overseas after lingering security concerns, officials said yesterday."

How is it that we can't trust security because voting would be done over the Internet, when the same Internet is used for millions of dollars in daily transactions between consumers and companies and business-to-business? The same Internet is secure enough for commerce and can be trusted with billions of dollars. Yet, it is not secure enough for voting?

Something is wrong … perhaps the following explains it (from the same article): "The American pullback is in direct contrast to Europe, where governments are pursuing online voting in an attempt to increase participation. The United Kingdom, France, Sweden, Switzerland, Spain, Italy, the Netherlands and Belgium have been testing Internet ballots."

Ref: Media Control: Open communication technologies as actors enabling a shift in the status quo

google's personalized 'jewel'

| Permalink

Google does it again. As with many of the practical implementations in the search world, Google is first again: first in implementing it in the real world, not necessarily in research. As far as research is concerned, personalized searches have been discussed plenty.

This new personalized web search by Google utilizes facet-aided searches.

The entire search is dynamic. Once you set up the profile (very simple and menu/directory driven), the left side shows the built query. You can still type a search term. The FAQ shows a bit of how things are supposed to work.

In any case, the search is operational (beta), and once the relevant docs are returned, there is a small sliding bar that can be moved left-right in order to dynamically relax or restrict the personalization.
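One plausible way such a slider could work, purely my speculation and not Google's actual method, is to blend a document's generic relevance score with a profile-based score, with the slider position controlling the mix. A minimal sketch (all names and scores are hypothetical):

```python
# Hypothetical sketch of a personalization "slider" as a score blend.
# alpha = 0.0 -> plain ranking only; alpha = 1.0 -> fully personalized.

def blended_rank(results, alpha):
    """results: list of (doc, base_score, personal_score) tuples.
    Returns docs ordered by the blended score, highest first."""
    scored = [
        (doc, (1 - alpha) * base + alpha * personal)
        for doc, base, personal in results
    ]
    return [doc for doc, score in sorted(scored, key=lambda t: -t[1])]

# Hypothetical documents: one generically popular, one matching my profile.
results = [
    ("generic-news", 0.9, 0.2),
    ("my-topic-blog", 0.5, 0.95),
]

print(blended_rank(results, alpha=0.0))  # -> ['generic-news', 'my-topic-blog']
print(blended_rank(results, alpha=1.0))  # -> ['my-topic-blog', 'generic-news']
```

Moving the slider simply re-sorts the same result set with a different alpha, which would explain why the re-ranking feels instantaneous.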

Interesting stuff! Just when you think you have learned how Google works! :)

Now all the other search engines will try to do the same. Why don't they start something before Google does, for a change?! What are they afraid of?

(thanks to for the link)

bad scientific/technology journalism or ...

| Permalink

In the Supercomputers Think Fast with New Software article there is no mention of the word 'think', even though it is in the title/subject of the article.

Is this just intentionally bad journalism, meant to get people to read the article because they believe computers and thinking make an interesting conjunction? Or does the journalist really not know that computers (even supercomputers) cannot really think, but only process information/data?

Talk about the social construction of concepts. What goes on in the minds of people who believe computers can think? Do they believe that computers are always right and/or should always be trusted as such?

strange world: Court stops DVD-copying software

| Permalink

From Court stops DVD-copying software:

"A US court has told software company 321 Studios to stop selling a program that lets people copy DVDs."

Hmmm... Where is the logic in this? Why not stop the selling of VCRs, because they can be used to make illegal copies of movies on video tapes?

Technology isn't the real problem - BUT, it might be

| Permalink

From Technology isn't the real problem:

"A person trapped in the cold can use a cell phone to call a tow truck. Medical advances mean people once doomed are now up and moving. Information - as well as trash and useless drivel - is immediately available on the Internet."
"Technology isn't the issue. The problems and the answers are within our hearts, not in our factories."

But let us not forget that technology can be a problem. For example, if the potential of nuclear power had not been known during WWII, there would have been no nuclear device capable of indiscriminate mass destruction.

So, rather than claim that "Technology isn't the real problem" or that humans and human behavior are not the real problem, we should embrace the reality that BOTH humans and technologies can be problems (together or separately), depending on the context and its immediate as well as distant environments, both in time and space.
[see Social constructionism vs. technological determinism,
technology's performative function - limitations and restrictions,
Technology makes us unwitting slaves - BUT it does not have to be that way]

What we need is the wisdom to balance the technological and social forces with the intention of improving the human condition around the world. What we should be concerned about is when technology is used to achieve materialistic goals with no concern for human life and human dignity.

The machine that invents ?!

| Permalink

From The machine that invents:

"His first patent was for a Device for the Autonomous Generation of Useful Information," the official name of the Creativity Machine, Miller said. "His second patent was for the Self-Training Neural Network Object. Patent Number Two was invented by Patent Number One. Think about that. Patent Number Two was invented by Patent Number One!"

Is it really possible for machines to 'invent'? Can machines really discover anything more than what has been embedded/inscribed into their design, implicitly or explicitly, by their human designers? Perhaps it would be wiser to say that machines can discover things quicker due to their enormous computing power. But discoveries and inventions are two different activities.

is the UN's information society summit doomed to fail?

| Permalink

Why UN's information society summit is doomed to fail provides an interesting analysis of why the UN's information society summit might fail.

Here are the two reasons it provides:

  • The first is the United States' position that profit -- or even the potential for profit -- is more important than the goals of the WSIS.
  • The second reason is procedural. The United Nations prefers to operate by consensus. So as long as any one member of the WSIS objects to a portion of the plan, the plan cannot move forward.

I think that both of these arguments are valid. However, they might not be sustainable over a longer period of time. If the Internet is to be one of the driving forces for the economic development of third world economies, the corporate grip on the Internet may not be able to survive for too long. Simply said, those affected by the Internet would like to have some say about its operation. As the people affected are not western-centric any more, there will be more noises such as those heard at the WSIS.

Whether the UN is the right organization for the worldwide management of the Internet, only time will tell. The WSIS attempt is perhaps just a start; other ventures will be attempted in the near future. A few things must be ensured, though: there should be no censorship on the Internet, and its economic potential should be equally available to all around the world. So, as it appears, the main problem might not necessarily be with the Internet. Better economies in the third world countries will give them more leverage when the next 'WSIS' comes around.

socio-technological; actor-network theory, open source

| Permalink

I just came across some interesting pieces on the social aspects of open source software, and on actor-network theory as a tool for investigating the socio-technological attributes of information and the information structures around us. Felix Stalder presents challenging thoughts in Open Source as a social principle and Theories of Socio-Technologies.

Caution over 'computerised world'

| Permalink

Caution over 'computerised world'

"The team in Switzerland looked at the health, social and environmental implications of what is called pervasive computing."
"The idea behind pervasive computing is that everything around us contains some sort of electronic device."
"I am not saying I am against technology," he insisted, "but we should be aware there is a price to pay."

Indeed. Technology is here to stay, intermingling with humans and other social structures. We need to be cautious and careful not to implement technologies that are restrictive and controlling. Instead, a drive towards technologies of openness should be made.

making sense of information

| Permalink

In To much information, Nathan Cochrane makes a good point that despite the multitude of tools at our disposal for managing and manipulating information, we have not necessarily become more informed decision makers. Perhaps the events and issues about which we need to make informed decisions have become so complex that the current tools (based on utilitarian theoretical foundations) do not help us much.

From World meet to end digital divide starts in Geneva Saturday:

"Leaders from nearly 200 countries including 60 heads of state and government will attend the first World Summit on the Information Society (WSIS) in Geneva Saturday aimed at bridging the digital divide between the rich and poor."

"The aim of the United Nations summit is to come up with a global plan to ensure everyone's access to information and communications technologies."

Hopefully the attendants at the summit do not forget that ensuring access to information and communication technologies for everyone does NOT necessarily mean a reduction in the digital divide between rich and poor nations, countries, and peoples.

If history is any indication, we should have already learned that technology alone does not necessarily solve social problems, perhaps not unless it can be shown to. For example, it would be beneficial to hear how information technology helps developing countries escape poverty. It might, if the means of production in the developing countries are improved to build a self-sustainable economy based on access to information and information technology in general.

However, considering conditions around the world at this stage, I would rather expect that activities related to building sustainable local economies (whether or not they are related to information technology) are more important in escaping poverty. People in developing countries can have access to all the information technology they want, and still might not be able to escape poverty unless some sort of sustainable local economy is established to a certain degree. (Even that access is questionable, because to achieve success with information technology one first needs to create the economic conditions necessary to bring access to information technology to the majority of the people.)

Computers 'hamper the workplace' - not really, but ...

| Permalink

From Computers 'hamper the workplace':

"Computer systems at work are not working as they should, despite costing millions, a report says.
The problem lies with people rather than the systems themselves, concludes the iSociety think-tank."

Really? The problem lies with people? Who needs research to figure this out! :)

So WIPO, why did you scrap the Open Source meeting?

| Permalink

The Register asks rather the obvious question: So WIPO, why did you scrap the Open Source meeting?

"WIPO is an international organisation dedicated to promoting the use and protection of works of the human spirit. These works - intellectual property - are expanding the bounds of science and technology and enriching the world of the arts. Through its work, WIPO plays an important role in enhancing the quality and enjoyment of life and helps create real wealth for nations."

Good so far ... and then ...

"Given its background and mandate it is surprising that it scrapped its first meeting on "open and collaborative" projects such as "open source software." After all open source software does, indeed rely on intellectual property rights. It cannot exist without them. It is, therefore, bemusing that the US Director of International Relations for the US Patent and Trademark Office apparently opposed such a meeting, claiming that such a meeting would run against the mission of WIPO to promote intellectual property rights. At least one of the major US software companies, probably beginning with the letter "M", is reported to have lobbied against the holding of such a meeting."

No comments...

Project Leopard: open source for eGovernment and schools

| Permalink

Open Source Software Institute Releases Components to eGovernment Web Services Platform; Initiates Working Group for Open Government Interoperability Standards

Great development in the open source activities for eGovernment and Education. OSSI has released Phase 1 of Leopard:

"Project Leopard is a web services application framework that provides fast, efficient access and implementation of LAMP technology for eGovernment programs. Phase 1 release of Project Leopard is now available for free download and evaluation at"

The Semantic Web - hype or reality?

| Permalink | 1 Comment

I've been puzzled for some time as to what is meant by the "semantic web" phrase and what it means in practice and research. Today I came across the following article, The Semantic Web, which appears to describe the semantic web concept(s) in a clear and presentable way.

The article makes the following distinction:

"The key point of the semantic web is the conversion of the current structure of the web as a data storage (interpretable only by human beings, that are able to put the data into context) into a structure of information storage."

I can understand the above intention and the attempt to draw a distinction between data and information. However, the distinction between data and information that we make in our heads does not mean much to computer software.

Further, the article states:

"The Semantic Web is based on two fundamental concepts: 1) The description of the meaning of the content in the Web, and 2) The automatic manipulation of these meanings."

As far as 1) is concerned, the description itself is just more data (or information), i.e. metadata (or metainformation). In any case, the proper software tools have to be built to 'understand' the metadata/metainformation.

As far as 2) and the manipulation of meanings are concerned, I'm a bit skeptical because to the machines, as I've tried to explain elsewhere here and here, those descriptions are just data they can manipulate, not meanings.
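The point that descriptions are "just data" to software can be made concrete with a small sketch. The example below is my own illustration, not from the article (the triple values and the `pages_about` helper are hypothetical): the program answers a 'semantic' query by literal string matching over subject-predicate-object triples, with no grasp of what 'astronomy' actually means.

```python
# A toy, RDF-flavored triple store: metadata as plain symbols.
# The names below (page42, dc:subject, etc.) are made-up examples.
triples = [
    ("page42", "dc:subject", "astronomy"),
    ("page42", "dc:creator", "A. Author"),
    ("page7",  "dc:subject", "philosophy"),
]

def pages_about(topic):
    """Return subjects whose dc:subject literally equals `topic`.

    This is pure symbol matching: swap 'astronomy' for any other
    string and the machine behaves identically, meaning or no meaning.
    """
    return [s for (s, p, o) in triples
            if p == "dc:subject" and o == topic]

print(pages_about("astronomy"))  # matches the string; knows nothing of stars
```

Whatever 'understanding' the semantic web delivers would have to come from tools layered on top of exactly this kind of mechanical matching.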

This is not to say that metadata and metainformation cannot provide a level of quality in the process of information seeking and access to information; I'm just a bit skeptical about the hype and the high level of optimism that the semantic web will deliver us from the chaos of the web.

An interesting parallel is natural language. Each language is composed of words and phrases that have certain meaning(s) and/or concepts attached to them. To be able to navigate within the conceptual space of a language (i.e. understand the language) one needs to learn what each of the words represents, because each word or phrase is metadata/metainformation for the actual concept in the particular language. So, it is good to be optimistic that eventually we'll manage to represent the vast and chaotic multitude of information on the web with a set of metadata/metainformation and ontologies that all software will 'understand'.

Well... Esperanto hasn't yet become the world language it was meant to be... And it does not seem that it will become one anytime soon... And even if it does, there will still be multiple meanings for various phrases...

social software - what's in the name?

| Permalink

I've come across a few sites and some articles (blog entries, etc.) talking about social software. The phrase does sound interesting, and the name (i.e. social software) appears to promise much more than what it actually turns out to be.

For example, in iCan for the Public the folks over at Many2Many state:

"The BBC's iCan is in public pre-beta, a social software project to foster social capital and democratic participation. I posted on M2M about the project back in May. (Just a little before that we were having the same power-law inspired discussion of weblog modalities we are today)."

After reviewing the iCan site, it appears to be a collaborative tool/portal where people from the UK can share personal opinions and learn from each other. A clear statement is made at the site that iCan can't be used for commercial purposes.

The common denominator of the tools termed 'social software' seems to be the ability to facilitate open collaboration among the publics or users of such software, with the 'publishers/moderators' playing a facilitating role. Accordingly, I would contend that a wide range of software packages that support collaboration have the potential to be used in a way that makes them 'social software'. For example, any software such as mailing list managers, CMS/portals, blogging software, etc., fits the pattern. However, it is their use that makes them 'social software' or not. Needless to say, those collaborative software packages that do not support open communication and the sharing of ideas and thoughts can't be considered 'social software'.

Invest in open source, say the Danes

| Permalink

Invest in open source, say the Danes

"The ordinary market conditions for standard software will tend towards a very small number of suppliers or a monopoly," the report says. "It will only be possible to achieve competition in such a situation by taking political decisions that assist new market participants in entering the market."

An interesting thought. Instead of the profits going to a few big software companies, various organizations share the cost of developing open source software. The potential profits (for the software firms) are turned into savings (for the users of the software). Isn't this enough of an incentive for various governments and corporations to 'invest' in open source? This mode of thought also urges corporations to compete at the true level of their values instead of gaining competitive advantage by being able to afford the right software.

Where did the technology come from?

Writing a critique on McLuhan’s work and ideas presents the challenge of where to start and exactly what to critique in light of the fact that McLuhan has written so widely and perhaps less coherently than the rest of his contemporaries.

In this paper I'm concentrating on a few of his ideas and thoughts, namely McLuhan's technological determinism viewpoint or lack of one thereof—considering his opinionated statement that "… all media, from the phonetic alphabet to the computer, are extensions of man that cause deep and lasting changes in him and transform his environment" (McLuhan interview, p.54, column 3), in conjunction with his statement that "the medium is also the message" (McLuhan interview, p.56, column 1), and his apparent misdiagnosis of the role of the media in the hegemonic process as described by Gitlin.

McLuhan has so much to say about various technologies and their intimate interplay with human and social senses, yet he does not say anything about how various technologies are constructed. While McLuhan does not necessarily fit the profile of a technological determinist, he appears to support the view that human society is helpless and must, or eventually ought to, succumb to the technological forces: "The computer thus holds out the promise of technologically engendered state of universal understanding and unity, a state of absorption in the logos that could knit mankind into one family and create a perpetuity of collective harmony and peace" (McLuhan, 1969, p.72). The shortcoming of this argument is that McLuhan does not address the process of technological innovation, despite the fact that this very process of innovation explains how various technologies come to be constructed via and through the complex interplay of various social, human, and non-human entities in our society. The process of technological innovation is constantly in flux, including the various media and communication technologies. Therefore, the absence of the innovation and social constructionism argument presents a shortcoming in McLuhan's overall claim that human society must succumb to technological forces. Media are not isolated entities that spring up by and of themselves. Media technologies are invented, created, and deployed by man. Thus, there is a control factor that determines to a certain degree their use and their potential effect. Even if it can be assumed that the social forces and factors in the process of social construction of media technologies can totally embed and manifest themselves through the technologies that they help create, it wouldn't be the technology that is the instigator.
Certainly, in McLuhan's arguments this seems to be the case, and this is precisely the underlying problem that I see with his argument: while media technologies can and do manifest certain socio-economic and political power structures, media technologies do not create those; media technologies merely mediate and/or reinforce the power of the social structures within which they are embedded and utilized.

computers can save the classroom ... IF ...

| Permalink

In Why computers have not saved the classroom George Siemens comments on CSM's article which states that "Putting computers in classrooms has been almost entirely wasteful, and the rush to keep schools up-to-date with the latest technology has been largely pointless".

I agree with George's quote:

"Comment: I disagree with the statement of technology in education being pointless...but it's very interesting to watch the dramatically different discussions happening in schools. We talk in dollar amounts in regards to technology that most librarians/teachers have only dreamed of for their respective fields. Funds seem to be available for hardware/software...but not for teachers/books."

and would just like to add that the resources are misdirected and misused. For one, there should definitely be more funding for the appropriate training of teachers, including books and courses.

But most importantly, the technology appropriated in schools should be appropriated for the very reason of enhancing students' learning and understanding capabilities. Unfortunately, technological updates and upgrades in schools are mostly ineffective because the environmental and contextual factors of the surrounding local situations are usually swept under the rug. One of these factors is the necessity for not-so-traditional literacy, i.e. information, media, and computer literacy. What good are technologies if the students that are supposed to use them are not given the appropriate training about the technologies' benefits, shortcomings, limitations, impacts, etc.?

A succinct definition of Actor-Network Theory

| Permalink | 1 Comment

The ISCID Encyclopedia of Science and Philosophy provides a very succinct definition of the Actor-Network Theory.

The definition emphasizes the most important and pertinent aspects of ANT (actor-network theory) as a theory and methodology for describing the interplay between various elements (or actors) in networks where human and non-human elements (or nodes, or actors) are present.

In What is Actor-Network Theory: various ANT definitions I've provided a few definitions taken from What is Actor-Network Theory?.

Back in July we had an interesting discussion with Jeremy Hunsinger about a few ANT-related concepts such as nodes, actors, and networks. It is an interesting discussion that brings forth a few different viewpoints and understandings. Other related ideas and thoughts can be found in the Actor-Network theory & methodology category.

It would be interesting to hear if anyone out there is using actor-network theory and/or methodology in their research. I would be most interested in the challenges of such application.

fighting information pollution

| Permalink

Web guru fights info pollution:

"The entire ideology of information technology for the last 50 years has been that more information is better, that mass producing information is better," he [Jakob Nielsen] says.

If you are a company somehow related to the management and manipulation of information, certainly more information is better. However, this does not say much about the quality of life, and not much about the quality of information.

"The fix for information pollution is not complex, but is about taking back control your computer has over you."

This is a very profound philosophical statement; certainly not everyone believes that there is a control we have to take back from the computers. Just how do we go about taking back the control anyway? I'm not saying that this is not possible, it is just not easy due to many factors, one of them being that not everyone believes there is a control to be taken back. As in any solution to a potential problem, one of the most important steps in discovering the solution is the ability to diagnose the problem properly. In the case of information pollution, contextually diagnosing the root of the problem might turn out to be the hardest task.

technology use by East and West

| Permalink

Asia plays with hi-tech visions:

"But researchers have found big cultural differences between East and West when it comes to what people actually do with their computers and mobiles phones.
In many Asian countries, technology has become a tool for learning, religion and politics, says Intel ethnographer Genevieve Bell."

Let's just hope that technology keeps changing and adapting itself to the needs of learning, religion and politics, and not the other way around, where the actual process of learning is modified to fit within a certain technology's capabilities.

"More importantly, mobile technology has been adapted to reflect the cultural priorities of each nation, such as their religious faith.
In Malaysia you can now get mobiles that come with a built-in directional finder to help Muslims pray in the direction of Mecca."

It appears that the factor of profit will be hard to remove from the picture. But at least the product manufacturers are listening to the spiritual needs of their audiences.

"Suddenly this device that I use to keep in touch with my family and friends became a way of keeping in touch with your inner spiritual life and your God."


| Permalink

(ShelfLife, No. 127 (October 9 2003))

Quote: "Asked by a BBC interviewer whether it's a "stupid fear" to worry that the Internet will become a giant brain, World Wide Web creator Tim Berners-Lee replied: "Computers will become so powerful and there will be so many of them with so much storage that they will in fact be more powerful or as powerful as a brain and will be able to write a program which is a big brain. And I think philosophically you can argue about it and spiritually you can argue about it, and I think in fact that may be true that you can make something as powerful as the brain, really whether you can make the algorithms to make it work like a brain is something else. But that is a long way off and in fact that's not very meaningful for now at all. All I'm looking for now is just interoperability for data." (BBC News 25 Sep 2003)

IT fair hints at high-tech society of future

| Permalink

IT fair hints at high-tech society of future:

"The theme of this year's show is ``ubiquitous society,'' or a society in which wireless technology touches every aspect of life, and households are filled with networked appliances."

Do we really want all these around us? What would it mean to be human and humane if technology were 'controlling' all aspects of our daily lives...? Will we forget how to make coffee?

Democratizing software: Open source, the hacker ethic, and beyond

"The development of computer software and hardware in closed-source, corporate environments limits the extent to which technologies can be used to empower the marginalized and oppressed. Various forms of resistance and counter-mobilization may appear, but these reactive efforts are often constrained by limitations that are embedded in the technologies by those in power. In the world of open source software development, actors have one more degree of freedom in the proactive shaping and modification of technologies, both in terms of design and use. Drawing on the work of philosopher of technology Andrew Feenberg, I argue that the open source model can act as a forceful lever for positive change in the discipline of software development. A glance at the somewhat vacuous hacker ethos, however, demonstrates that the technical community generally lacks a cohesive set of positive values necessary for challenging dominant interests. Instead, Feenberg’s commitment to "deep democratization" is offered as a guiding principle for incorporating more preferable values and goals into software development processes."

Factors of regional/national success in information society

| Permalink

Factors of regional/national success in information society developments: Information society strategies for candidate countries


"Bread or Broadband? The thirteen candidate countries (CCs) for entry into the European Union in 2004 (or beyond) confront difficult choices between "Bread or Broadband" priorities. The question raised in this article is how to put Information Society (IS) policy strategies at the service of social welfare development in these countries, while optimizing their resources and economic output.

The article summarises a dozen original research studies, conducted at the European Commission’s Institute for Prospective Technology Studies (IPTS). It identifies ICT infrastructures, infostructures and capabilities in the CCs, the economic opportunities these may offer their ICT domestic industry, and the lessons from previous IS development experience in the European Union that could possibly be transferable.

The paper concludes that only those trajectories that offer a compromise in the Bread or Broadband dilemma, taking into account both welfare and growth issues, will be politically sustainable."

This is a response to Ed's argument (re: Technology addiction makes us unwitting slaves) that "... it is not the technology that abuses individual rights, but other people. I don't think the solution is more/different technology", as well as some clarification and addition to my original entry that Ed responded to.

Let me just say that I do agree with Ed that the use of the word 'addictive' in relation to the use of technology in the original article was a real misuse. I believe they meant to say dependency on technology.

Now back to the argument that "it is not the technology that abuses individual rights". True, indeed. Technology per se does not have the capability to abuse anything. It is people who use technology in various ways, and more often than not technology is used to reinforce power and social structures.

However, in the process where technology is used to reinforce existing power structures, the technology itself is designed and modified in such a way that the end result is often a technology restrictive enough to embed in itself features, capabilities and functionalities that play well into the hands of 'other people', usually the power brokers.

An interesting example is TV broadcasting technology. The way it has been deployed allows only those who control it to disseminate information and news. This is one-way communication, i.e. one-to-many. On the other side, the internet (at least the internet as a publishing and communication medium) by design and functionality is not centralized (though some countries are restrictive) and thus allows almost anyone to distribute en masse, i.e. many-to-many communication.

The point I'm trying to make is that technologies have performative capabilities according to the features and functionalities they embody. Some are more restrictive and some more open.

Here is the train of thought:
- We create various technologies
- Those technologies have limitations and restrictions because they are built for a specific purpose and under limited resources
- Sometimes a technology is used for purposes other than what was initially intended, intentionally or unintentionally
- Once a technology is in use, its limitations and restrictions affect how the people that use it do their jobs and tasks
- Due to a technology's limitations, people change their ways of performing the various tasks that require its use
- Thus we end up modifying the tasks themselves so they can be done with the technology available at hand

Why not modify the technology so it is not limiting and restrictive? Well, the workplace has its troubles, challenges, and timeframes. Sometimes things have to be done in a less than perfect environment. In such situations the technology that is available has tremendous power over how the tasks are framed and planned. Interestingly enough, the technology was most probably designed elsewhere, and maybe not exactly for the task for which it is being used.

Technology addiction makes us unwitting slaves is indeed a somewhat philosophical but also practical article with very pragmatic eye-openers that touches on the contemporary issues of the technological determinism vs. social constructionism discourse, especially as it pertains to the role of information technology in the information society.

The last bullet/paragraph in the story states: "Technology's promise and alluring capabilities are used to surreptitiously entrap and willingly imprison members of the information-age society instead of truly empowering them."

Perhaps the open source technologies, which are usually not developed with profitability (i.e. the bottom line in $$$) in mind, can show that technology does not have to be entrapping and imprisoning. It is exactly this that I'm trying to argue in favor of: open source software as an actor in the ecology of open-source-supported technology that manifests itself as an antidote to the claim that technologies "surreptitiously entrap and willingly imprison members of the information-age society".

Quotes from the article:

"Yet as we rush to embrace the latest and greatest gadgetry or high-tech service and satisfy our techno-craving, we become further dependent on these products and their manufacturers -- so dependent that when something breaks, crashes, or is attacked, our ability to function is reduced or eliminated. Given these frequent technical and legal problems, I'm wondering if we're as free and empowered as we've been led to believe."

"To make things worse, government practically has outsourced the oversight and definition of technology-based expression and community interaction to for-profit corporations and secretive industry-specific cartels such as the Motion Picture Association of America, the Recording Industry Association of America and the Business Software Alliance. Such groups have wasted no time in rewriting the rules for how they want our information-based society to operate according to their interests, not ours."

technology as key to democracy

| Permalink

From Switzerland sees technology as key to democracy:

“It is our mission to make modern technology accessible to everybody,” Leuenberger said. “People living in developing countries can only escape poverty if they have access to information.”

Yes, technology can be an important key to democratic development. It has often been stated that technology will solve the problems of poverty and thus bring about democratic movements. While it might be true that technology has increased productivity in certain areas around the world, it is very much debatable whether it has decreased poverty in general.

If technology is to deliver democratic 'results', it must be used with that intent and for that purpose, by supporting the kind of economic development that improves the bottom-line economies.

Unfortunately, the main players in bringing information technologies to developing countries around the world are private companies who ultimately care about their bottom line (i.e. $$$); it can hardly be expected that much will be achieved in terms of equality of information access. These sorts of exercises lead nowhere unless there is a long stick that the ITU can use to implement the promoted initiatives, to even modestly tilt the balance of access to information.

(I’ve also elaborated on these points in these previous entries: Discord at digital divide talks, is IT alone really a solution to poverty?, access to information a solution to poverty?!, Search engine for the global poor?)

Workers reject IT that controls

| Permalink

From Workers embrace IT that fosters coordination; reject IT that controls:

"Managers about to add new computer-based systems should be aware: a technology that fosters access and coordination will be embraced by workers while one that controls behavior to increase productivity will be rejected, say two Penn State researchers who studied how workers adopted IT tools such as software, cell phones and other Internet applications."

How about the IT that is presented to the workers as enabling access and coordination when it is in fact meant to control work-related behaviors? I guess I'm saying that sometimes it is hard to know whether the IT is controlling or not.

"We have this production view of the world in which new software will improve workers' efficiencies and effectiveness, but new technologies don't just speed things up," said Steve Sawyer, associate professor of information sciences and technology (IST). "They can change the nature of work which can affect whether workers adopt them."

Information structures and information technologies do not develop in isolation. Similarly, the social structures in our society do not develop free from technological influence. The information technology and the social structures in our human society inform and shape each other.

"Managers and decision makers who understand how people work and how systems work are more likely to introduce technologies that will be both embraced and used productively. But systems developed out of context, with little regard for workers' preferences and implemented without considering their functional effects, won't be used to capacity. In those cases, workers' resistance leads not to an increase in efficiencies, but rather to a decrease."

Good point... context and perceived functionality do matter in the process of IT appropriation by employees, no matter what the developers of the IT tools might think they have embedded into them.

search engines' meaning mediation power

| Permalink

As I was attempting to identify a few queries (for the class in which I assist the professor as a TA) that would return URLs with different relevance depending on the user (needs, interests, etc...), I tried searching for the word 'syntax' (in Google) because of its multiple meanings, especially as it relates to natural language and computer languages. The idea was to show that the returned search results have different relevances depending on whether the search was instigated by an interest in natural language or in computer languages.

The results were really surprising! The first 40 or so results were almost exclusively about the syntax of computer languages or some other system syntax. Natural language syntax was absent altogether!

Should we be concerned about this? I think so. It is unreal and untrue that the word 'syntax' (as an example) relates only to computers and systems. How would middle school or elementary school children react to these results when searching for English language syntax?

I've taken the word 'syntax' as an example. There are probably many other words and phrases for which search engines provide biased results, intentionally or not.

Has the word syntax lost its meaning as it relates to natural language? At least this is what the search in Google might suggest to those that rely on learning about what they don't know by searching the web.

In this scenario, Google's search results seem to be mediating the meaning of the word 'syntax' and many other words and phrases. It would be interesting to understand why Google's search results are biased in favor of computer- and systems-related terminology when there are tons of natural language syntax resources on the web.

Should we be concerned over search engines' meaning mediation power about things that affect us in our daily or professional lives?

KM: what's in it for me?

| Permalink

A series of thoughts on knowledge management: KM: what's in it for me?. Worth reflecting on: "Social networking on the internet is beyond the communities of practice phenomenon, since the former is initiated and driven by the individual, and the opportunities for networking are more flexible, dynamic and fluid than communities of practice."

the "perfect design" yardstick

| Permalink

In Perfect design? Beth informs us of a book attempting to explain why there is no perfect design.

I would be interested to see what methodology the author has applied to explain the 'impossibility' of perfect design. Certainly, the actor-network methodology is a very good candidate to explain this, as it provides the necessary framework and theoretical background to explain the interplay between various actors (humans and things) and how they affect each other.

However, a simpler explanation might be that since there is no yardstick with which to measure the 'perfectness' of something, it is pointless even to assume perfectness. Rather, we are talking of things feasible and practical.

is IT alone really a solution to poverty?

| Permalink

From Information technology must be used to improve life in poor countries:

"12 September – Information technology should be used to improve the quality of life in developing countries, thus helping to achieve the ambitious goals set by the United Nations Millennium Summit of 2000, Secretary-General Kofi Annan said today."

"Noting that the World Summit on the Information Society is just three months away, he added: 'I hope you will all do your utmost to make it a success, by using it to spread the word about initiatives that make creative use of technology to improve the quality of life in developing countries. By so doing, you will enable others to benefit from your ideas, and to replicate them easily.'"

Let's just hope that the participants at the World Summit on the Information Society do not assume that the mere presence and utilization of IT in developing countries will somehow automagically reduce poverty and help the poor.

It has often been stated that technology will solve the problems of poverty. While it might be true that technology has increased productivity in certain areas around the world, it is very much debatable whether it has decreased poverty in general.

If history is any indication, we should have already learned that technology alone does not solve social problems, not necessarily, and perhaps not unless it can be shown to. For example, it would be beneficial to hear how information technology helps developing countries escape poverty. It might, if the means of production in the developing countries are improved to build a self-sustainable economy based on access to information and information technology in general.

However, considering the conditions around the world at this stage, I would rather expect that activities related to building sustainable local economies (whether or not they are related to information technology) are more important in escaping poverty. People in developing countries can have access to all the information technology they want (even this is questionable, because to achieve success with information technology one first needs to create the economic conditions necessary to bring access to information technology to the majority of people) and still might not be able to escape poverty unless some sort of sustainable local economy is established to a certain degree.

Fubini's Law

| Permalink

Apparently there is a 'law' called Fubini's Law that explains how technology and society inform and influence each other. This is the first time I have heard of this law. Here it is (from Column Two):

1. People initially use technology to do what they do now - but faster.
2. Then they gradually begin to use technology to do new things.
3. The new things change life-styles and work-styles.
4. The new life-styles and work-styles change society
... and eventually change technology.

Very interesting observation. But, a law? Hmm....

Models of Collaboration

| Permalink

Models of Collaboration presents five models of collaboration around the contexts of Library, Solicitation, Team, Community, and Process Support.

"In this guest editorial we examine five models for collaboration that vary from barely interactive to intensely interactive. Granted the CS definition for collaboration requires some level of interaction by two or more people, and in the past we have said that reciprocal data access (such as you would find in a library or repository) is not collaboration, we have also said that technology, content and process are critical for any type of collaboration. This being the case we are expanding our definition of collaboration (slightly) to include content libraries as most of the vendors in this area have added collaborative functionality. In addition, content is often critical for a collaborate interaction to occur…" - David Coleman

(Found the link via entry)

planning, Good vs. bad code

| Permalink

Ed has made an interesting observation on Peter's Tesugen entry Good vs. bad code:

"The above quote just needs the one small edit I've made. Planning is a critical part of all development (i.e. determining objectives, users, requirements, design, etc), and to suggest otherwise is misleading."

And here is Peter's quote referred to by Ed:

"But with some apps, you just know that they are well written. Those apps speak their quality loudly. They are coherent, they have integrity, their UIs make perfect sense, they behave as you expect, and so on. Why is this a good sign of the code being clean? Because software can’t be planned. Software is always a dialogue with its users, with competing software, and with its programmers. Good software adapts, and for adaptations to take place gracefully, the code must be susceptible to changes. Bad code isn’t."

I definitely agree with Ed's observation. Planning is an important and indispensable part of any software or application development. Without proper planning, the software will likely end up far from its target functionality. Even the phrase 'target functionality' is fluid, as it almost always changes during the software development cycle. Or, better said: the planned functionality ought to change if one is to produce quality software.

Perhaps it is this concept of the 'moving target' always in flux that Peter has in mind when stating that "Good software adapts, and for adaptations to take place gracefully, the code must be susceptible to changes". So, if "Because software can’t be planned" means that the initial plan is never modified, then it is justifiable to say that software cannot be planned. Indeed, planning that does not modify itself throughout the process of software development, from inception, to writing functional requirements, to development, testing, and deployment, is no plan at all, because it wrongly assumes it knows all there is to know about the final software product. In most instances this is not true, and to act otherwise leads to bad software functionality.

You may want to check the following entry, which touches on Adaptive Structuration Theory and its relevance to software development, stressing the issues of software features and spirit. From Adaptive Structuration and Information Use in distributed organizations:

"That the decision-making and the institutional schools are not appropriate explanation models if considered in isolation can be seen in light of technology's features and spirit. The social structure provided by any advanced information technology (AIT) can be described by its structural features (rules, resources, capabilities, etc.) and by technology's spirit as it emanates from the feature set. The spirit is the intent of the feature set in regards to values and goals, it is actually what the designers think and believe the feature set can do and how should it be used within the institutional/social structures. The spirit is in flux at the early stages of technology's development. It becomes stable as the technology matures, but by this time the technology has impacted the social structures and it has been impacted by them as well: "So, there are structures in technology, on the one hand, and structures in actions, on the other. The two are continually intertwined; there is a recursive relationship between the technology and action, each iteratively shaping the other" (Desanctis, p. 125)."

Having said the above, another statement is a bit puzzling: "But with some apps, you just know that they are well written. Those apps speak their quality loudly. They are coherent, they have integrity, their UIs make perfect sense, they behave as you expect, and so on."

The above quote insinuates some sort of correlation between bad/clean code and software behavior. From personal experience I know this is not necessarily true. A particular piece of code can be neat and clean, and yet not behave as expected. Of course, the emphasis here is on 'as expected'. I think the correlation with expected behavior is stronger and more relevant for the system (functional) requirements and design documentation.

Machine can't Think, and It still Is

| Permalink

Often we hear about or read headlines of articles claiming to report about machines that think or computers that can understand and reason. In each instance such information ought to be taken with skepticism.

One such recent article is Wired's Machine Thinks, Therefore It Is about an effort by Sandia's team:

"Over the past five years, a team led by Sandia cognitive psychologist Chris Forsythe has been working on creating intelligent machines: computers that can accurately infer intent, remember prior experiences with users, and allow users to call upon simulated experts to help them analyze problems and make decisions."

Infer intent, remember experiences.... yet, the rest of the article only reports on rules and patterns that are far from any type of thinking, reasoning, or understanding.

Nevertheless, the following quote is a step in the right direction, stressing that cognitive entities (such as humans) can interact intelligently because they each know something about each other, or have some common/shared background that enables contextualization and understanding:

"When two humans interact, two (hopefully) cognitive entities are communicating. As cognitive entities -- thinking beings -- each has some sense of what the other knows and does not know. They may have shared past experiences that they can use to put current events in context; they might recognize each other's particular sensitivities."

So, how does one build a cognitive entity in its true sense, or perhaps an approximate cognitive entity? Is it appropriate to even call a machine a cognitive entity, attaching the same connotation of cognition as it pertains to humans?

I've raised similar issues in a previous entry, why machines can't reason or think. The reason the efforts of AI (artificial intelligence) have so far proven unsatisfactory in emulating human reasoning and thinking might have to do with the very fact that the approaches have been only mechanistic, and thus incompatible with the very nature of human experience and with the human mind in particular. So, we want computers to reason, learn, and think intelligently, and yet we apply mechanistic approaches to attempt to achieve these functions which require intellect?

the singularity paradox - machine intelligence

| Permalink | 1 Comment

I just came across an article regarding the concept of singularity as it pertains to society and technology. The article (Exploring the 'Singularity') goes into lengthy detail to explain the concept of singularity, what it means, and why it is supposedly 'inevitable'.

The predominant framework of the article relies on the belief that there is (or will be) such an existential state, a tangible reality, in which machine intelligence is a possibility. This 'understanding' then leads to the belief that technology 'has a life of its own'.

The article provides the following brief and succinct definition(s):

"Kurzweil and many transhumanists define it as "a future time when societal, scientific, and economic change is so fast we cannot even imagine what will happen from our present perspective." "

This is the state that will supposedly result when machine intelligence surpasses human intelligence:

"A number of scientists believe machine intelligence will surpass human intelligence within a few decades, leading to what's come to be called the Singularity."

As I have elaborated in another entry (why machines can't reason or think), the word 'intelligence' has two distinct meanings when applied to humans and to machines. Our intelligence is a reality that we experience: we feel it, and we manifest intelligent actions. Now, if there is to be machine intelligence in its true meaning, it is we humans who will have to implement it, or, as singularists would say, 'turn the switch' on that machine intelligence.

Setting aside the argument of how an intelligence could create an intelligent form more intelligent than itself, we should not forget that machines will certainly become more powerful and more capable in their information-processing functions. But are we ready to call this intelligence? Furthermore, the fast pace of technological development will certainly continue. However, how is this related to machine intelligence as stated in this article?

If anything, singularity should rather refer to the future time when our human activities are fundamentally dependent on and conditioned by the technology that surrounds us. We saw this sort of mass behavior with the Y2K bug. It had nothing to do with machine intelligence. Actually, one can argue that it very much had to do with human mis-intelligence in depending so much on technology, even for the most critical daily life necessities.

Maybe it is time to start thinking about how to better utilize the technology around us, or how better to design the technology that will surround us, in such a way that minimizes the possibility of chaos due to overdependence on information technology.

Technology is what we make it. Yes, the appropriation of technologies influences us, our human society, and our activities. This influence, inscribed into the technology by us humans, might prove negative and appear controlling in some instances, maybe with devastating consequences and relative chaos. However, this should not be confused with machine intelligence. We didn't create our own intelligence. How can we create machine intelligence (or artificial intelligence) at all, let alone an intelligence more intelligent than our own, as the concept of singularity suggests?

From First Monday
The Augmented Social Network: Building identity and trust into the next-generation Internet

"This paper proposes the creation of an Augmented Social Network (ASN) that would build identity and trust into the architecture of the Internet, in the public interest, in order to facilitate introductions between people who share affinities or complementary capabilities across social networks. The ASN has three main objectives: 1) To create an Internet-wide system that enables more efficient and effective knowledge sharing between people across institutional, geographic, and social boundaries; 2) To establish a form of persistent online identity that supports the public commons and the values of civil society; and, 3) To enhance the ability of citizens to form relationships and self-organize around shared interests in communities of practice in order to better engage in the process of democratic governance. In effect, the ASN proposes a form of "online citizenship" for the Information Age."

Certainly an interesting concept. Perhaps this is one step towards the publishing of research material free from commercial publishers.

why machines can't reason or think

| Permalink | 1 TrackBack

In Helping Machines Think Different, Noah Shachtman at Wired News reports on the LifeLog project led by Ron Brachman:

""Our ultimate goal is to build a new generation of computer systems that are substantially more robust, secure, helpful, long-lasting and adaptive to their users and tasks. These systems will need to reason, learn and respond intelligently to things they've never encountered before," said Ron Brachman, the recently installed chief of Darpa's Information Processing Technology Office, or IPTO."

An example of what IPTO/PAL might do:

"If people keep missing conferences during rush hour, PAL should learn to schedule meetings when traffic isn't as thick. If PAL's boss keeps sending angry notes to spammers, the software secretary eventually should just start flaming on its own."

And this is supposed to be achieved through a proposed technique called REAL-WORLD REASONING, based on three concepts: 1) High-performance reasoning techniques, 2) Expanding the breadth of reasoning and hybrid methods, and 3) Embedded reasoners for active knowledge bases.

Now, in any dictionary, the word 'reason' has to do with mental states, analytic thought, logical deduction and induction, etc., all of which depend on the thinking process, a mental state that ultimately has to do with the human mind. If we are to agree that the human mind is a manifestation of the electro-mechanical-biological human brain, then the approach of rules and logical entities interconnected amongst themselves might some day bring about a machine that 'acts' like the human mind.

What is most interesting, however, is that there do not seem to have been any attempts to look at the human processes of 'reasoning' and 'thinking' from an angle different from the electro-mechanical-biological viewpoint. A brief reading of the REAL-WORLD REASONING proposal does not reveal any new insights, except that it proposes another approach based on the information-processing understanding of information, where bits of information are manipulated using relevance judgment for 'aboutness' assessment. Perhaps the notion of relevance as used over the past few decades needs to be reassessed?

So, what is uniquely different with the REAL-WORLD REASONING proposal?

The reason the efforts of AI (artificial intelligence) have so far proven unsatisfactory in emulating human reasoning and thinking might have to do with the very fact that the approaches have been only mechanistic, and thus incompatible with the very nature of human experience and with the human mind in particular. So, we want computers to reason, learn, and think intelligently, and yet we apply mechanistic approaches to attempt to achieve these functions which require intellect?

It would be nice to hear if anyone knows of an effort, practical or theoretical, that attacks the issues of machine 'thinking' and 'reasoning' from a perspective fundamentally different from the information-as-thing (i.e. mechanistic) understanding. Anyone?

media technologies for open communication

| Permalink

While I agree in principle with Fiske in rejecting the technological determinism point of view, I also believe that due to the social construction of communication technologies there ought to be some characteristics of particular technologies that are better fit to serve the designer. My argument is that if a particular technology was designed to serve the corporate interest, most of its features will be driven to maximize the profits. [see the entry on adaptive structuration for this argument]

In contrast, if a group of people sets out to design technology for open communication and democratic access to information, the technology in question will have features that enable ease of access to information and make it hard for that technology to be used for purposes of restriction. But again, it isn't the technology per se; it is the social structures that tilt technology use towards particular purposes.

Unfortunately, most of the communication technology in use today has been built and appropriated for profit-making activities. For example, cable could have been made interactive, but it wasn't. The Internet and many of its communication tools exhibit characteristics of open communication. However, even here corporate power has entered the arena, attempting to strangle the open communication characteristics by controlling access.

Fiske, J. (1996). Media matters: Race and Gender in U.S. Politics. Minneapolis: University of Minnesota Press

The open source Internet as a possible antidote to corporate media hegemony

on the social dimensions of information technology

| Permalink

From Social Dimensions of Information Technology: Issues for the New Millennium by G. David Garson, ed., North Carolina State University, to be published by Idea Group Publishers in late fall 2000:

"In a related essay on "Human Capital Issues and Information Technology, " Byron L. Davis and Edward L. Kick, using educational institutions as a case in point, discuss how several "mega-forces" impact institutional functioning. They note that sociologists have long cautioned against the sort of rapid technological changes that outstrip human ability to successfully adapt to them. "Cultural lag" is in some measure inevitable, they conclude, but when social change is drastic, the consequences for the human condition, as well as human capital, can be pernicious in the extreme."

"Finally, in an important article titled "International Network for Integrated Social Science," William Sims Bainbridge, a sociologist and Science Advisor to the Directorate for Social, Behavioral and Economic Sciences of the National Science Foundation, discusses how computer-related developments across the social sciences are converging on an entirely new kind of infrastructure that integrates across methodologies, disciplines, and nations. This article examines the potential outlined by a number of conference reports, special grant competitions, and recent research awards supported by the National Science Foundation. Together, these sources describe an Internet-based network of collaboratories combining survey, experimental, and geographic methodologies to serve research and education in all of the social sciences, providing an unprecedented collection of resources available to social scientists on an international basis."

technologies for Free Speech

| Permalink

From Hacking for Free Speech:

"The free exchange of information over the Internet has proven to be a threat to the social and political control that repressive governments covet. But rather than ban the Internet (and lose valuable business opportunities), most repressive governments seek to limit their citizens' access to it instead."

"To do so, they use specialized computer hardware and software to create firewalls. These firewalls prevent citizens from accessing Web pages - or transmitting emails or files - that contain information of which their government disapproves."

"Hacktivism's approaches raise a number of interesting questions. Can hacktivism really work? That is, can a technology successfully complement, supplant, or even defy the law to operate either as a source of enhanced freedom (or, for that matter, social control)? On balance, will technological innovation aid or hinder Net censorship?"

In response to the third quote above, on whether technology can “successfully complement, supplant, or even defy the law to operate either as a source of enhanced freedom (or, for that matter, social control)”, the appropriate framework needs to be applied. From the technological determinism point of view, it is apparent that technology does exhibit characteristics that could make it a source of enhanced freedom or a tool for social control. This in turn leads us to social constructionism, to understand how these technologies are constructed in the first place, and why they have acquired the attributes and properties they have.

Certainly, the appropriate framework cannot be exclusively social constructionism or technological determinism. It has to be a mixture of both, as information technology does not exist in isolation: it has been created as a result of the social structures that initiated it (for a purpose), and it has been embedded in them afterwards. However, once information technology becomes part of the social ecosystem (an iterative process in itself), then depending on its properties (whether they are restrictive or exhibit characteristics of open communication and free exchange of ideas) it will project those properties onto the structures within which it is embedded.

Thus, one might see open source technology as an instigator of open communication and exchange of open content, precisely because it has been built with such attributes and properties.

It is not hard to see that a technology which does not provide the functionality for its end users to communicate freely among themselves cannot be used “as a source for enhanced freedom” (e.g. TV as a one-way communication technology). In turn, the open source Internet manifests itself in many ways that let users communicate amongst themselves without control from a third party. Perhaps this positions the open source Internet as a possible antidote to corporate media hegemony.

computers can't understand

| Permalink

In Making Computers Understand, Leslie Walker reports on an apparent innovation suggesting that computers can understand and be aware of context. While the phraseology chosen might be journalistic lingua franca to ‘spice up’ the article, some claims by the company are nevertheless rather troublesome:

“Abir, 46, claims to have unlocked the mystery of "context" in human language with a series of algorithms that enable computers to decipher the meaning of sentences -- a puzzle that has stumped scientists for decades.”
"This man literally has figured out the way the brain learns things," Klein said. "On a theoretical level, his insight basically is this: Understanding a concept is nothing more than viewing a concept from two different perspectives."

The very title of the article, "Making Computers Understand", makes you immediately skeptical. Especially troublesome is the quote above stating that “This man literally has figured out the way the brain learns things”. Isn’t it premature to claim with such certainty that we have discovered how the brain works, when history has shown us that many such claims have been proven wrong by later discoveries?

Further, how does one prove that two different perspectives are sufficient for understanding a concept? I hope this does not mean they believe two perspectives are necessary because there are ‘two sides to the same story’. Usually there are more than two sides to the same story, and understanding ‘reality’ and its context probably takes much more than two perspectives.

Besides, computers can’t decipher the meaning of a sentence as claimed in the article…

the digital divide: more than a technological issue

| Permalink

Information On-Ramp Crosses a Digital Divide

"For years, community activists and politicians around the country have talked about the need to help people who have been left behind in the digital revolution because of poverty, disabilities or fear of new technology. Without computer literacy, the argument goes, disadvantaged groups will become more excluded in the high-tech economy. Yet many efforts have meant little more than making it possible for people to surf the Web from a library terminal."
"It [WinstonNet] will allow any resident with a library card to have an e-mail account; transact business with the city, like payment of parking tickets; and store homework or other documents on a central server so they can be easily retrieved from any site on the network."

A well-intentioned project attempting to narrow the digital divide gap. However, as in many other similar projects, the most important aspect is not addressed or thought through: just how does the technology by itself fit within the relevant social structures and fix the underlying social problems that have resulted in the digital divide?

Don't get me wrong, technology can be a great tool, but, it must be well planned to result in positive outcomes for the desired groups. Otherwise, it might just reinforce the existing social structures without any remedy to the digital divide.

W3: The Technology & Society Domain

| Permalink

From The Technology & Society Domain:

"Working at the intersection of Web technology and public policy, the Technology and Society Domain's goal is to augment existing Web infrastructure with building blocks that assist in addressing critical public policy issues affecting the Web.

Technical building blocks available across the Web are a necessary, though not by themselves sufficient to ensure that the Web is able to respond to fundamental public policy challenges such as privacy, security, and intellectual property questions. Policy-aware Web technology is essential in order to help users preserve control over complex policy choices in the global, trans-jurisdictional legal environment of the Web. At the same time, technology design alone cannot and should not be offered as substitutes for basic public policy decisions that must be made in the relevant political fora around the world."

book review: 'Our Own Devices': Smothered by Invention

| Permalink

David Pogue reviews Edward Tenner's book 'Our Own Devices', emphasizing how our own technological inventions are changing the way we live.

It would be an interesting book to read.

nodes, or actors, or networks

| Permalink | 7 Comments

This is a response to jeremy's comments on actor construction? and to a response entry (June 30, 2003) in his blog regarding the relationship of actors and networks as used in actor-network theory and methodology.

Jeremy: "i replied to this on his blog too, but ultimately my position is to rid oneself of the heirarchy of ontology involved in differentiating actors, and just look at the networks. there really are no actors, because then there is no differences amongst actors, only nodes where networks conjoin.

keeping in mind though that this is just my interpretation of several texts, mainly latour, law, then adding some norbert wiener. most people really want to differentiate between actors, I'm unconvinced that it is as important as kant tells us."

If the nodes are where the networks conjoin, then it might be this that many term an actor. Anyway, what is a network? The following definition is one of many provided by the American Heritage Dictionary: “An extended group of people with similar interests or concerns who interact and remain in informal contact for mutual assistance or support”. In this definition (and in other definitions related to computer systems/networks), two distinct entities are identifiable around the concept of interaction: the channels of communication and the elements that enact those channels.

So, a network by itself is a complex element (or entity) composed of links and the elements that enact these links. Some may call these elements actors, others may call them nodes.

As far as semantics is concerned, we could speak only of networks (at different levels, due to their complexity and their relation to their surroundings) or only of actors (in which case we would have to differentiate between different actors and their levels). Included here would be the channels of interaction (the links), treated either as complex actors or as complex networks.

Nevertheless, it appears that for such mode of explanation a distinction needs to be made between the entities and the process of communication that links those entities.

If nodes are taken only as passive entities where the links (or networks) conjoin, without the potential to act, it would seem that nodes are only constructs with acquired properties and attributes resulting from their relative position in the network or networks. This is perhaps so for non-human entities. However, it is more than evident that humans as nodes in a network are not passive, even though some of the properties and/or attributes of a human node might be acquired as a result of its position in the relevant network(s). In addition, non-human nodes also contain intrinsic (relatively speaking) properties and attributes that are beyond the constructability of the relative network(s). Through these relatively intrinsic properties (acquired from other, outside networks), non-human actors (or nodes) are able to affect the ‘construction’ of network(s).
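The distinction drawn above between intrinsic properties and network-acquired attributes can be illustrated with a minimal sketch. This is purely a hypothetical illustration in Python, not part of any actor-network-theory formalism; all names (Node, Network, "researcher", "repository") are my own:

```python
class Node:
    """An actor/node: carries intrinsic properties of its own,
    and acquires further attributes from its network position."""
    def __init__(self, name, intrinsic=None, can_act=False):
        self.name = name
        self.intrinsic = intrinsic or {}  # beyond the network's constructability
        self.acquired = {}                # conferred by position in the network
        self.can_act = can_act            # humans (and some non-humans) can act

class Network:
    """A network: links (channels of communication) plus the
    elements that enact those links."""
    def __init__(self):
        self.nodes = {}
        self.links = []

    def add(self, node):
        self.nodes[node.name] = node

    def link(self, a, b):
        self.links.append((a, b))
        # relative position in the network confers an acquired attribute
        for name in (a, b):
            self.nodes[name].acquired["degree"] = sum(
                1 for link in self.links if name in link)

net = Network()
net.add(Node("researcher", intrinsic={"judgment": True}, can_act=True))
net.add(Node("repository", intrinsic={"storage": True}))
net.link("researcher", "repository")
```

The point of the sketch is only that both kinds of properties coexist on the same node: the network writes into `acquired`, but `intrinsic` comes from elsewhere and survives any reconfiguration of the links.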

defining the ingredients of actor-network and open-content open-communication

actor construction?

| Permalink | 1 Comment

In too many topics, too little time of June 29, 2003, regarding the role of the actor in the actor-network theory and methodology, jeremy writes:

"however, the fixation on the actor is still present. get rid of it, stop thinking about it, think about networks, only networks, and then think about how it constructs the actor, then i think you have a theoretically interesting actor-network theory."

Taking actor-network theory to explain only the construction of the actor would provide a one-sided elaboration and perhaps an incomplete picture of the relationship between actors and networks. It is true that a particular network can be treated as an actor (a complex one). However, in actor-network discourse it is understood that a set of actors, interconnected amongst themselves through their links, creates a network (or a topology). Needless to say, an actor can be part of many networks/topologies at the same time, manifesting itself differently within each particular network.

While networks do have a major role in the process of actor construction, it is also true that actors play a decisive role in the construction of the networks that they are part of, and this must be taken into consideration. It is obvious that human actors are not solely constructions of the pertinent networks. Human actors have intrinsic properties that are not constructible or changeable by the networks/topologies.

This is a little trickier for non-human actors, and it can be claimed that all non-human actors related to information technology (IT) are constructions, since they are man-made. True; however, we should not forget that information technology actors are mostly used by those who had no say in their construction. Thus, when IT actors are used in networks/topologies other than those that constructed them, they influence and change the networks within which they are embedded and used.

Even the process of IT actor construction is not purely one-way (i.e. networks constructing the actor). In the process of actor construction, networks change and are modified along the way (some due to the actors), finally constructing an actor that is almost always different from what was originally thought at the beginning of the construction process.

So, yes, actors are constructed, but they also construct the networks. It is an iterative process.

Defining the ingredients of actor-network and open-content open-communication

Information Relevance

| Permalink | 1 TrackBack

The most pervasive response to the question “what is it [something] about,” asked of an information object, refers to the pertinent topic or theme as perceived by the individual who is responding. Very rarely does the response address the methodology or the framework within which the information object was created. In the case of a textual document a response could possibly refer to the methodology, but in most instances perhaps only because the methodology is itself the topic covered in the document. Even when the response concerns topicality, it is hard to agree on the aboutness of a particular document with great certainty. Nevertheless, in communication with each other, humans intuitively understand and agree on what things are about and what they relate to. This intuitive understanding of relevance seems to be closely related to the definition in many dictionaries as “…pertaining to the matter at hand,” which people use without much thinking about it (Saracevic, 1996, p. 3).

Information and time relevance/aboutness

| Permalink

Considering the necessity to search for information and the potential resources that can satisfy the necessity, and its aboutness, Mizzaro suggests that “each relevance can be seen as a point in a fourth-dimensional space, the values of each of the four dimensions being: (i) Surrogate, document, information; (ii) query, request, information need, problem; (iii) topic, task, context, and each combination of them; and (iv) the various time instants from the arising of the problem until its solution” (Mizzaro, p. 812).

The dimension of aboutness (task, topic, and context) is rather incomplete, in the sense that aboutness in relation to time could have been included, in addition to time being the fourth dimension. The difference between time as a fourth dimension and time related to aboutness is that time aboutness would give us relevance related to the passage of time.

For example, a document might be less relevant today in a certain organizational context compared to the relevance it had earlier, because other documents appearing later have superseded it; something like the difference in relevance judgments induced between two points in time, and the additional difference in relevance when these two points in time are moved together to another time.

This could be considered different from the fourth dimension, where the need for information, the resource to satisfy the need, and its aboutness all change in the way they are related at different points in time. One could argue, however, that time aboutness is part of the context.

Nevertheless, I think time aboutness should be treated separately, as are the task, the topic, and the context.

Information Relevance

Mizzaro, S. (1997). Relevance: the whole history. Journal of the American Society for Information Science, 48 (9), 810-832

statements, reports, and measures for KM

| Permalink

What are the challenges in the production and dissemination of IC (intellectual capital) statements and measures?

In identifying these challenges we need to perhaps look at a few things:
a) what type of intellectual capital and knowledge are these statements and measures representing and meta-representing,
b) what is the intended use,
c) the role of dissemination channels and media type,
d) the role of the context.

The desired result would be to design meaningful and understandable intellectual capital statements and measures that represent and transfer the most out of the ‘intangible world’ into the ‘tangible world’, moderated by the context and the situation as well as by the available channels and modes of dissemination.

The representation aspect of such statements is clearly emphasized by McInerney: “Although most information managers are not trained as journalists, a reporter’s skills of capturing, recording, and reporting new knowledge could be beneficial in the active process of finding out what an organization’s members know” (p. 1016).

One could argue that the representation stage is unnecessary when spoken language is used to transfer ‘knowledge’. Even spoken language, though, is a form of representation of the intangible (short-lived unless audio recorded or transcribed), and we clearly attempt to use the most appropriate words for representing concepts when sharing our thoughts with others.

McInerney, C. (2002). Knowledge Management and the Dynamic Nature of Knowledge. Journal of the American Society for Information Science and Technology, 53, (12) 1009-1018

“You can’t manage what you don’t know about” (Blair, p. 1027)

“Knowledge management is not an end in itself, it is a means to a further end” (Blair, p. 1028)

One of the most important aspects of knowledge management (KM), both as a theoretical endeavor and as a practice, appears to be the question of what is being managed. Or, better, we can ask what various authors mean when referring to KM. What’s in the name? In order to differentiate KM from information and data management, it needs to be shown that knowledge is different from data and information. Blair’s (2002) explication that knowledge is different from data and information is based on the information-theory stratification that puts data as the raw thing, then information as data arranged in a certain way that presents and brings forth an obvious interpretable meaning, and then knowledge as the next level up, mainly stating that knowledge, exhibited through its characteristics, is different because it resides in people’s minds and is not tangible (p. 1020). McInerney (2002) also presents the information-theory viewpoint of knowledge: “in information theory, knowledge has been distinguished by its place on a hierarchical ladder that locates data on the bottom rung, the next belonging to information, then knowledge, and finally wisdom at the top” (p. 1010). It appears that this placement of knowledge fits better with KM as practice, since it distinguishes information-as-thing to be something tangible. If, however, we look at Brookes’s (1980) elaborations regarding ‘information’, he defines information as a “small bit of knowledge” and “knowledge as a structure of concepts linked by their relationship and information as a small part of such structure” (p. 131). There does not seem to be a necessity to explain why information is different from knowledge; both Blair and McInerney could have proceeded with the arguments in their articles by showing that knowledge is not a tangible (in the physical sense) thing.
An argument for the necessity to differentiate knowledge from information in such terms appears to respond to a need to clearly and unambiguously distinguish knowledge management from information and document management (Blair, p. 1019), perhaps more so for KM practitioners.

What is Actor-Network Theory: various ANT definitions

| Permalink

The possibility of applying the actor-network theory and its methodology to different disciplines and fields of study is evident by the many senses in which it has been used.

The What is Actor-Network Theory? site provides various definitions.

These and many other colors and flavors of ANT represent a very diverse scope of usage and applicability. Here are two definitions that are particularly interesting:

"from Michael Callon
ANT is based on no stable theory of the actor; in other words, it assumes the radical indeterminacy of the actor. For example, neither the actor's size nor its psychological make-up nor the motivations behind its actions are predetermined. In this respect ANT is a break from the more orthodox currents of social science. This hypothesis (which Brown and Lee equate to political ultra-liberalism) has, as we well know, opened the social sciences to non-humans."

"from Bernd Frohmann
ANT's rich methodology embraces scientific realism, social constructivism, and discourse analysis in its central concept of hybrids, or "quasi-objects", that are simultaneously real, social, and discursive. Developed as an analysis of scientific and technological artifacts, ANT's theoretical richness derives from its refusal to reduce explanations to either natural, social, or discursive categories while recognizing the significance of each (see, e.g. Latour 1993, 91). Following the work of Hughes, ANT insists that "the stability and form of artifacts should be seen as a function of the interaction of heterogeneous elements as these are shaped and assimilated into a network" (Law 1990, 113)."

If you visit the site (What is Actor-Network Theory?) you may find other definitions pertinent to your field of study.

Related readings:
Actor-Network Theory

Actor-Network Theory and Managing Knowledge

contextual 'reading' of information objects: do we know how?

| Permalink

With respect to Ranganathan's second law, "EVERY PERSON HIS OR HER BOOK” (OR BOOKS ARE FOR ALL) (p. 81), a comparable enunciation would be EVERY PERSON/USER HIS OR HER DIGITAL INFORMATION OBJECT (OR DIGITAL INFORMATION OBJECTS ARE FOR ALL). Obviously, in the context of the digital library, this enunciation has far-reaching consequences and implications in terms of legal issues such as copyright, ownership, freedom of speech, information democracy, etc.

However, an interesting implication relates to the aspect of information literacy or, better said, digital information literacy. Given the multitude of digital information objects, even if it were possible and feasible to make all digital information objects available to all users (the obviously hard issue of relevance, both research and practice related), it is hard to say whether users would be able to ‘read’ and ‘understand’ the various digital information objects. We are all familiar with how to read text as narrative. However, does every user know how to contextually read a chart, a bar graph, or a video presentation of an unknown phenomenon?

It appears that information and media literacy issues are lacking in the study of digital libraries. Marchionini indirectly raised the issue of technology vs. user in context: “The experience of this case [The Baltimore Learning Company] demonstrated that advanced technical solutions and high-quality content are not sufficient to initiate or sustain community in settings where day-to-day practice is strongly determined by personal, social and political constraints” (p. 23).

Technology alone can’t fix problems.

Marchionini, G., Plaisant, C., & Komlodi, A. (in press) The people in digital libraries: Multifaceted approaches to assessing needs and impact. Chapter in Bishop, A. Buttenfield, B. & VanHouse, N. (Eds.) Digital library use: Social practice in design and evaluation. Retrieved October 26th, 2002 from:

Ranganathan, S. R. (1957). The five laws of library science. London: Blunt and Sons, Ltd. pp. 11-31, 80-87, 258-263, 287-291, 326-329

Digital Libraries and the Information Society

| Permalink | 1 TrackBack

“Human-centered digital library design is particularly challenging because human information behavior is complex and highly context dependent, and the digital library concept and technologies are rapidly changing” (Marchionini et al., p. 1)

Digital libraries, like many other unique conceptual and practical phenomena resulting from the information explosion, have presented researchers and practitioners alike with the challenge of understanding their very complex and multifaceted nature. As with any emerging concept and practice, there is a struggle to define its scope and its contextual situatedness. All three articles in one way or another deal with the definition and the meaning of the term ‘digital library’, its social relevance, its place in the information society amid the multitude of contexts in which it is embedded, and its implications for research and practice.

information science: a science in making?

| Permalink

A general observation is that information science is a science in the making, not yet fully established as a ‘normal science’ in the Kuhnian sense: “’normal science’ means research firmly based upon one or more past scientific achievements, achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice” (Kuhn, 1970, p. 108).

Also, the various information problems treated by information science lack a coherent paradigmatic understanding and definition of the information phenomenon: “in the absence of paradigm or some candidate for paradigm, all of the facts that could possibly pertain to the development of a given science are likely to seem equally relevant” (Kuhn, p. 113). As such, the multitude of information problems is addressed by a variety of methodologies, conceptual viewpoints, and theories borrowed by information science practitioners from the other social and natural sciences with which information science has interdisciplinary relations.

Kuhn, T. (1970). Chapter 2: The Route to Normal Science, Structure of Scientific Revolutions 2/E, 2(2) 10-22. University of Chicago Press


This study analyses, compares, and critiques a set of articles and writings that have treated and examined the ‘information’ phenomenon and the way various discourses and understandings of ‘information’ have been utilized in the field of information science and information studies. The methodological and theoretical foundations of the various understandings are discussed, without forgetting the effect of the context within which the various concepts and understandings of the ‘information’ phenomenon came into existence and use. In addition, an attempt is made to understand and trace the impact of the various understandings and concepts in their subsequent use within practical and theoretical studies in information science, information studies, communication technology, and new media, as well as the role of information science and information-related practices in the development of the understandings of ‘information’. The examination of the relevant literature shows a two-sided aspect in the development of the information concept and information-related disciplines (the science and the practice), which constantly inform each other over time: the understandings of ‘information’ posit questions to be answered by information science research and practice and, vice versa, information practice posits and instigates a need to properly understand information.

objective knowledge: its degree of permanence

| Permalink

The following quote by Brookes presents a great challenge: “In other words, once human knowledge has been recorded [in World III], it attains a degree of permanence, an objectivity, an accessibility which is denied to the subjective knowledge of individual humans” (Brookes, p. 128). Most intriguing about this statement is that knowledge, once recorded, attains a degree of permanence, objectivity, and accessibility. It is not quite clear whether Brookes meant to say relative permanence, objectivity, and accessibility, bound by time and space. Otherwise, it would suggest that recorded knowledge and information have intrinsic properties, characteristics, or structures which can be detached and maintained in a truly objective manner outside of the situation and the context in which they were created. If so, understanding these characteristics, structures, properties, and manifestations could be the first step towards a theory of information.

Brookes, B.C. (1980). The foundation of information science. Part I. Philosophical aspects. Journal of Information Science, 2, 125-133

In everyday life, the word ‘information’ is closely associated with the concept of communication, more specifically with the communication of ideas, thoughts, and knowledge, bringing forth an understanding of information as having the property to convey ideas, thoughts, concepts, and knowledge. But how exactly is information conveyed? If information is conveyable, is it the process that helps convey understanding between two human beings, or is it the knowledge conveyed between two cognitive entities? These questions bring forth different understandings of the word information, as Machlup and Mansfield (1983) succinctly captured in the above quote, suggesting that information is not a thing that is simple to describe and explain. It is a phenomenon with multifaceted understandings, perhaps requiring a multitude of methodologies and means of investigation and research. Buckland (1991) identifies three principal uses of the word information: 1) information-as-process (the ability to inform), 2) information-as-knowledge (the knowledge imparted in the process of being informed), and 3) information-as-thing (p. 3), concentrating on the various properties of information and its different manifestations and understandings.

Machlup, F. & Mansfield, U. (1983). Cultural Diversity in Studies of Information. In F. Machlup and U. Mansfield (Eds.), The Study of Information. Wiley, 3-59

Buckland, M. (1991). Information and Information Systems. Chapters 1, 4, 5 & 6. New York: Praeger

In an attempt to identify what information science ought to do, Brookes recognizes that “documents and knowledge are not identical entities” (p. 127), and differentiates between practical and theoretical information science: “the practical work of library and information scientists can now be said to collect and organize for use the records of World 3. And the theoretical task is to study the interactions between Worlds 2 and 3, to describe them and explain them if they can and so to help in organizing knowledge rather than documents for more effective use” (p. 128-9).

World 1 = the physical world
World 2 = the world of subjective mental states occupied by our thoughts and mental images
World 3 = the world of objective knowledge which is the totality of all human thought embodied in human artifacts, as in documents of course, but also in music, the arts, the technologies

Brookes, B.C. (1980). The foundation of information science. Part I. Philosophical aspects. Journal of Information Science 2, 125-133

why actor-network?
"In Social constructionism vs. technological determinism it has been suggested that the actor-network theory and its methodological framework may provide the language and the mode of explanation to elaborate, in a common framework, the interplay between human and non-human entities.
Most importantly, the major contribution of the actor-network theory seems to be the fact that it treats the human and non-human elements (or actors, as the various elements in a given topology are named in the actor-network language) alike, as being able to influence each other."

"So, how do the actors in a particular topology influence each other? This is done through their links. The actor-network theory suggests that a process of translation takes place, a process that explains how and why some actors take on the attributes and properties of the actors they are connected to. Thus, certain properties of one actor are transferred to other actors through their mutual links. The question then arises as to what/which properties and attributes of an actor can be transferred onto another and initiate a process of translation in the actor it is connected to? Further, what is the role of the properties and attributes of the links in the process of translation/transfer? Which properties and attributes of the links are important to this process?"

"...the modifiable content depending on the intrinsic and external properties can be described and manifests itself in various degrees of openness. Similarly, the communication links vary in degree of their communicative properties via which the properties and the attributes of the actors are transferred and translated into other actors via inscription."

properties and attributes: links, actors, topologies
"The translation process enables an actor/entity (simple or complex) to inscribe its properties and attributes onto other actors in the pertinent topologies. This suggests that there is a movement of some sort from one actor to another. Certainly, in any given topology not all actors are able to inscribe their properties and attributes equally into other actors. Some properties and attributes are more prevalent in any given topology. What determines the strength of the attributes and the properties?"

Social constructionism vs. technological determinism
"For example, if one is to research the usability of collaboration tools in an organizational setting, social constructionism for the most part takes the view that information and communication technologies are just tools to be used by employees to perform their assigned tasks, and that these tools do not affect the employees or the relevant social structures. On the other side, technological determinists consider the effect that these tools will have on the employees and the surrounding organizational structures as a result of their use."

the book as an agency for social change

| Permalink

It is the confrontation between the book merchants—who saw the prohibition of heretic books as a threat to their business (Febvre et al., p. 304)—and the reformers on one side, and the religious, political, and social authorities and institutions on the other, that we see reflected in Mill (1921). Mill suggests that it is the confrontation of one’s opinions and ideas via open and free discussion, free from governmental oversight and censorship, that leads to advancement and progress in human society. In the case of the book, it is the ‘heretic’ book itself—the other opinion—which through a very long struggle brought about the freedom for opinion to be expressed, free from censorship. Febvre & Martin (1976) suggest that it is this exchange of diverse opinions via books and other printed material, rather tragic in many instances in various periods of human history, that helped and fueled the ‘coming of the book’ as an artifact of daily life (p. 108).

Whatever the ways in which the book was used and by whom, analogous to other technological advances in human history that have been used to benefit human society as well as to wreak havoc, an undeniable benefit will be permanently associated with the printed book: its ability to keep records of information and representations of human knowledge, making them available through space and time, thus acting at a distance as an artifact for social change. This is the book’s double role: as a statement/representation of social and individual knowledge, and as an actor or agency acting upon the same.

Febvre, L. and Martin, H.-J. (1976). The Book as a Force for Change. In The Coming of the Book. N.L.B., 248-332

Mill, John Stuart. (1921). On Liberty. Atlantic Monthly Press, 59-111

In their presentation of historical accounts around and about the book right after the printing press became feasible for mass use, Febvre and Martin (1976) argue that business decisions about profitability played a crucial role in spreading the book and making it widely used—speaking in relative terms. A point not explicitly raised and elaborated in this particular chapter, however, leads to the need to explicate that profit-making ventures could not have been solely responsible for the dramatic change that took place in the wide acceptance of the book. A favorable interplay of social, political, and cultural factors was a necessary ingredient for the merchants of the book to be successful in their ventures. One could argue that this favorable atmosphere came about because of necessary historical forces in line with the concept of progressive human evolution, where merchants and profit-minded people seized the opportunity to enrich themselves by utilizing this new phenomenon. In this short paper, I argue that the merchants of the book, together with reformists like Luther and Calvin, played a crucial role in bringing the book to the masses. On one side, the merchants saw profitability in the increased readership. On the other, Luther and Calvin envisioned the book (or any printed material, for that matter) as an agent for social and political change. The various kings, monarchs, noblemen, religious authorities, and religious institutions that had no interest in changing the social and political structures of their dominions jumped on the bandwagon a little late, after having realized what a powerful tool Luther and Calvin had at their disposal.

In the beginning phases of the ‘coming of the book’, when it slowly started to become an item in the daily lives of those who had access to it (those who could afford it and who could read), a functional analogy could be drawn with Ranganathan’s Second Law of Library Science, Every reader his or her book (Ranganathan, p. 81). The merchants did not just print any books. They made tremendous efforts to print the books that they thought would be in demand, so that they could profit. At that time, only religious books and the pamphlets used by the clergy were in high demand.

Having defined structuration as the "process by which social structures (whatever their source) are produced and reproduced in social life" (p. 128), DeSanctis presents Adaptive Structuration Theory (AST) as a mechanism for examining the change process in a given organization by looking at the types of structures provided by advanced technologies (inherent structures), and the structures that actually emerge in human actions as people interact with these technologies (DeSanctis, p. 121). AST appears to be an appropriate and natural fit for analyzing the utilization and appropriation of new technologies in social environments. DeSanctis develops AST in relation to information technology, stating that "AST provides a model that describes the interplay between advanced information technology, social structures, and human interactions" (p. 125). However, the theory can assert itself in a broader scope, as it lays down some interesting propositions that could be applied to other technologies, perhaps safely extending its scope to innovations in general. The multitude of innovations in human societies are not independent and isolated; rather, all innovations are interleaved in one way or another with information exchange. AST could be used to analyze the advent of various innovations such as the printing press, electricity, the telegraph, mass transportation, radio, the telephone, TV, the Internet, etc., and to show how the structures of these innovations penetrated the respective societies, influencing them, and how the social structures of those societies in turn influenced and modified the innovations' original intent. I will come back to this point later.

I concur with DeSanctis that the decision-making and the institutional schools are not appropriate modes of explanation and analysis if taken independently from each other. Technology's or society's impacts can't be unidirectional and isolated from their surroundings: "P2. Use of AIT structures may vary depending on the task, environment, and other contingencies that offer alternative sources of social structures" (DeSanctis, p. 128). The actual process of innovation is based on social interaction. As such, new technologies come to light due to changes necessitated by organizational and institutional forces, and by society.

It is an indisputable fact that managers must communicate with their employees, peers, and superiors in order to motivate and lead their employees, to learn about their operating environments and make successful decisions, as well as to act in their roles as figureheads, monitors, spokespersons, disseminators of information, and facilitators (Trevino 1987, p. 71). In doing so, managers make conscious and unconscious decisions in choosing the ‘appropriate’ medium (face-to-face, telephone, video conferencing, audio conferencing, electronic mail, letter, memo, special report, flier, or bulletin) for the communication task at hand. Trevino, basing her argument in the symbolic interactionist perspective and on the rich-lean media scale, implies that message equivocality, contextual determinants, and the symbolic cues conveyed by the medium itself, above and beyond the literal message, determine a manager’s media choice for a particular communication task (p. 74). Alavi (2001) suggests that media choice directly influences social presence and task participation. This influence is lower under established-group conditions than with zero-history groups (p. 375).

Many aspects of media choice for mediated communication in workplace environments have evolved and developed since Trevino’s article was published in 1987. Similarly, the situational perception of various media has also changed, and the media have been accepted by employees in organizations expected to utilize them for work-related activities. A suggested course of study could analyze the acceptance and the shifting social presence of these media over time. I would like to argue that as people become more familiar with a medium, the same medium (with the same technical characteristics) could be used for more equivocal interactions, even to the point where a medium perceived to be lean and a medium perceived to be rich can be used interchangeably for the same equivocal message. This argument seems to be partially supported by Alavi when she suggests that a particular medium has a different impact under established-group vs. zero-history-group conditions.

Cramton’s article identifies and analyses a multitude of problems constituting failures in the process of establishing and maintaining mutual knowledge (failure to communicate and retain contextual information, unevenly distributed information, difficulty communicating and understanding the salience of information, differences in speed of access to information, and difficulty interpreting the meaning of silence), as well as a few mechanisms for establishing and maintaining mutual knowledge (direct knowledge, interactional dynamics, and category membership). Both the problems constituting failures and the mechanisms for establishing mutual knowledge have helped me explain the behavior of members of project teams (dispersed and collocated) that I have been involved in, and they appear to be good candidates for analyzing my involvement in future projects in the workplace.

The definition of mutual knowledge as “the knowledge that communicating parties share in common and know they share” (Cramton 2001, p. 346) is an appropriate assumption, based on various cultural, anthropological, and communication studies, as well as on our everyday experience that we exchange information with others having in mind the contextual and situational background that helps us understand and interpret each other. Only with a common/shared understanding, where interpretation and the meaning-making process are compatible, can we understand each other and actually communicate. In relation to organizational settings, the failure to establish and maintain mutual knowledge has negative effects on a dispersed team’s decision quality, productivity, and relationships (p. 349).

Social Shaping of ICTs and Evaluation

| Permalink | 2 TrackBacks

Kling’s article addresses interesting issues relating to how computing has affected social structures, both institutional (corporate and non-corporate) and public, and also how the underlying social structures have influenced computing. The article ought to be read in light of the fact that it was published in 1980 and that it is a meta-analysis: it examines various studies and research that analyzed computing and computers from 1950 to 1979. Besides, we need to be mindful that the notion of computing and computers prior to 1980 was somewhat different from the way we perceive it today. Considering that there were 200,000 computers in use in the US (Kling, p. 63), that gives us roughly one computer per thousand people, with a rough estimate of 200 million people living in the US.

In addition, the pervasiveness of computing technology before 1980 was very low compared to today. At that time, computers were mostly expensive central mainframes used by corporations, institutions and government agencies, accessible only via terminals, and used strictly for business. The concept of the personal computer as we know it today was only an idea for the future. So, the actual ‘use’ of computers was perhaps a few orders of magnitude lower than the one-computer-per-thousand-people figure suggests. Many users were only secondary users of computer functions/services, usually via an intermediary, such as police officers in the field checking police records via dispatchers during their work hours. Further, computer technology before 1980 was primarily used as a data-processing aid for cranking out reports, statistical analysis, and efficient and accurate reporting. This mechanical viewpoint reinforces the idea that computers are like any other resource at a manager’s disposal, to be used for the goals of institutions and corporations, regardless of whether they are used for innovation, work, life, decision making or organizational power.

“The new institutionalism in organization theory tends to focus on a broad but finite slice of sociology’s institutional cornucopia: organizational structures and processes that are industry wide, national or international scope” (Powell et al, p.9)

“Institutionalized arrangements are reproduced because individuals often cannot even conceive of appropriate alternatives (or because they regard as unrealistic the alternatives they can imagine). Institutions do not just constrain options: they establish the very criteria by which people discover their preferences. In other words, some of the most important sunk costs are cognitive” (Powell et al, p.11)

Starting from the premises of new institutionalism, with its scope, constraints and criteria establishment, Orlikowski and Barley (2001) proceed to elaborate that information technology (IT) research and organization studies (OS) have much more in common than has so far been acknowledged in the scholarly communication and practice of both areas of study.

IT research is mostly practical in nature, dealing with the design, deployment, and use of artifacts that represent tangible solutions to real-world problems (Orlikowski et al, p.146), while OS is theoretical, as it develops and tests parsimonious explanations for broad classes of phenomena (p.147); moreover, "organization studies (OS) and information technology (IT) are disciplines dedicated respectively to studying the social and technical aspects of organization" (p.146). On this basis, they posit that the differences between IT research and OS are epistemological in nature, not differences in subject matter: the two treat the issues of organization at different levels, emphasizing the particular and the general respectively: "There can be no general knowing that is not somehow grounded in particulars and no particular explanation without some general perspective. Particulars are important for theory building, and theory is important for making sense of the specific" (p.147)

The translation process enables an actor/entity (simple or complex) to inscribe its properties and attributes onto other actors in the pertinent topologies. This suggests that there is a movement of some sort from one actor to another. Certainly, in any given topology not all actors are able to inscribe their properties and attributes equally onto other actors. Some properties and attributes are more prevalent in any given topology. What determines the strength of the attributes and the properties?

As with all things in our lives, some things are more prone to change than others. For example, when a new information system is brought into an organization, the appropriation process might modify the information system to a great degree to fit the organization's needs. At other times, the organizational structure or tasks might change as a result of the appropriation of a system that does not allow much modification of its pre-defined functionality.

It appears that the properties and attributes of actors can be grouped into at least two categories: 1) intrinsic properties and attributes - those that are not modifiable as a result of links to other actors; 2) external properties - those that have been acquired and appropriated through the modification/translation process and are further modifiable.

Translation in actor-network

why actor-network?

| Permalink | 5 TrackBacks

In Social constructionism vs. technological determinism it was suggested that actor-network theory and its methodological framework may provide the language and the mode of explanation needed to elaborate, within a common framework, the interplay between human and non-human entities.

Most importantly, the major contribution of actor-network theory seems to be that it treats human and non-human elements (or actors, as the various elements in a given topology are named in actor-network language) alike, as being able to influence each other.

For example, a network topology representing a department in a given organization may consist of various human and non-human actors such as employees, manager(s), inter- and intra-departmental structures, communication channels, forms of communication, information and communication systems, meetings, tasks, routines, etc. All of these actors are connected to each other via links (single or multiple).

So, what next? Well, if actors are linked to each other, they can potentially influence each other. For example, given the departmental structure, the manager has a direct link/communication with the employees and in many cases affects how the employees do their jobs. At the same time, the employees may affect how the manager does his/her job regarding a particular project. However, the influence that the manager can exert on the employees is perhaps stronger than the influence any particular employee might be able to exert on his/her manager. Here we see the actor 'structure' acting as a moderating actor in the communication/link between the manager and the employees.
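Purely as an illustration (not part of the original argument), the kind of topology just described can be sketched as a small directed graph of human and non-human actors with weighted influence links; all actor names and weights below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Actor:
    name: str
    human: bool  # actor-network theory treats human and non-human actors alike

@dataclass
class Topology:
    # (source name, target name) -> influence weight; weights are hypothetical
    links: dict = field(default_factory=dict)

    def link(self, source: Actor, target: Actor, weight: float) -> None:
        self.links[(source.name, target.name)] = weight

    def influence(self, source: Actor, target: Actor) -> float:
        # 0.0 means no direct link between the two actors
        return self.links.get((source.name, target.name), 0.0)

manager = Actor("manager", human=True)
employee = Actor("employee", human=True)
system = Actor("information system", human=False)

dept = Topology()
dept.link(manager, employee, 0.8)  # manager's influence assumed stronger
dept.link(employee, manager, 0.3)
dept.link(system, employee, 0.5)   # a non-human actor also exerts influence

# The asymmetry described in the text: manager -> employee influence
# exceeds employee -> manager influence.
assert dept.influence(manager, employee) > dept.influence(employee, manager)
```

The point of the sketch is only that human and non-human actors sit in the same structure and differ merely in the strength and direction of their links.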

Another example would be the use of a particular information system for performing certain project-related tasks. If a particular system is already being used for given tasks, some limiting capabilities of the system, when it is used for a similar task, will affect how that task is performed by the employees. When cost becomes an issue (we can't always have systems changed the way we want), the functionalities of a particular system might even define the departmental structure and the scope of the task. Here we see an example of an information technology actor/artifact having a say in how tasks are performed.

If actors in a given topology can affect each other, what, then, are the properties and attributes of the actors and links that can further help us elaborate and explain the nature of a particular topology?

Social constructionism vs. technological determinism

| Permalink | 5 TrackBacks

The discourse regarding the development and utilization of technology in general and information related technology in particular has for the most part swung between the technological determinism and social constructionism viewpoints.

Each of these viewpoints, when taken separately and independently of the other, presents a radical perspective that dismisses the other's theoretical and practical explanations. While these two perspectives have answered many questions (those answerable from within their own frameworks) regarding the development, innovation and use of information-related technologies, a great number of issues have remained unanswered because they are too complex to be explicated by technological determinism or social constructionism alone. Put differently, when an attempt is made to explain an issue or research problem using only the theoretical and methodological framework of technological determinism, or only that of social constructionism, the resulting analysis and conclusions are incomplete. Why so?

For example, if one is to research the usability of collaboration tools in an organizational setting, social constructionism for the most part takes the view that information and communication technologies are just tools to be used by employees to perform their assigned tasks, and that these tools do not affect the employees or the relevant social structures. On the other side, technological determinists consider the effect that these tools will have on the employees and the surrounding organizational structures as a result of their use.

It is almost obvious that in real life social structures affect the development and design of information technology, while information technology in turn affects social structures and how we use them. More often than not, in our workplaces we complain that we can't perform a particular task due to technological constraints emerging from the technology we are supposed to use to get our work done. In response, if we can't modify the tools, we modify our processes and tasks so that they are workable within the functionality provided by these tools.

As neither perspective alone can provide a complete answer to such issues as the usability and utilization of collaboration tools, a common ground between social constructionism and technological determinism needs to be found.

Perhaps actor-network theory and its methodological framework provide a plausible alternative?

By Mentor Cana, PhD
more info at LinkedIn
email: mcana {[at]} kmentor {[dot]} com
