July 2004 Archives

In P2P TV - How Independent News Video Producers Will Bypass The Mainstream TV Networks Robin Good brings forth an interesting and almost self evident argument about the potential effect of P2P TV to empower the masses by bypassing the mainstream TV networks.

To further support this position, here are some thoughts built upon Gitlin's (1980), Schiller's (1996), Streeter's (1996) and Fiske's (1996) arguments, emphasizing that open, many-to-many communication is the technology that can liberate audiences from the centralized grip of the media as it has been set up so far.

Evident from Gitlin's and Schiller's arguments is their emphasis on the necessity of free and open communication among the masses if there is to be any deliverance from the 'claws' of the media. By contrast, it is one-way communication (radio, TV, cable) that the elites utilize to achieve subordination and to disseminate the hegemonic ideology. Fiske's technologised surveillance of the physical goes hand-in-hand with surveillance of the discourse (what issues are raised on TV, radio, etc.) "because unequal access to those technologies ensures their use in promoting similar power-block interests" (Fiske 1996, p. 218). The important point brought forth here, directly or indirectly, is the identification of communication technology as closed, unidirectional (with the masses on the receiving end) and restricted in access.

These aspects are identified as necessary characteristics for the maintenance and reproduction of the hegemonic ideology, enabling the elites to set the form, format and content of the public discourse (broadcasting, TV, radio, press, etc.) and, just as importantly, to decide who can participate. It can therefore be argued that this manifestation of communication technologies, entangled in the web of one-way communication and used by the elites for power control and for disseminating material in support of the hegemonic ideology, has shaped traditional scholarly and public discourse, as well as practical use, to view communication technology as intrinsically embedded with features, characteristics and functionalities that reinforce and aid the hegemonic ideology.

This biased view, that communication technologies are inherently suited to aiding media control, is troublesome and factually wrong. For example, the scholarly and public discourse on early cable technology shows that cable access was intended for uses quite unlike today's (the dissemination of popular consumer culture through its various formats with the aim of making profit). Streeter (1997) argues that cable "had the potential to rehumanize a dehumanized society, to eliminate the existing bureaucratic restrictions of government regulation common to the industrial world, and to empower the currently powerless public" (Streeter 1997, p. 228). He further notes that the cable system had the potential to enable two-way communication and interactivity, but apparently failed to do so due to the lack of social response on the part of the audience: "Cable television was something that could have an important impact upon society, and it thus called for a response on the part of society; it was something to which society could respond and act upon, but that was itself outside society" (Streeter 1997, p. 225). He then adds that cable should not be viewed as an "autonomous entity that had simply appeared on the scene as the result of scientific and technical research" (Streeter 1997, p. 225). Here we see the distinction between the current social status of cable as profit-making machinery and its potential to have become a socially responsible technology that would have empowered the audience with two-way open communication.

Fiske, J. (1996). Media Matters: Race and Gender in U.S. Politics. Minneapolis: University of Minnesota Press.

Gitlin, T. (1980). Chapter 10, "Media Routines and Political Crises." In Gitlin, The Whole World is Watching (pp. 249-269). Berkeley: University of California Press.

Schiller, H. I. (1996). Information Inequality: The Deepening Social Crisis in America. New York and London: Routledge.

Streeter, T. (1996). Selling the Air: A Critique of the Policy of Commercial Broadcasting in the United States. Chicago: University of Chicago Press.

quantum information science


The article Rules for a Complex Quantum World: An exciting new fundamental discipline of research combines information science and quantum mechanics presents a fundamentally new way of looking at information science. As a framework in the making, it builds upon Shannon's information theory and Buckland's "information-as-thing", as well as quantum physics. This approach appears closer to physics than contemporary information science studies, which deal with information primarily from the meaning-making viewpoint.

Could this lay the groundwork for a unified theory of information?

The 'digital divide' and the rest of the population


It seems as if the discourse regarding reducing or eliminating the 'digital divide' has become a fashion and a trend of sorts. Largely absent from this discourse, and from the various initiatives aimed at narrowing the gap between the digital haves and have-nots, are the forgotten ones: the portion of the population in any society (country, region, etc.) that will probably never get online, for a variety of reasons.

While the aim of the Maltese government, as expressed in the following article (New IT strategy launched to eliminate digital division), is a genuine one, with the necessary inclusion of relevant civic organizations alongside government and corporate organizations: "The Prime Minister and Minister explained that this strategy came about through a wide process of consultation following the setting up of National Council for Information Society (NISCO) which is made up of the governments, unions, political parties, members of civic society and industrial organizations and technology", there is a real concern that the digital divide might widen even further if all efforts shift towards the 'digital realm' while attention to the 'non-digital realm' is reduced.

Considering that a portion of the population will never catch the digital train, an ever-greater emphasis on the 'digital realm' will disenfranchise a great many people. It is all well and good to want everyone on the digital train; serving the public might become more efficient. However, it should not be forgotten that many people will not catch the digital train in their lifetime, and they should not suffer because of it. Imagine going to a government office and being told that you must navigate a complex computerized menu system to obtain certain information, when you have never touched a computer in your life, or only know how to send e-mail.

Hidden costs of open source


Upon reading Hidden costs of open source one starts wondering what exactly are the 'hidden costs' the article insinuates. The author suggests that the cost of learning how to use (install, maintain, and run) a particular piece of software is a hidden cost.

"There we are. Cost again. If it's so easy to use and it is reliable (one assumes it's reliable since apparently Nasa is using it to run mission critical applications, although that would put me off becoming an astronaut), why am I asked to shell out $1,500 for entry-level support? And support costs can go as high as $62,400 - hardly a cheap option."

But this is nothing new with either commercial packages or open source software. Using any complicated piece of software requires learning and maintenance, regardless of whether it is closed or open source. The expense of learning and maintenance hardly qualifies as a 'hidden cost'. And guess what: you don't have to buy support from the actual developers of the open source software. You can learn it on your own and do it yourself, or hire any of the competing training and support consultants. One wonders why this article was even published as a serious discussion point. Hmm…

the social construction of Unix, C, and Linux


From Unix's founding fathers:

"It is that interplay between the technical and the social that gives both C and Unix their legendary status. Programmers love them because they are powerful, and they are powerful because programmers love them. David Gelernter, a computer scientist at Yale, perhaps put it best when he said, “Beauty is more important in computing than anywhere else in technology because software is so complicated. Beauty is the ultimate defence against complexity.” Dr Ritchie's creations are indeed beautiful examples of that most modern of art forms."

My emphasis in bold; couldn't have said it better. After all, we knew that coders and programmers are not "lone scientists". :)

finding open source code


From IST Results - Swift searching for open source:

Finding the open source code you need can often seem like searching for a needle in a haystack. But with the development of the AMOS search engine finding your way through today’s maze of software code has just become considerably easier.
Aimed at programmers and system integrators but with the potential to be used by a broader public, the AMOS system applies a simple ontology and a dictionary of potential search terms to find software code, packages of code and code artefacts rapidly and efficiently. In turn it assists open source program development through making the building blocks of applications easier to find and re-use.
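The mechanism described above, expanding a query through an ontology before matching it against indexed code artefacts, can be sketched roughly as follows. This is a toy illustration only: the ontology, corpus, and function names here are invented, since the article does not describe AMOS's actual internals.

```python
# Toy sketch of ontology-assisted search over code artefacts.
# The ONTOLOGY and CORPUS data are invented for illustration.

# A tiny "ontology": each concept maps to related search terms.
ONTOLOGY = {
    "sort": ["sort", "order", "quicksort", "mergesort"],
    "hash": ["hash", "digest", "checksum"],
}

# A tiny corpus of code artefacts: name -> description.
CORPUS = {
    "libqsort": "in-place quicksort implementation in C",
    "md5sum": "computes an MD5 digest checksum of a file",
    "treemap": "balanced tree map with ordered iteration",
}

def expand(query: str) -> set:
    """Expand a query term with ontology-related terms."""
    terms = {query.lower()}
    for concept, related in ONTOLOGY.items():
        if query.lower() == concept or query.lower() in related:
            terms.update(related)
            terms.add(concept)
    return terms

def search(query: str) -> list:
    """Return artefacts whose description mentions any expanded term."""
    terms = expand(query)
    return [name for name, desc in CORPUS.items()
            if any(t in desc.lower() for t in terms)]
```

With this expansion step, a query for "sort" also surfaces artefacts described only as "ordered" or "quicksort", which is the kind of recall boost an ontology-backed code search engine aims for.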

introducing the Common Information Environment


From Towards the Digital Aquifer: introducing the Common Information Environment:

Google [1] is great. Personally, I use it every day, and it is undeniably extremely good at finding stuff in the largely unstructured chaos that is the public Web. However, like most tools, Google cannot do everything. Faced with a focussed request to retrieve richly structured information such as that to be found in the databases of our Memory Institutions [2], hospitals, schools, colleges or universities, Google and others among the current generation of Internet search engines struggle. What little information they manage to retrieve from these repositories is buried among thousands or millions of hits from sources with widely varying degrees of accuracy, authority, relevance and appropriateness.
This is the problem area in which many organisations find themselves, and there is a growing recognition that the problems are bigger than any one organisation or sector, and that the best solutions will be collaborative and cross-cutting; that they will be common and shared. The Common Information Environment (CIE) [3] is the umbrella under which a growing number of organisations are working towards a shared understanding and shared solutions.

socio-political and economical twist to open source


Personal view: Open source may be next business revolution reviews the new book "The Success of Open Source" by Steven Weber, a professor of political science at the University of California at Berkeley.

I have not read the book yet, but judging from the article it seems like interesting reading. Here are some excerpts:

"His claim, and it's a bold one, is that this isn't just a good way of developing software, it's a new way of organising businesses. Open-source software breaks the links between developing a product and owning a product, which is the way business has traditionally organised itself. That could have startling consequences.
It's rare to find a professor of politics discussing software. "People in academic subjects are very conservative about their disciplines," Weber says. "So people are intrigued, but also a little bit nervous about an approach like this."

"Think back to the invention of the steam engine. By the standards of the time, building a railway was so complicated and so costly that none of the existing organisational forms could handle it. So the joint-stock company and the stock exchange rose to prominence. Something similar may be happening now."

accessing the "collective intelligence"


Commenting on George Por's article, Steven Cohen discusses the value of blogging and other tools supporting collaboration in building a collective intelligence.

While we have many blogging and other social software tools that enable the 'creation' of the collective, how do we harness the "collective intelligence" once it is 'there'/'built'? It would seem that other tools are needed to enable quick and relevant utilization of the collective intelligence. So far, blogging tools have done a great job of enabling the representation of the collective intelligence. What they lack is the ability to act as enablers for utilizing the available collective knowledge.

It seems that the next wave of social networking and collaboration tools will (or should) concentrate more on finding relevant and appropriate 'intelligence' somewhere in the collective pool. Needless to say, search engines are not well suited to this type of activity, since they concentrate primarily on topical relevance and do little or nothing about spatial, temporal, methodological, contextual, process-specific, and task-specific relevance.

Alan Kay's food for thought regarding personal computing


Alan Kay's food for thought as reported in A PC Pioneer Decries the State of Computing, regarding personal computing:

But I was struck most by how much he thinks we haven't yet done. "We're running on fumes technologically today," he says. "The sad truth is that 20 years or so of commercialization have almost completely missed the point of what personal computing is about."

But what about all those great things he invented? Aren't we getting any mileage from all that? Not nearly enough, Kay believes. For him, computers should be tools for creativity and learning, and they are falling short. At Xerox PARC the aim of much of Kay's research was to develop systems to aid in education. But business, instead, has been the primary user of personal computers since their invention. And business, he says, "is basically not interested in creative uses for computers."

Note the emphasis that computers could and should have been used more for creative processes and learning. The potential is there; however, the social construction of computing technologies has mostly been led by commercial goals. Thus, the interplay of computing technology and social structures has mostly served commercial interests, and much less the potential for creativity, invention and innovation.

The question then arises: how do we get to more creative uses of technology for learning and novel kinds of innovation? Open source computing, perhaps, where computing tools are geared more towards learning and act as stimuli for creative innovation. But then, anything creative that can make money is imprisoned within the commercial realm and loses its potential for learning and creativity. A way needs to be found for creativity to bloom within its own realm, free from commercialization. Proprietary software, being a closed environment, is responsible for slowing down innovation and creativity. I would say: the way forward is towards open computing…

The Role of Children in the Design of New Technology


The Role of Children in the Design of New Technology

Children play games, chat with friends, tell stories, study history or math, and today this can all be done supported by new technologies. From the Internet to multimedia authoring tools, technology is changing the way children live and learn. As these new technologies become ever more critical to our children’s lives, we need to be sure these technologies support children in ways that make sense for them as young learners, explorers, and avid technology users. This may seem of obvious importance, because for almost 20 years the HCI community has pursued new ways to understand users of technology. However, with children as users, it has been difficult to bring them into the design process. Children go to school for most of their days; there are existing power structures, biases, and assumptions between adults and children to get beyond; and children, especially young ones have difficulty in verbalizing their thoughts. For all of these reasons, a child’s role in the design of new technology has historically been minimized. Based upon a survey of the literature and my own research experiences with children, this paper defines a framework for understanding the various roles children can have in the design process, and how these roles can impact technologies that are created.
(Full Paper in PDF)

open access a danger to professional societies?


This is a follow-up to my previous entry (A shift in scholarly attention? From commercial publishing to open access publishing) prompted by Open Access? Some Sparks Fly at ALA. (thanks to Open Access News).

In the article, IEEE's Durniak makes the following unsubstantiated statement: "Free open access runs the risk of destroying professional societies."

One could do an extensive analysis to show that the above statement is not necessarily true. However, it suffices to note that commercial publishers are only one of the actors in the scholarly publishing cycle. As such, the totality of the functions performed by commercial publishers could certainly be taken over by the professional societies themselves, or perhaps by a non-profit umbrella organization handling scholarly publishing for various professional societies.

It is unwarranted and presumptuous for commercial publishers to claim that without them the entire scholarly publication process would fail and professional societies would be destroyed. It is true that commercial publishers provide value-added services. However, none of these services are beyond the competency of the professional societies themselves, especially with all the open source software available. Even if professional societies had to hire IT staff to maintain the process, it would certainly cost less than what host institutions now pay to buy back the intellectual output of their own staff.

Sooner or later, commercial publishers will have to relax a bit and see how they can honestly contribute to the process of moving to open access. Their stakeholders might not be happy but, hey, the dynamic is changing and the power base is shifting.

Can the argument for why scholarly publishing should not be in the hands of commercial entities get any clearer than this? From A Quiet Revolt Puts Costly Journals on Web:

"Elsevier doesn't write a single article," said Dr. Lawrence H. Pitts, a neurosurgeon at the University of California at San Francisco and chairman of the faculty senate of the 10-campus system. "Faculty write the articles for them, faculty review the articles for them and faculty mostly edit the journals for them, and then we get to buy the journals back from a company that makes a very large profit."

It appears that the players in the scholarly publishing process (scholars, editors, publishers, etc.) are well aware that the current, commercially driven process will not be sustainable for long. Fueled by the openness of the Internet, scholars and academics have the necessary technology and expertise to publish without the involvement of commercial entities. The money that today is taken as profit by commercial entities could instead be used for further research and academic pursuits.

In the inevitable move from commercial publishing to open access, the entire dynamic of the publishing process will undoubtedly change. But change is not bad. Many realignments will occur. The moment established scholars start publishing in open access publications, the tide will turn.

Or, if there is resistance, the problems addressed by a certain field or discipline might shift towards those addressed in the open access journals, due to their wider distribution and open access. The move towards open access publishing might thus even realign the types of problems addressed by a given scholarly community.

An important analysis in this respect is presented by Kling and Covi (1995), suggesting that the medium of information transfer and exchange (paper vs. electronic) might induce a shift in the scholarly discourse of a particular discipline. They argue that the highest-status scientists usually publish in well-established journals, which at the same time usually define the scope and the problems of the field (Kling and Covi 1995, p. 10). Scientists and scholars of a status just below the highest are then likely to publish in an e-journal (usually open access) because of its speed of distribution and the visibility afforded by a very large readership (Kling and Covi 1995, p. 10). If enough second-tier scientists start publishing in e-journals, the interests and problems treated in those e-journals might, sooner or later, shift away from the problems treated in the paper journals, thanks to the speed of distribution, while the e-journals gain legitimacy and a perception of good quality. This would also mean that the medium is the message (in McLuhan's sense), with the medium appearing to shift the scholarly discourse of a field or discipline.

Kling, R. and Covi, L. M. (1995). Electronic Journals and Legitimate Media in the Systems of Scholarly Communication. The Information Society, 11(4), 261-271. (Accessed at: http://www.slis.indiana.edu/TIS/articles/klingej2.html)

By Mentor Cana, PhD
more info at LinkedIn
email: mcana {[at]} kmentor {[dot]} com

About this Archive

This page is an archive of entries from July 2004 listed from newest to oldest.

June 2004 is the previous archive.

August 2004 is the next archive.

Find recent content on the main index or look in the archives to find all content.