Cool! I have been accepted to participate in the Summer Institute of the Consortium for the Science of Socio-Technical Systems (CSST) 2010, funded by NSF. I'm looking forward to meeting smart recent PhD graduates as well as the instructors and faculty. The event will take place at Skamania Lodge in Stevenson, WA. Here is my research summary for CSST 2010.

My dissertation abstract

| Permalink

Title
Open Access Repositories in the Cultural Configuration of Disciplines:
Applying Actor-Network Theory to Knowledge Production by Astronomers and Philosophers of Science

Abstract
This qualitative study provides an understanding of the role of self-archived disciplinary open access repositories in the cultural configuration of scholarly disciplines. It examines how the technological and organizational layers of access tools and open access repositories, together with the layer of researchers' lived experiences and perceptions, bear on researchers' localized knowledge production contexts and on the construction of disciplinary knowledge production contexts. Actor-network theory, which posits that technological and social actors reciprocally affect each other, is applied to compare and contrast the information practices of two groups of researchers: the use of arXiv by astronomers, and the use of PhilSci by philosophers of science. Six astronomers and five philosophers of science were identified through purposeful selection. The interviews with the researchers were conducted over a period of five months and ranged in length from 40 to 75 minutes. Primary documentary evidence describing open access repositories and access tools is also used for the analysis. The findings show that the open access repositories, the access tools, and researchers' individual knowledge production contexts are co-constructed as researchers search, discover and access scholarly artifacts. Open access has affected researchers' knowledge production by realigning existing processes and by instigating the emergence of new actors and constructs. Four themes emerge as researchers articulate their perceptions about the value and the role of open access: impact on scholarly process, impact on scholarly output, integration with scholarly context, and democratization of scholarly discourse. Congruent with the domain-analytic approach, two distinct socio-technological models emerge. Astronomers perceive arXiv as important and critical in their scholarly information practices, with a central role in their discipline. Philosophers of science, however, perceive PhilSci as having limited value in their scholarly information practices and a rather minimal role in their discipline. The properties of disciplinary cultures, such as the mutual dependence between researchers and the task uncertainty in a specific discipline, are implicated in the appropriation of the open access repositories and access tools at the individual and disciplinary levels. The socio-technological co-constructionist approach emerges as a viable theoretical and methodological framework for explicating complex socio-technological contexts.

Done with my dissertation

| Permalink

This past Tuesday I passed my dissertation defense, and I gave my public dissertation presentation this afternoon. Yes, I'm all done! :) Hopefully now I will have more time to keep writing here.

Dissertation title:
Open Access Repositories in the Cultural Configuration of Disciplines:
Applying Actor-Network Theory to Knowledge Production by Astronomers and Philosophers of Science

In 'response' to Theories informing my research, I would like to bring attention to another issue of concern: the empowering or restrictive effects that tacit and explicit theories have on an individual's way of thinking and research.

Sooner or later, many of us are guided in our research work by a set of theories, frameworks and paradigms, some of them tacit and some explicit. They direct our research within the appropriate and relevant scholarly community, thus increasing the chances for scholarly collaboration and communication with like-minded folks.

However, the same theories, paradigms and frameworks also limit our imagination and innovative thinking; they create the box within which we think and operate. Thus, they can have a potentially negative effect by filtering away problems and issues that merit scholarly scrutiny but never receive it, because our mode of thinking does not allow them to reach us.

In this sense, the explicit theories and frameworks we subscribe to are perhaps less inhibiting to our ability to explore and innovate beyond our current interests. We are well aware of the explicit theories, we use them to conduct our research, and we can decide to go beyond them.

The tacit theories seem to be more inhibiting than the explicit ones. Because of their tacit nature, they direct our research in ways we might not be aware of, and thus we do not know how to go beyond them and expand our mode of thinking.

Certainly, there is a benefit in a structured way of thinking and research; awareness of it helps us position ourselves and our work within the relevant communities of practice. However, oftentimes excessive structure in our way of thinking might deprive us of the ability to see various phenomena with a fresh 'eye'.

How does one go about identifying and discovering one's own tacit theories, frameworks and paradigms?

(Originally published Nov 18, 2004)

back after some time away from my blog

| Permalink

After a few years of quiet on my blog, I finally think I have some time to write again. Not that I have not been writing for the past two years; I have actually been writing more, but around my qualifying exam and my dissertation proposal. Finally, after passing my qualifying exam last year, I'm almost done with my proposal. One more meeting with my committee and I should be ready to start data collection and work on some preliminary data analysis.

I'll write more about this later, but my dissertation is about scholars' interaction with open access repositories. Given that OA is a new phenomenon in scholarly communication, I thought it would be valuable to understand it in greater depth.

What if they gagged Gutenberg? Big telecom is trying to throttle free access to democratic Internet

Excerpts:
Five-hundred years ago, we had Johann Gutenberg, a German metalworker and inventor who pioneered the precursor to the Internet. His printing press became the first practical mass communications medium utilizing what was then an advanced memory technology -- paper.

Soon after, there was Martin Luther, a German theologian and priest who fervently believed the church had departed from the teachings of the Bible. In 1517, Luther began printing pamphlets condemning the church, and within several months his 95 Theses was being read all over Europe.

...

Imagine if the leaders of 16th century Germany, feeling threatened by the democratizing forces of the printing press, had taken Gutenberg's invention and limited its use to those they politically agreed with -- or if Luther had to pay licensing fees for nailing up his 95 Theses on every church door in Germany.

That's what big telecom is trying to do: shut the democratic architecture of the Internet. By creating two "tiers" -- one that is fast and charges fees to Web site owners -- and a second class Web that is cheaper and slower and could limit access to independently run sites -- big telecom is hoping to make a larger profit off the Internet.

In other words, opponents to the Internet's open and free access are trying to change the rules -- and they're trying to mislead you, claiming that they're against regulation and that they only want you to pay for the rising cost of their "pipes." That's information warfare.

Open Content Alliance Rises to the Challenge of Google Print

| Permalink

Open Content Alliance Rises to the Challenge of Google Print

Excerpt:

October 3, 2005 — What a great idea! Why didn’t we think of that? Google Print’s ambitious effort to digitize the world’s book literature has inspired others to initiate their own effort. And, with the Google Print program caught in the snag of a copyright lawsuit, the sight of a relay race handoff keeps hope burning for a brighter digital future. The just announced Open Content Alliance (OCA; http://www.opencontentalliance.org) creates an international network of academics, libraries, publishers, technological firms, and a major search engine competitor to Google—all working on a new mass book digitization initiative. The goal of the effort is to establish a flexible, open infrastructure for bringing large collections of digitized material into the open Web. Permanently archived digital content, which is selected for its value by librarians, should offer a new model for collaborative library collection building, according to one OCA member. While openness will characterize content in the program, the OCA will also adhere to protection of the rights of copyright holders.

OCA founding members include the Internet Archive; Yahoo! Search; Hewlett-Packard Labs; Adobe Systems; the University of California; the University of Toronto; the European Archive; the National Archives (U.K.); O’Reilly Media, Inc.; and Prelinger Archives. The Internet Archive (http://www.archive.org), which is led by Brewster Kahle, will provide hosting and administrative services for a single, permanent repository. Technological and some financial support will come from Adobe and Hewlett-Packard. Yahoo! Search will supply initial search engine access as well as technological support and some funding.

Yahoo launches Creative Commons search

| Permalink

From Yahoo launches Creative Commons search:

Excerpt:
The Yahoo Search for Creative Commons makes it easier to locate Web content with a Creative Commons license. Creative Commons is a nonprofit organization that offers flexible copyrights for creative works. The group builds upon the traditional "all rights reserved" form of copyright to create a voluntary "some rights reserved" copyright, according to Creative Commons. Tools from Creative Commons are free and the organization offers its own search engine.

lists are dead? not really...

| Permalink | 1 Comment

To say that listservs (i.e., news lists and discussion lists) are dead is a bit premature. Discussion lists and news lists serve different needs than RSS and blogs, though they do overlap at certain levels. At best, they complement each other.

For example, I have plenty of news and discussion lists subscriptions, as well as plenty of RSS feeds. Over the past year I have supplemented some of my news lists with RSS feeds whenever possible.

However, as far as discussion lists are concerned, RSS is no replacement. Some people prefer to get their discussion lists in their e-mail, filtering each list into a separate e-mail folder. Setting up e-mail filters is no harder than setting up RSS feeds. Webboards are not a total replacement for discussion lists either; rather, a mix of discussion lists and webboards has sprung up.
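
To make the effort comparison concrete, here is roughly all it takes to 'set up' an RSS subscription: a minimal Python sketch using the feedparser library (the feed URL is a placeholder, not a real endpoint). An e-mail filter rule is a comparable one-time step.

    # Minimal sketch: polling one feed is about as much one-time
    # setup as writing one e-mail filter rule. Placeholder URL.
    import feedparser

    feed = feedparser.parse("http://example.org/lists/web4lib.rss")
    for entry in feed.entries:
        print(entry.title, "->", entry.link)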

Also, let's not forget that throughout the world there are plenty of places where broadband is not readily available and will not be in the near future. There, e-mail discussion lists are much easier to deal with, since the e-mails come to you, versus having to browse badly designed, graphics-heavy webboards over slow dial-up connections.

So, rather than saying that listservs (that is, lists) are dead, I think they will coexist with other tools such as RSS and blogs, complementing each other since their tasks are different.

why i'm in academia

| Permalink

apophenia: why i'm in academia is a very interesting and thoughtful post by Danah. More or less, I could have written the same; I feel the same way. Managing and balancing industry experience and involvement while pursuing an academic path is not easy, but it is certainly challenging. Individuals in such positions can act as catalysts for learning experiences in both directions.

OPENING THE GATES TO INFORMATION COMMONS

| Permalink

OPENING THE GATES TO INFORMATION COMMONS
(ShelfLife, No. 189, January 13, 2005, ISSN 1538-4284,
http://www.rlg.org)
While respecting the right of corporations to charge for information, some information professionals are calling for fewer restrictions on its distribution and are lobbying for, or actively participating in, the creation of "information commons" -- a new way of producing and sharing information, creative works and democratic discussions. Like information portals, these "commons" (drawn from the historical existence of the English commons -- pieces of land to which members of a community had specific rights of access) are digital repositories of thematically related information. The information may include everything from scholarly journals to information on knitting. However, instead of being run by corporations, they tend to be run in a collective manner by like-minded individuals -- associations or university departments for instance -- and they are accessible to all. Proponent Marjorie Heins, a former American Civil Liberties Union lawyer and founder of Free Expression Policy Project, doesn't support free distribution of all information; her main concern is the "copyright mentality" that sees media giants attempting to squeeze the last dollar out of all content they control "rather than striking a more reasonable balance between fair return for effort and tying up information... The balance has gone awry." (Information Highways Nov-Dec 2004)
http://www.econtentinstitute.org/issues/ISarticle.asp?id=157663&story_id=42354113250&issue=11012004&PC

From Preparing tomorrow's professionals: LIS schools and scholarly communication:

How are LIS schools preparing tomorrow's academic librarians to deal with the emerging changes in scholarly communication? What more can they do? In this brief overview, we will look first at specialized courses dealing with various aspects of scholarly communication that have been added to the curriculum in many schools. The next section will look at how existing courses have been modified to include scholarly communication. Finally, we will explore the benefits of field experience, graduate assistantships and participation in institutional projects.

The authors present some interesting insights about current curricula across US schools.

In conclusion, I think there should be a stronger emphasis on the role and implications of digital libraries (DL) and open access (open content, open communication) in scholarly communication. Understanding DLs as both social and technological constructs is important because most scholarly communication is mediated through some flavor of DL. Knowledge about open access (and open content, open communication) is critical because, as an actor in the web of scholarly communication, the concept of openness as related to content and access seems to be influencing and shifting the research focus of many disciplines.

Internet Archive to build alternative to Google

| Permalink

From Internet Archive to build alternative to Google:

Excerpts:
Ten major international libraries have agreed to combine their digitised book collections into a free text-based archive hosted online by the not-for-profit Internet Archive. All content digitised and held in the text archive will be freely available to online users.

Two major US libraries have agreed to join the scheme: Carnegie Mellon University library and The Library of Congress have committed their Million Book Project and American Memory Projects, respectively, to the text archive. The projects both provide access to digitised collections.

The Canadian universities of Toronto, Ottawa and McMaster have agreed to add their collections, as have China's Zhejiang University, the Indian Institute of Science, the European Archives and Bibliotheca Alexandrina in Egypt.

The magic that makes Google tick - a little Google arrogance?

| Permalink

The magic that makes Google tick is an article worth reading if you are interested in learning how things work behind the scenes before and after you type your query into Google's search box.

Among the well-said things in the article, here is a quote that strikes me as a bit sarcastic and arrogant:

The job is not helped by the nature of the Web. "In academia," said Hölzle, "the information retrieval field has been around for years, but that is for books in libraries. On the Web, content is not nicely written -- there are many different grades of quality."

Surely Google has made a lot of progress in applying IR knowledge to a very practical problem, but isn't it a show of arrogance to claim that academia has not helped (directly or indirectly) Google with their search technology?

In A prologue in form of a dialog between a Student and his (somewhat) Socratic Professor, Latour presents some basic but very important ideas, and clarifies some misconceptions and misunderstandings about what actor-network theory (ANT) is and is not, and what it can and cannot do for you.

The dialog is philosophical at times and brings forth challenges for all who deal with actor-network theory in some shape or form. Latour does not seem to answer the question posited at the beginning about what actor-network theory can do for you, but he certainly tells you what it cannot do and what it is not.

In any case, whether or not you agree with Latour's take on what actor-network theory should be and what it seems to have become, this reading will certainly clarify and reinforce your way of thinking about this theory and methodology.

The Role of RSS in Science Publishing

| Permalink

December's issue of D-Lib Magazine brings an interesting article regarding the implications of RSS for science and research publishing. The Role of RSS in Science Publishing is worth reading. It is yet another practical example of how blogs have brought forth a tool that can change the nature of the web as we traditionally know it. Websites are no longer static domains; RSS helps sites be distributed widely and, most importantly, function as two-way communication.

The following few paragraphs were prompted by a discussion with a colleague of mine about the philosophical links to/from information science.

Well, I think that any practical discipline or field of study is definitely informed by some philosophical discourse, even when the discipline itself does not acknowledge it or does not seem to see it. In this sense, the field of Information Science(s)/Studies seems to lack an acknowledged philosophical grounding, even though there are some obvious links to philosophical discourse. Many books and articles on information science do not emphasize the philosophical links (or do so scantily, superficially and individualistically), or simply start with practical issues, as if the phenomena treated by information science became part of the discourse just like that. Part of the phenomena treated by a discipline or field of study do emerge from practical problems; however, we should not neglect the phenomena that could arise from the philosophical discourse. The philosophical link might not be an obvious one, or it might not seem a valuable enterprise worth researching, in which case what would be the point in pursuing such a link for scholarly work? Still, there could well be very beneficial links.

Understanding the philosophical fundamentals/groundings that have informed and are informing information science/studies (implicitly or explicitly) might lead to a better understanding of the common elements that give rise to (or are constitutive of) the phenomena treated by information science, and thus might provide us with a more coherent framework to treat such phenomena... to be continued...

SCIENTISTS, CONSIDER WHERE YOU PUBLISH

| Permalink | 1 Comment

SCIENTISTS, CONSIDER WHERE YOU PUBLISH raises challenging issues every author of research papers should start thinking about. It is no longer safe to assume that the most prestigious journals are the best venue to publish your research. So what if you have published in a prestigious peer-reviewed journal and not many people can read what you have written due to its subscription cost? How long can this continue? Could this provide some incentive for scholars to publish in open access journals? What then? It is quite possible that articles published in open access journals might be able to shift the focus of a discipline or a field of study because of their wider availability and accessibility.

Excerpt from the above mentioned article:
For scientists, publishing a paper in a respected peer-reviewed journal marks the culmination of successful research. But some of the most prestigious and sought-after journals are so costly to access that a growing number of academic libraries can't afford to subscribe. Before submitting your next manuscript, consider a journal's access policy alongside its prestige - and weigh the implications of publishing in such costly periodicals. Two distinct problems continue to plague scientific publishing. First, institutional journal subscription costs are skyrocketing so fast that they outstrip the ability of many libraries to pay, threatening to sever scientists from the literature. Second, the taxpaying public funds a terrific amount of research in this country, and with few exceptions, can't access any of it. These problems share a common root - paid access to the scientific literature.

Open Source Software and Libraries Bibliography

| Permalink | 1 Comment

Open Source Software and Libraries Bibliography

An interesting and very extensive bibliography on open source and digital libraries. A great resource!

How to smash a home computer

| Permalink

How to smash a home computer:

This is just funny! It is very revealing, though: despite the problems with IT, it shows that human actions and social contexts are the main culprits for data loss.

About the Potential of E-democracy

| Permalink

Very interesting thoughts and ideas. Certainly, technology has been a great source of change in the past; maybe today's technologies that embody the concept of openness could initiate another socio-economic and political change across the globe.

About the Potential of E-democracy

Abstract
This paper develops a reflection on the potential of E-democracy to strengthen society's democratization exploring historically and technically the possibilities of cooperative organizations. From Singer's historical view about the rise of capitalism it is conjectured that Internet and E-democracy could be the technological innovations capable to trigger off the creation of a virtual network of cooperative organizations and thereby the development of a new economic system, based more on humanitarian values than the present ones.

Is Open Source the new cell phone?

| Permalink

From Is Open Source the new cell phone?:

Excerpt:
Or Internet? Or Operating System?

Flash forward to 25 years from now – will we look back in disbelief at a time when people didn't completely trust Open Source? When all of the dominant technologies in our lives are built on Open Source models (if they aren't already) what will the history books say about the slow adoption rates of Open Source at the turn of the century? The answer won't be available for some time, but what we can do is examine the question.

Results from a survey conducted by VA Software Corporation (NASDAQ: LNUX) has revealed that executive resistance to Open Source may be hindering greater adoption of Open Source development methods for internal software development. As a result, many enterprises are failing to capitalize on the benefits of Open Source development processes and techniques.

presenting at ASIS&T 2004

| Permalink

Whoever is reading this, just to let you know that I will be presenting at the ASIS&T 2004 Annual Meeting, "Managing and Enhancing Information: Cultures and Conflicts" (ASIS&T AM 04), in Providence, RI, on November 16th, 2004, at 5:30-7:00 pm.

As a part of a panel titled Diffusion of Knowledge in the Field of Digital Library Development: How is the Field Shaped by Visionaries, Engineers, and Pragmatists?, I’ll be “theorizing on the implication of open source software in the development of digital libraries”.

Will you be there?

Panel Abstract:
“Digital library development is a field moving from diversity and experimentation to isomorphism and homogenization. As yet characterized by a high degree of uncertainty and new entrants in the field, who serve as sources of innovation and variation, they are seeking to overcome the liability of newness by imitating established practices. The intention of this panel is to use this general framework, to comment on the channels for diffusion of knowledge, especially technology, in the area of digital library development. It will examine how different communities of practice are involved in shaping the process and networks for diffusion of knowledge within and among these communities, and aspects of digital library development in an emerging area of institutional operation in the existing library institutions and the specialty of digital librarianship. Within a general framework of the sociology of culture, the panelists will focus on the following broader issues including the engagement of scholarly networks and the cultures of computer science and library and information science fields in the development process and innovation in the field; involvement of the marketplace; institutional resistance and change; the emerging standards and standards work; the channels of transmission from theory to application; and, what 'commons' exist for the practitioners and those engaged with the theoretical and technology development field. The panelists will reflect on these processes through an empirical study of the diffusion of knowledge, theorizing on the implication of open source software in the development of digital libraries, and the standardization of institutional processes through the effect of metadata and Open Archive Initiative adoption.

The panel is sponsored by SIG/HFIS and SIG/DL”

Educationists Hail Open Source

| Permalink

From Educationists Hail Open Source:

"There is a growing belief that the wide-ranging benefits of ICT can be delivered to Africa's tertiary education sector only through the strategic adoption of open standards, free and open source software, and open content."

To the list I would also add open communication as an enabling process. Also, the above is true not only for Africa but for educational systems throughout the rest of the world as well.

Richard Stallman on The great divide between free and open source software:

Without these freedoms, using software presents people with ethical dilemmas. If a neighbour sees you running a program, realises it would be useful and asks for a copy, what do you do? If the program isn’t free, you have to choose between two evils: either be a bad neighbour by not helping, or violate the software licence. The latter is the lesser evil, he argued, because the organisation supplying the software has already done something bad to you by supplying proprietary software, but you would still be going back on your promise. Furthermore, you are spreading more copies of non-free software that will present a similar dilemma to the recipients. The answer, said Stallman, is to only use free software.

BBC launches open-source video technology

| Permalink

From BBC launches open-source video technology:

The corporation has gone to great lengths to avoid any patent problems, and has used tried and tested techniques that have prior art. "We are reviewing the literature and will code round the problems as they arise."

To protect the software and the techniques used to develop it, the BBC has taken out its own defensive patents, said Davies, and is releasing the software under the Mozilla licence to ensure "that those patents are licensed for free, irrevocably, for ever."

The terms of the licence mean that Dirac could be used in open source software, said Davies, or in proprietary software in such a way that the company producing that software would not have to divulge their source code.

This is great news! Needless to say, this means fewer restrictions for innovation and development of new ideas and tools. The resulting ripple effect could encourage more open video communication because independent video producers will not have to carry the cost burden of their tools.

Open Source and Open Standards

| Permalink

Open Source and Open Standards provides a brief 'compare and contrast' of open source and open standards, and the pros and cons associated with each concept and its practical implementations.

From PC Pro: News: UN body promises greater recognition for open source licencing:

The World Intellectual Property Organization (WIPO) is promising greater recognition of Free and Open Source software licensing in a bid to balance the needs of copyright owners and the public.
A group of Non-Governmental Organisations led by the Consumer Project on Technology (CPTech) successfully lobbied WIPO in its 'Geneva Declaration', resulting in a 'development agenda' that includes alternatives such as the GPL.
...
The group had also spent some time documenting WIPO meetings in order for the public to be better informed of the trademark, copyright, and patent policies being adopted that affect their every day lives.

Genome Model Applied to Software

| Permalink

Genome Model Applied to Software:

Open-source developers attempting to reverse-engineer the mysteries of private networking software turn to genomics research. They're applying algorithms developed by biologists to decipher the secrets of closed networks.
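
To give a flavor of the technique (my own illustrative sketch, not the project's actual code): the dynamic-programming alignment biologists use on DNA, Needleman-Wunsch, can just as well score the similarity of two captured protocol messages, where a high score with matches concentrated at the front hints at a fixed header followed by variable fields.

    # Illustrative sketch only: Needleman-Wunsch global alignment,
    # the dynamic program biologists use on DNA sequences, applied
    # to two made-up messages from a hypothetical closed protocol.
    def align_score(a, b, match=1, mismatch=-1, gap=-1):
        n, m = len(a), len(b)
        # score[i][j] = best score aligning a[:i] against b[:j]
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                step = match if a[i - 1] == b[j - 1] else mismatch
                score[i][j] = max(score[i - 1][j - 1] + step,
                                  score[i - 1][j] + gap,
                                  score[i][j - 1] + gap)
        return score[n][m]

    # The shared "MSG|" prefix scores well; the payloads diverge.
    print(align_score(b"MSG|01|hello", b"MSG|02|world"))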

Do Open Access Articles Have a Greater Research Impact?

| Permalink

This paper (Do Open Access Articles Have a Greater Research Impact?) reports the finding that "freely available articles do have a greater research impact. Shedding light on this category of open access reveals that scholars in diverse disciplines are both adopting open access practices and being rewarded for it."

The findings of this paper confirm what seems to be an obvious argument: the more openly accessible articles are, the more they will be used, and thus the greater the impact they ought to have on research and practice.

An additional question that needs to be addressed in this context is the overall impact of articles published in open access journals. It is quite possible that articles published in open access journals might be able to shift the focus of a discipline or a field of study because of their wider availability and accessibility.

Political Agnosticism Open Source, Politics of Contrast

| Permalink

Political Agnosticism Open Source, Politics of Contrast is a MUST-read article on the socio-economic, political and legal issues regarding the concept of openness when viewed through the 'open source' prism, and its interrelatedness with innovation, creativity, and free speech.

Excerpts:
FOSS, of course, beholds a complex political life despite the lack of political intention; nonetheless, I argue that the political agnosticism of FOSS shapes the expressive life and force of its informal politics.

FOSS gives palpable voice to the growing fault lines between expressive and intellectual property rights, especially in the context of digital technologies. While free speech and property rights are often imagined as linked and essential parts of our American liberal heritage, the social life of FOSS complicates this connection while providing a window into how liberal values such as free speech take on specific forms through cultural-based technical practice: that of computer hacking.
...
The technological potential for unlimited programmable capabilities melds with what is seen as the expansive ability for programmers to create. For programmers, computing in a dual sense, as a technology and as an activity, becomes a total realm for the freedom of creation and expression.

In essence, computing is understood and experienced (sometimes reflectively, other times implicitly) by FOSS hackers as the very micro-sphere for the unfettered circulation of thought, expression, and action that freedom within the macro-sphere FOSS seeks to achieve through licenses.

phd weblogs

| Permalink

From apophenia: phd weblogs:

"I just revisited phd weblogs which is a collection of PhD students blogging. There are only 170 of us on there and i know that there are a whole lot more. So, if you're an academic blogger and you're reading this, add yourself there. And tell your friends. It's really fun to surf and find out what other folks are researching."

"Oh, and it's a great way of procrastinating when you've read PhD comics so many time that you have half of them memorized."

open source coming to hardware

| Permalink

Open source coming to hardware:

"Can the open-source model be extended beyond software? It already has. In speaking today with Indian scholar Deepak Phatak, I learned about the "Simputer," introduced in 1998 and licensed under the Simputer General Public License, an open-source license developed for hardware."

Why The Open-Source Model Can Work In India

| Permalink

The following article (Why The Open-Source Model Can Work In India) presents an interesting viewpoint about the coexistence of proprietary and open source software. Note the "j-factor" and "g-factor".

In fact, Phatak thinks U.S. programmers' open-source approach has changed the world. "Americans may not realize this, but the [general public license] is one of their greatest contributions to the world," he says, explaining that the GPL allows open-source software to coexist with proprietary software.

He considers the coexistence crucial. "The whole world can't depend on open source," the scholar acknowledges. Moving forward, the software world will consist of both those who develop proprietary code and those who develop open-source code. The success of this model depends upon two things--what he calls the "g-factor" and the "j-factor."

"Proprietary vendors should avoid the g-factor and not become too greedy, otherwise people will choose open source," Phatak says. "And open-source developers should avoid the j-factor and not become jealous that someone else might be profiting from their work. They should be delighted that people are using it."

technology doesn't make moral choices, humans do

| Permalink

From Judges leave technology's moral choices to humans:

Excerpt:
The court's decision doesn't condone the theft of copyrighted material. That is wrong and will always remain so. Peer-to-peer networks have other uses, however, particularly for the many lesser-known bands, artists and filmmakers that embrace file-sharing for its distribution power.

The court's ruling rightfully recognizes that technology doesn't make moral choices, humans do.

Searching for work: The challenges and concerns

| Permalink | 5 Comments
Current situation
As I mentioned in my previous blog entry regarding my current status as a Ph.D. student, I just finished the coursework for my doctoral studies in Information Science (minor in Media Studies) at SCILS – Rutgers University. My plan going forward is to start a full-time job and continue working on my qualifying exam and dissertation on a part-time basis. I would love to concentrate full time on the rest of my Ph.D. studies; however, my personal situation does not give me the luxury to do so.
Thus, by April of this year, just before the semester was about to finish, I started looking very actively for a full-time job (including consulting, contracting, or short-term projects). It is almost the end of the summer and I'm still looking. I've had a number of interviews, but I was not prepared to hear the reasons why I didn't get the jobs that I thought, and still think, I was a good match for.

In what follows, I will try to describe my experiences with the search process. I would appreciate any comments from readers with similar experiences and challenges, as well as from those who care to share advice or point to a resource helpful to individuals in similar situations.

The two sides that ought to meet (and do meet) – but recruiters and HR staff don't seem to see how, where, what and why
Before I continue, here is a short description of my industry experience as well as my recent academic training as part of my Ph.D. studies. I have extensive experience as an information systems analyst / engineer / architect, working with various systems primarily in the telecommunication industry [more detailed description]. My deliverables have usually consisted of specifications and requirements, written in the form of requirement documents used as inputs to the development and testing teams, as well as architecture slides. In addition, the systems analyst position most often requires one to act as a facilitator between the business / user side and the technical side of the product development lifecycle. This facilitator role usually requires understanding the 'big picture' in order to better assess the feasibility and deliverability of a product or its subcomponents (features, functionalities) in line with the business needs and tasks. As far as my academic experience is concerned, my interests have revolved around the interplay between information systems/structures and the social structures within which information systems are embedded. More specifically, I'm interested in digital libraries, system design, open source software, actor-network theory, the concept of openness, the social construction of IT and IS, etc.

The challenges
I believe there is tremendous and unique value in the conjunction of the industry experience I possess and the academic training I have recently gone through as part of my doctoral coursework. One would think that HR staff and recruiting and consulting companies would be able to see the advantage and leverage such experience coupled with theoretical / academic knowledge. Unfortunately, this has not been the case. What follows are some specific experiences and challenges I have faced in the job searching process:
  1. Overqualification / underqualification. The most frequent comment I hear back from recruiters and HR folks is that I'm overqualified for the types of jobs I used to do before (i.e. systems analyst / engineer / architect), or that I'm not ready yet for the type of jobs that require an earned Ph.D. While I can understand the 'not ready yet' argument, since I'm not done with my Ph.D. yet, it is hard to fathom that more education and more knowledge would be a barrier to finding a job, especially when this education is very closely related to my previous industry experience. Trying to make sense of it, one could argue that companies are afraid to lose individuals who aim at getting their doctoral degrees; this would make sense if one were looking for a permanent position within a firm. But shouldn't companies be less concerned with retention if the job is a consulting / contract position?

  2. The wrong time to be looking for a job. An additional challenge in my case is that this is the wrong time to be looking for a job, since the economy is not what it used to be a few years ago. This is made harder by the fact that HR folks consider this type of change a career change, even though I don't see it that way.

  3. Must have exact experience. Moving to another industry and away from telecommunications in order to expand the possibilities is a real challenge. Companies seem to want exact experience in every sense of the word, including relevant industry experience. Looking at current job requirements makes you wonder who they are writing those requirements for. Most of the time it is not possible to get everything in one individual. I guess companies can do that nowadays, considering the number of people looking for work. It is a manager's job market.

  4. Recruiters and placement agencies unable to link my industry experience and my academic training. More often than not, HR folks concentrate on my work experience as a systems analyst, forgetting the value I bring to the table gained through my coursework as part of my doctoral studies. At the end of my resume I list all of my courses. The following courses (Human Information Behavior, Experiment and Evaluation in Information Systems, Quantitative Research Methods, Qualitative Research Methods, Towards a model of open source digital library system) are directly relevant to the type of work I have done in industry, and yet recruiters and HR folks don't seem to make the link.

  5. Lack of Information Science / Studies job searching websites and tools. What online resources can one turn to for help? Many LIS-related job sites list library jobs but lack substantial listings of Information Science / Studies related job openings.

  6. Lack of recruiting and placement agencies specialized in placing professionals with Information Science / Studies backgrounds. There is a lack of recruiting and placement agencies that understand the potential an Information Science / Studies professional brings to the table. A few agencies I have spoken to have been helpful, but even they are not able to properly articulate the benefits an Information Studies professional can bring to a company.

  7. The industry appears blind to the knowledge and potential that Information Studies / Science professionals can bring to the table. As mentioned above, the lack of helpful tools and of recruiters who understand what an Information Science / Studies education can do is perhaps directly related to what appears to be a blind spot in the industry as far as the abilities and expertise of an Information Science / Studies professional are concerned. More specifically, the people (interviewers, managers, recruiters, etc.) I have spoken to do not seem very aware that system design strategies can be enhanced through various Human Information Behavior studies and thus yield better systems in the long run. This is even more true for systems that directly interface with people and other social structures in the workplace. An effort should be undertaken by Information Science / Studies schools and departments to establish connections between theory and practice, between theoretical knowledge and how it can be utilized in practice.

Needless to say, the above experiences are my own, limited by the information and resources available to me and informed by my previous experiences, my understanding of the current job market, my (un)luck with finding the right recruiters and placement agencies that understand the value of an Information Science / Studies doctoral education, and my limited knowledge of how all these things should work, especially in such tough and volatile times.

Why I wrote this entry
In order to help, share and learn from each other, I would love to hear from others who are in my position or face similar challenges, especially those who have previous industry experience and are currently pursuing a doctoral degree in Information Science / Studies. Even better if I hear from potential employers; here is my resume. :)

(Update on 9/19/2004: I have accepted an offer and will be starting work in a few days.)



Fewer students major in tech reports on the declining number of students entering and graduating from IT-related degree programs, including information science/studies.

"In the University of Pittsburgh's information science program, which combines the study of information technology and how people use it, the number of students majoring has dropped to 200 for this school year, said Bob Perkoski, IS undergraduate program director. Last year, 229 students were majoring in IS and the year before, 260, Mr. Perkoski said."

It is interesting to see the number of graduates in information science/studies declining even as the utilization of information technology around us keeps increasing. This isn't to say that information science/studies professionals are the only graduates/experts who can elucidate the interplay of IT and IS with the social structures within which they are embedded. However, who else is better positioned to study and explicate these relations? Computer science/engineering graduates have traditionally concentrated more on the technology than on its social significance and implications. On the other side, the social sciences do not place enough emphasis on technology as an important determining actor in the complex web of socio-technological interconnections.

Nevertheless, the decline might not have any immediate effects in real life, because in practice it is rarely recognized that information science/studies graduates are the best positioned to deal with the interplay of IT/IS and the relevant social structures.

paper superior to digital technology for archiving

| Permalink

From "Digital Information Will Never Survive by Accident”:

"Beagrie: In the right conditions papyrus or paper can survive by accident or through benign neglect for centuries or in the case of the Dead Sea Scrolls for thousands of years. It takes hundreds of years for languages and handwriting to evolve to the point where only a few specialists can read them.
...
In contrast, digital information will never survive and remain accessible by accident: it requires ongoing active management. The information and the ability to read it can be lost in a few years. Storage media such as paper tape, floppy disks, CD-ROM, DVD evolve and fall out of use rapidly. Digital storage media have relatively short archival life-spans compared to other media. As the volumes, heterogeneity, and complexity of digital information grows this requirement for active management becomes more challenging and more critical to a wider range of organisations."

I already have a problem reading/opening some papers/files that I wrote during my undergrad studies using WordStar (or something similar) in a school computer lab.
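
One small, concrete ingredient of the "ongoing active management" Beagrie describes is fixity checking: record a cryptographic checksum when a file enters the archive and re-verify it periodically, so silent corruption is caught while a good replica still exists. A minimal sketch in Python (the file path is a placeholder):

    # Minimal fixity-check sketch; the path is a placeholder.
    import hashlib

    def checksum(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    recorded = checksum("archive/thesis-1993.txt")  # stored at ingest
    # ... years later, against the (possibly migrated) copy:
    if checksum("archive/thesis-1993.txt") != recorded:
        print("fixity failure: restore this file from a replica")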

Justice is served! Court: Grokster, StreamCast Not Liable

| Permalink

From Court: Grokster, StreamCast Not Liable:

"SAN FRANCISCO - Grokster Ltd. and StreamCast Networks Inc. are not legally responsible for the swapping of copyright content through their file-sharing software, a federal appeals court ruled Thursday in a blow to movie studios and record labels.
...
The panel noted that the software companies simply provided software for individual users to share information over the Internet, regardless of whether that shared information was copyrighted.
...
"The technology has numerous other uses, significantly reducing the distribution costs of public domain and permissively shared art and speech, as well as reducing the centralized control of that distribution," Thomas wrote"

Finally, justice is served!

mind-mapping tool ... use of ANT apparent

| Permalink

A brief overview of Mayomi (an online mind-mapping tool and community) reveals that ANT can be used to analyze and trace the connections between its various elements/actors. As can be observed from the first page, the elements are human and non-human, some task-oriented, others action-oriented, as well as social and information structures, etc., making the tool a good fit for analysis through the ANT framework, unless it was designed and developed based on the ANT framework and methodology in the first place. I hope to write more about this once I use the tool. A similar tool is FreeMind, which I just installed.

Who benefits from the digital divide? is a very informative article regarding the digital divide discourse. One would think that such discourse arises with the aim of helping the people on the have-not side of the digital divide by closing the gap. In this First Monday article, Brendan Luyt shows that the people on the negative side of the digital divide are surely NOT the ones benefiting from the discourse.

"In this article I have described four groups that have an interest in the promotion of the digital divide issue. Information capital achieves a new market for its products as well as an educated workforce capable of producing those products in the first place. The state in the South benefits through the legitimation conferred through programs designed to combat the divide. Not only do these offer new accumulation opportunities for its elite, they also hold the possibility of defusing discontent over poor economic prospects for the middle class, a volatile section of the population. The development industry, suffering from a neo–liberal attack that views development as irrelevant in the modern world, also benefits from the digital divide. Another gap has been opened up that requires the expertise these agencies believe they can provide. And finally, the organs of civil society are also winners, as they attempt to capture information and communication technologies for their own increasingly successful projects."

Paradoxically, the digital divide discourse does not appear to be helping those it is supposed to help.

In The 'digital divide' and the rest of the population & the digital divide: more than a technological issue, I have tried to show that the digital divide discourse might even further increase the existing digital divide gap.

Culture of secrecy hinders Africa's information society covers a few interesting ways mobile telephone technology is being used in Africa. It is evident in the article that the use of mobile technology is being redefined and continually socially constructed by the social and monetary resources available.

Among the other interesting paragraphs, this one is really revealing:

"The worst thing is that it is a short step from a culture of withholding information to that of becoming information-blind. In other words, when we keep on withholding information, we end up being unable to produce information. We lose the culture of surveying, assessing, classifying – in brief, collecting as much information as possible and storing it in a standardized manner, making it available for use, not only to cater for current specific needs, but also for potential and future ones."

Along the lines of this article's argument, one can also explain why text messaging in the US lags behind Europe and Asia. Most cell/mobile phone service plans in the US come with a certain amount of 'free' minutes included. So, if you have free minutes, you use them before sending any text messages; it also doesn't help that the mobile devices on the US market are less text-messaging friendly. In contrast, in Europe you pay for each minute you talk, and you use text messaging because it is cheaper than talking; thus the social co-construction of the mobile telephony service, the technology, and its use.

open source for hardware

| Permalink

The following article, Try open source for hardware, is a clear explanation of the potential benefits of applying open source principles to hardware. While we see open source hardware implemented in various PC technologies (via open protocols and open standard interfaces), the printer and printing industry is not there yet. The article clearly articulates the benefit to consumers if printer cartridges were made standard across vendors: it should drive the crazy cartridge prices down.

Rights Management and Digital Library Requirements

| Permalink

From Rights Management and Digital Library Requirements:

Introduction

It is common to hear members of the digital library community debating the relative merits of the two most common rights expression languages (RELs) - the Open Digital Rights Language (ODRL) and the rights language developed for the Motion Picture Expert Group (MPEG) and recently adopted by the International Organization for Standardization [1] - and which is preferable for digital library systems. Such debates are, in my opinion, premature and should be postponed until this community has developed a clear set of requirements for rights management in its environment, including rights expression, the encoding of license terms, and file protection.

This article is intended to provoke discussion of those requirements, and it attempts to do so by illustrating aspects of the current developments in rights management that may be problematic for digital libraries. This does not mean that the digital library community will need to develop its own rights language and rights management solution, separate from the existing standards in this area. It means that at this moment in time we do not have sufficient information about our own rights management needs to evaluate any particular solution nor to negotiate for extensions to accommodate digital library functionality.

States Warn File-Sharing Networks quotes the attorneys general of 40 US states as saying:

"In a letter to the heads of Kazaa, Grokster, BearShare, Blubster, eDonkey2000, LimeWire and Streamcast Networks, the attorneys general write that peer-to-peer (P2P) software "has too many times been hijacked by those who use it for illegal purposes to which the vast majority of our consumers do not wish to be exposed.""

There is no doubt that P2P networks are used, in part, for the distribution of copyrighted material. However, the argument that they could be shut down because they are also used to distribute copyrighted material stands on shaky ground.

Here are some issues with the argument:
- Why stop with P2P networks and P2P software? How about the Internet as the enabler of P2P activities?
- P2P networks are also used by independent artists and other activists to distribute various materials without any copyright infringement.
- Nobody seems to have a problem with physical CDs, video tapes, DVDs and other carrier technologies (including roads and highways) as enablers that carry content (copyrighted or otherwise) from point A to point B.

So, the issue of how to deal with the distribution of copyrighted materials should be looked at from a different perspective. I think it is more a social issue than a technological one. P2P technology is an innovative way of distributing content, and it would be very sad if it were destroyed because some people decide to use it in a manner contrary to the pertinent laws.

ph.d. status update (Summer 2004)

| Permalink

Well, with the end of this past semester (i.e. Spring 2004), I completed the required coursework for my Ph.D. in Information Science (with a minor in Media Studies).

Right now I'm concentrating my efforts on getting ready for my qualifying exam and trying to dig further and deeper into the issues I would like to treat in my dissertation (obviously, the proposal comes first :)).

Not sure if I'll be taking the qualifying exam this coming semester (Fall 2004). If not, my plan is to take the exam in early Spring 2005.

In the meantime: lots of reading, reflecting, mental summarizing, re-reading articles that I read early in my coursework, making new connections between theory and practice, and further trying to understand the explicit and implicit theories and frameworks that guide my thinking and research pertinent to my Ph.D. studies.

In P2P TV - How Independent News Video Producers Will Bypass The Mainstream TV Networks, Robin Good brings forth an interesting and almost self-evident argument about the potential of P2P TV to empower the masses by bypassing the mainstream TV networks.

To further support this position, here are some thoughts built upon Gitlin's (1980), Schiller's (1996), Streeter's (1996) and Fiske's (1996) arguments, emphasizing that open communication (i.e. many-to-many) is the technology that can liberate us from the central grip of the way things have been set up so far.

Evident from Gitlin’s and Schiller’s arguments is their emphasis on the necessity of free and open communication among the masses if there is to be any deliverance from the ‘claws’ of the media. It is, by contrast, one-way communication (radio, TV, cable) that the elites utilize to achieve subordination and the dissemination of the hegemonic ideology. Fiske’s technologised surveillance of the physical goes hand-in-hand with surveillance of the discourse (what issues are raised on TV, radio, etc.) “because unequal access to those technologies ensures their use in promoting similar power-block interests" (Fiske 1996, p. 218). The important point brought forth here, directly or indirectly, is the identification of the closed, unidirectional (with the masses on the receiving end) and restricted-access character of communication technology.

These aspects are identified as necessary characteristics for the maintenance and reproduction of the hegemonic ideology, enabling the elites to set the form, format and content of the public discourse (broadcasting, TV, radio, press, etc.) and, just as importantly, to decide who can participate. Therefore, it can be argued that this manifestation of communication technologies, entangled in the web of one-way communication and used by the elites for power control and for disseminating material in support of the hegemonic ideology, has shaped traditional scholarly and public discourse, as well as practical use, to view communication technology as intrinsically embedded with features, characteristics and functionalities that reinforce and aid the hegemonic ideology.

This biased view, that communication technologies are inherently suited to help media control, is troublesome and factually wrong. For example, the scholarly and public discourse on early cable technology shows that cable access was intended for uses quite unlike those it is put to today (disseminating popular consumer culture through its various formats with the aim of making a profit). Streeter (1996) argues that cable "had the potential to rehumanize a dehumanized society, to eliminate the existing bureaucratic restrictions of government regulation common to the industrial world, and to empower the currently powerless public" (Streeter 1996, p. 228). He further notes that the cable system had the potential to enable two-way communication and interactivity, but apparently failed to do so due to the lack of social response on the part of the audience: "Cable television was something that could have an important impact upon society, and it thus called for a response on the part of society; it was something to which society could respond and act upon, but that was itself outside society" (Streeter 1996, p. 225). He then adds that cable should not be viewed as an "autonomous entity that had simply appeared on the scene as the result of scientific and technical research" (Streeter 1996, p. 225). Here we see a distinction between the current social status of cable as profit-making machinery and its potential to have become a socially responsible technology that would have empowered the audience with two-way open communication.

Refs:
Fiske, J. (1996). Media Matters: Race and Gender in U.S. Politics. Minneapolis: University of Minnesota Press.

Gitlin, T. (1980). Chapter 10, "Media Routines and Political Crises." In Gitlin, The Whole World is Watching (pp. 249-269). Berkeley: University of California Press.

Schiller, H. I. (1996). Information Inequality: The Deepening Social Crisis in America. New York - London: Routledge.

Streeter, T. (1996). Selling the Air: A Critique of the Policy of Commercial Broadcasting in the United States. Chicago: University of Chicago Press.

quantum information science

| Permalink

The following article, Rules for a Complex Quantum World: An exciting new fundamental discipline of research combines information science and quantum mechanics, presents a fundamentally new way of looking at information science. As a framework in the making, it builds upon Shannon's information theory and Buckland's "information-as-thing", as well as quantum physics. This approach appears closer to physics than contemporary information science studies, which deal with information primarily from the meaning-making viewpoint.

Could this lay the groundwork for a unified theory of information?

The 'digital divide' and the rest of the population

| Permalink

It seems as if the discourse around reducing or eliminating the 'digital divide' has become something of a fashion and a trend. Apparent from the discourse, and from the various initiatives aimed at narrowing the gap between the digital haves and have-nots, are the forgotten ones: the portion of the population in any society (country, region, etc.) that will probably never get online, for a variety of reasons.

The aim of the Maltese government, as expressed in the following article (New IT strategy launched to eliminate digital division), is a genuine one, with the necessary inclusion of relevant civic organizations alongside government and corporate organizations: "The Prime Minister and Minister explained that this strategy came about through a wide process of consultation following the setting up of National Council for Information Society (NISCO) which is made up of the governments, unions, political parties, members of civic society and industrial organizations and technology". Still, there is a real concern that the digital divide might widen even further if all efforts shift towards the 'digital realm' while attention to the 'non-digital realm' is reduced.

Considering that a portion of the population will never catch the digital train, an ever-greater emphasis on the 'digital realm' will disenfranchise a great many people. It is all well and good to want everyone on the digital train; serving the public might become more efficient. However, it should not be forgotten that many people will not catch the digital train in their lifetime, and they should not suffer because of that. Imagine going to a government office and being told that you have to navigate a complex computerized menu system to obtain certain information, when you have never touched a computer in your life, or only know how to send e-mail.

Hidden costs of open source

| Permalink

Upon reading Hidden costs of open source, one starts wondering what 'hidden costs' the article is insinuating. The author suggests that the cost associated with learning how to use (install, maintain, and run) a particular piece of software is a hidden cost.

"There we are. Cost again. If it's so easy to use and it is reliable (one assumes it's reliable since apparently Nasa is using it to run mission critical applications, although that would put me off becoming an astronaut), why am I asked to shell out $1,500 for entry-level support? And support costs can go as high as $62,400 - hardly a cheap option."

But this is nothing new with either commercial packages or open source software. Using any complicated software requires learning and maintenance, independently of whether it is closed or open source. The expense of learning and maintenance hardly qualifies as a 'hidden cost'. And guess what: you don't have to buy support from the actual developers of the open source package. You can learn it on your own and do it yourself, or hire other competitive training and support consultants. Sometimes you wonder why such an article is even published as a serious discussion point. Hmm…

the social construction of Unix, C, and Linux

| Permalink

From Unix's founding fathers:

"It is that interplay between the technical and the social that gives both C and Unix their legendary status. Programmers love them because they are powerful, and they are powerful because programmers love them. David Gelernter, a computer scientist at Yale, perhaps put it best when he said, “Beauty is more important in computing than anywhere else in technology because software is so complicated. Beauty is the ultimate defence against complexity.” Dr Ritchie's creations are indeed beautiful examples of that most modern of art forms."

My emphasis in bold; couldn't have said it better. After all, we knew that coders and programmers are not "lone scientists". :)

finding open source code

| Permalink

From IST Results - Swift searching for open source:

Excerpt:
Finding the open source code you need can often seem like searching for a needle in a haystack. But with the development of the AMOS search engine finding your way through today’s maze of software code has just become considerably easier.
Aimed at programmers and system integrators but with the potential to be used by a broader public, the AMOS system applies a simple ontology and a dictionary of potential search terms to find software code, packages of code and code artefacts rapidly and efficiently. In turn it assists open source program development through making the building blocks of applications easier to find and re-use.

introducing the Common Information Environment

| Permalink

From Towards the Digital Aquifer: introducing the Common Information Environment:

Excerpts:
Google [1] is great. Personally, I use it every day, and it is undeniably extremely good at finding stuff in the largely unstructured chaos that is the public Web. However, like most tools, Google cannot do everything. Faced with a focussed request to retrieve richly structured information such as that to be found in the databases of our Memory Institutions [2], hospitals, schools, colleges or universities, Google and others among the current generation of Internet search engines struggle. What little information they manage to retrieve from these repositories is buried among thousands or millions of hits from sources with widely varying degrees of accuracy, authority, relevance and appropriateness.
...
This is the problem area in which many organisations find themselves, and there is a growing recognition that the problems are bigger than any one organisation or sector, and that the best solutions will be collaborative and cross-cutting; that they will be common and shared. The Common Information Environment (CIE) [3] is the umbrella under which a growing number of organisations are working towards a shared understanding and shared solutions.

socio-political and economical twist to open source

| Permalink

Personal view: Open source may be next business revolution reviews the new book "The Success of Open Source" by Steven Weber, a professor of political science at the University of California at Berkeley.

I have not read the book yet, but judging from the article it seems like interesting reading. Here are some excerpts:

"His claim, and it's a bold one, is that this isn't just a good way of developing software, it's a new way of organising businesses. Open-source software breaks the links between developing a product and owning a product, which is the way business has traditionally organised itself. That could have startling consequences.
It's rare to find a professor of politics discussing software. "People in academic subjects are very conservative about their disciplines," Weber says. "So people are intrigued, but also a little bit nervous about an approach like this."

"Think back to the invention of the steam engine. By the standards of the time, building a railway was so complicated and so costly that none of the existing organisational forms could handle it. So the joint-stock company and the stock exchange rose to prominence. Something similar may be happening now."

accessing the "collective intelligence"

| Permalink

Commenting on George Por's article, Steven Cohen discusses the value of blogging and other tools supporting collaboration in building a collective intelligence.

While we have many blogging and other social software tools that enable the 'creation' of the collective, how do we harness the "collective intelligence" once it is 'there'/'built'? It would seem that other tools are needed to enable quick and relevant utilization of the collective intelligence. So far, blogging tools have done a great job of enabling the representation of the collective intelligence. What they lack is the function of enabling the utilization of the available collective knowledge.

It seems that the next wave of social networking and collaboration tools will (or should) concentrate more on finding relevant and appropriate 'intelligence' somewhere in the collective pool. Needless to say, search engines are not well suited for this type of activity, since they concentrate primarily on topical relevance and do little to nothing about spatial, temporal, methodological, contextual, process- and task-specific relevance.

Alan Kay's food for thought regarding personal computing

| Permalink

Alan Kay's food for thought as reported in A PC Pioneer Decries the State of Computing, regarding personal computing:

But I was struck most by how much he thinks we haven't yet done. "We're running on fumes technologically today," he says. "The sad truth is that 20 years or so of commercialization have almost completely missed the point of what personal computing is about."

But what about all those great things he invented? Aren't we getting any mileage from all that? Not nearly enough, Kay believes. For him, computers should be tools for creativity and learning, and they are falling short. At Xerox PARC the aim of much of Kay's research was to develop systems to aid in education. But business, instead, has been the primary user of personal computers since their invention. And business, he says, "is basically not interested in creative uses for computers."

Note the emphasis that computers could and should have been used more for creative processes and learning. The potential is there; however, the social construction of computing technologies has mostly been led by commercial goals. Thus, the interplay of computing technology and social structures has mostly served commercial interests, and far less the potential for creativity, invention and innovation.

The question then arises: how do we get to more creative use of technology for learning and novel kinds of innovation? Open source computing, perhaps, where computing tools are geared more towards learning and act as stimuli for creative innovation. But then, anything creative that can make money is imprisoned within the commercial realm and loses its potential for learning and creativity. A way needs to be found such that creativity is left to bloom within its own realm, free from commercialization. Proprietary software, developed in a closed environment, is responsible for slowing down innovation and creativity. I would say: the way forward is open computing…

The Role of Children in the Design of New Technology

| Permalink

The Role of Children in the Design of New Technology

Abstract:
Children play games, chat with friends, tell stories, study history or math, and today this can all be done supported by new technologies. From the Internet to multimedia authoring tools, technology is changing the way children live and learn. As these new technologies become ever more critical to our children’s lives, we need to be sure these technologies support children in ways that make sense for them as young learners, explorers, and avid technology users. This may seem of obvious importance, because for almost 20 years the HCI community has pursued new ways to understand users of technology. However, with children as users, it has been difficult to bring them into the design process. Children go to school for most of their days; there are existing power structures, biases, and assumptions between adults and children to get beyond; and children, especially young ones have difficulty in verbalizing their thoughts. For all of these reasons, a child’s role in the design of new technology has historically been minimized. Based upon a survey of the literature and my own research experiences with children, this paper defines a framework for understanding the various roles children can have in the design process, and how these roles can impact technologies that are created.
(Full Paper in PDF)

open access a danger to professional societies?

| Permalink | 1 Comment

This is a follow-up to my previous entry (A shift in scholarly attention? From commercial publishing to open access publishing) prompted by Open Access? Some Sparks Fly at ALA. (thanks to Open Access News).

In the article, IEEE's Durniak makes the following unsubstantiated statement: "Free open access runs the risk of destroying professional societies."

One can do an extensive analysis to show that the above statement is not necessarily true. However, it suffices to note that commercial publishers are only one of the actors in the scholarly publishing cycle. As such, the totality of the functions performed by the commercial publishers can definitely be taken over by the professional societies themselves, or perhaps by a non-profit umbrella organization that would deal with scholarly publishing for various professional societies.

It is really unprecedented and uncalled for when commercial publishers claim that without them the entire scholarly publication process would fail and professional societies would be destroyed. It is true that commercial publishers provide value-added services. However, none of these value-added services are beyond the competency of the professional societies themselves, especially with all the open source software available. Even if professional societies had to hire IT staff to maintain the process, it would still be less costly than what host institutions now pay to buy back the intellectual output of their own staff.

Sooner or later, the commercial publishers will have to relax a bit and see how they can honestly contribute to the process of moving to open access. Their stakeholders might not be happy, but, hey, the dynamic is changing and the power base is shifting.

Can the argument for why the publishing of scholarly work should not be in the hands of commercial entities get any clearer than this? From A Quiet Revolt Puts Costly Journals on Web:

"Elsevier doesn't write a single article," said Dr. Lawrence H. Pitts, a neurosurgeon at the University of California at San Francisco and chairman of the faculty senate of the 10-campus system. "Faculty write the articles for them, faculty review the articles for them and faculty mostly edit the journals for them, and then we get to buy the journals back from a company that makes a very large profit."

It appears that the players in the process of scholarly publishing (scholars, editors, publishers, etc.) are well aware that the current (i.e. commercial) publishing process will not be sustainable for long. Fueled by the openness of the Internet, scholars and academics have the necessary technology and expertise to publish without the involvement of commercial entities. The money that today is taken as profit by the commercial entities could instead be used for further research and academic pursuits.

In the inevitable move from commercial publishing to open access, the entire dynamic of the publishing process will undoubtedly change. But change is not bad. A lot of realignments will occur. The moment established scholars start publishing in open access publications, the tide will turn.

Or, if there is resistance, the problems addressed by a certain field or discipline might shift towards those addressed in the open access journals, due to their wider distribution and open access. The move towards open access publishing might thus even realign the types of problems addressed by a given scholarly community.

An important analysis in this respect is presented by Kling and Covi (1995). It suggests that the medium of information transfer and exchange (paper vs. electronic) might induce a shift in the scholarly discourse of a particular discipline. They argue that the highest-status scientists usually publish in well-established journals, which at the same time usually define the scope and the problems of the field (Kling & Covi, p. 10). Scientists and scholars with a status just below the highest are then likely to publish in an e-journal (usually open access) because of its speed of distribution and the visibility that comes with a very large readership (Kling & Covi, p. 10). If enough second-tier scientists start publishing in e-journals, then sooner or later the interests and problems treated in those e-journals might, for a particular discipline, shift away from the problems treated in the paper journals, while the e-journals gain legitimacy and a perception of good quality. This would also mean that the medium is the message (in McLuhan's sense), where the medium appears to shift the scholarly discourse of a field/discipline.

Kling, R. and Covi, L. M. (1995). Electronic Journals and Legitimate Media in the Systems of Scholarly Communication. The Information Society, 11(4), 261-271. (Accessed at: http://www.slis.indiana.edu/TIS/articles/klingej2.html)

E-voting: Nightmare or actual democracy?

| Permalink

The public domain discourse surrounding e-voting is very perplexing. Similarly to other articles, E-voting: Nightmare or nirvana? questions the security of e-voting systems and their viability for use in real elections.

"Once the province of a small group of election officials and equipment sellers, e-voting has exploded into the popular consciousness because of a spreading controversy over security and verifiability. Thanks to a concerted effort by opponents and to the missteps of voting machine vendor Diebold Election Systems, most of the news has been bad."

I have said this before in a previous entry (secure enough for consumerism, not good enough for voting?!) and here it is again: How is it that we can't trust e-voting security because voting would be done over the Internet, when the same Internet is used for millions of dollars in daily transactions between consumers and companies and business-to-business? The same Internet is secure enough for commerce and can be trusted with billions of dollars. Yet, it is not secure enough for voting?

Secondly, the missteps by Diebold Election Systems, which produces e-voting machines, are curable by using open source e-voting systems that are already in use elsewhere around the world.

Yes, there are potential problems with e-voting systems. These are the same issues that trouble all new technologies during their appropriation by users. However, to claim that these issues are worse than those that troubled, and still trouble, e-commerce systems is absurd.

From Open access jeopardizes academic publishers, Reed chief warns:

"The rise of open access publishing of scientific research could jeopardise the entire academic publishing industry, according to the chief executive of Reed Elsevier, the world's largest publisher of scientific journals."

Something will be jeopardized for certain, but it isn't the academic publishing, it is the commercial publishing. As many open access journals and publishing venues have shown, academic publishing does not have to be commercial publishing.

What is Shareability [of information] theory?

| Permalink

By way of Column Two I came across the following site that defines shareability of information. Here is the definition provided at the above site:

"Shareability refers to the extent to which information is shareable. Information has high shareability if it is easy to share between different individuals without loss of fidelity. Shareability theory (Freyd 1983, 1990, 1993) proposes that internal (e.g. perceptual, emotional, imagistic) information often is qualitatively different from external (e.g. spoken, written) information, and that such internal information is often not particularly shareable. The theory further proposes that the communication process has predictable and systematic effects on the nature of the information representation such that sharing information over time causes knowledge to be re-organized into more consciously available, categorical, and discrete forms of representation, which are more shareable."

The distinction made above between internal and external information sounds almost exactly like the distinction made in the Knowledge Management (KM) discourse between tacit and explicit knowledge. Furthermore, the definition does not seem to make a distinction between information and knowledge, even though such a distinction appears to be very relevant in this context.

Another observation that might further strengthen the above definition, or the theory of shareability, is that it is not knowledge itself that is organizable; rather, it is mostly the representations of explicit knowledge, and to a much lesser degree the representations of tacit knowledge (if at all).

And one more thing, in the spirit of the concept of openness: for something to be shared, it must first be open to change (open content), and access to it must also be open.

Social Issues Surround Social Software

| Permalink | 2 TrackBacks

From Social Issues Surround Social Software:

"While the answer may be elusive, panelists at the Supernova 2004 conference here agreed that the social dynamics around the use of burgeoning collaboration tools such as online social networking services, Weblogs and wikis are often as important as, if not more important than, the technologies themselves."

I would like to make one correction to the above quote: it isn't that social dynamics (and social structures) are often as important; they are always as important, if not more so. And this isn't true only for social software and collaboration tools; it is true for all types of interactive information and communication systems, and for technology in general. Technology meant to aid people's tasks is meant to be used by people in various contexts. As such, the technology by itself cannot deliver the sought-after results. It is the interaction between the technology and the human factors within given social structures and contexts, including the properties of the task, that hopefully produces the desired outcomes.

Once and for all we need to get over the irrational idea that social structures, human actions, and tasks can be bent to fit the technology. Yes, they can, but don't expect the desired results...

As I was reading Wired's article on apolitically encouraging people to vote in the 2004 American presidential election, I kept wondering about the media's role in this process. It is interesting to note that all the national networks and cable channels cover the presidential elections to a great extent, through various debates and candidate coverage.

Sadly though, none of the networks and cable channels tries to drive voter registration so that more voters perform their civic duty. I can't imagine anything wrong with having 70-80% or more of the eligible voters cast their votes.

So, how come then, we do not see an initiative for voter registration by the media?

Is it that the percentage of eligible voters who actually cast their vote has not changed in the past few elections? Could it be that the media are afraid they would not be able to 'analyze' the polls and other statistics if the percentage of people voting were to double?

Bo-Christer Björk: Open access to scientific publications - an analysis of the barriers to change?:

Abstract:
"One of the effects of the Internet is that the dissemination of scientific publications in a few years has migrated to electronic formats. The basic business practices between libraries and publishers for selling and buying the content, however, have not changed much. In protest against the high subscription prices of mainstream publishers, scientists have started Open Access (OA) journals and e-print repositories, which distribute scientific information freely. Despite widespread agreement among academics that OA would be the optimal distribution mode for publicly financed research results, such channels still constitute only a marginal phenomenon in the global scholarly communication system. This paper discusses, in view of the experiences of the last ten years, the many barriers hindering a rapid proliferation of Open Access. The discussion is structured according to the main OA channels; peer-reviewed journals for primary publishing, subject-specific and institutional repositories for secondary parallel publishing. It also discusses the types of barriers, which can be classified as consisting of the legal framework, the information technology infrastructure, business models, indexing services and standards, the academic reward system, marketing, and critical mass."

Open Source as competitive Weapon

| Permalink

Note how in the passage below (from Open Source as Weapon) the argument is made that competition will soon move away from the actual code (everyone will have access to the same software code) and into its usage and integration in particular contexts.

Excerpt:
"Experts tick off compelling reasons why a vendor of closed-source software might release code: to make the product more ubiquitous, speed development, get fresh ideas from outside the company, to complement a core revenue stream, foster a new technology -- and to stymie a competitor.

In fact, giving away some free company IP can go a long way toward making someone else's IP worth beans.

Martin Fink, author of "The Business and Economics of Linux and Open Source," notes that, while all commercial software decreases in value over time, open source drastically speeds the process. The huge community of developers working together can produce a competitive open source product fast, and they'll add features for which a closed-source vendor would want to charge extra.

Finally, customers can acquire the software at no cost, even though they may pay for customization, integration and support."

BBC to Open Content Floodgates

| Permalink

BBC to Open Content Floodgates:

Excerpt:
"The British Broadcasting Corporation's Creative Archive, one of the most ambitious free digital content projects to date, is set to launch this fall with thousands of three-minute clips of nature programming. The effort could goad other organizations to share their professionally produced content with Web users.

The project, announced last year, will make thousands of audio and video clips available to the public for noncommercial viewing, sharing and editing. It will debut with natural-history programming, including clips that focus on plants, animals and birds."

SEMANTIC WEB DRAWS ON THE POWER OF FRIENDS

| Permalink

(via ShelfLife, No. 160 (June 10 2004))
SEMANTIC WEB DRAWS ON THE POWER OF FRIENDS
"Do a little digging into the status of the Semantic Web, and you'd likely come away befuddled and unenlightened, convinced this was a job for techno-geeks, not actual human beings. But in point of fact, the burgeoning number of Weblogs already form a vast source of richly interconnected information that requires little or no knowledge of the Semantic Web in order to be useful. The new Friend Of A Friend (FOAF) project is taking the idea of Weblog communities one step further by explicitly defining them in a way that is more easily machine processible. One of the aims of the FOAF project is to improve the chances of happy accidents by describing the connections between people (and the things that they care about such as documents and places). The idea is to use FOAF to describe the sorts of things you would put on your homepage -- your friends, your interests, your picture -- in a structured fashion that machines find easy to process. What you get from this is a network of people instead of a network of Web pages. When people need to know something that is outside their area of expertise, these personal contacts serve as a way of linking them to the best information available. (FreePint 27 May 2004) http://www.freepint.com/issues/270504.htm#feature"

socio-technological definition of "digital library"

| Permalink

When discussing the subject of digital libraries (DLs), often the very definition and meaning of the phrase "digital library" is questioned. This is expected due to the historical, practical and theoretical development of digital libraries as technologies (computer and information systems) as well as social structures.

Below I provide two definitions, by Borgman (1999) and Lesk (1997), that have been widely used by practitioners and researchers. Needless to say, both definitions embody the technical and the social nature of digital libraries.

Borgman (1999) attempts to explicate the meaning and interpretation of the phrase "digital library" by analyzing the various definitions of "digital libraries" coined by the research and practice communities claiming to be somehow related to digital libraries, and by assessing and identifying the possible influence of those definitions on the relevant communities. Borgman identifies two distinct senses in which "digital library" has been used (p. 227). The technological definition, stating that "digital libraries are a set of electronic resources and associated technical capabilities for creating, searching and using information" (p. 234), is contrasted by the social view, stating that "digital libraries are constructed, collected and organized, by (and for) a community of users, and their functional capabilities support the information needs and uses of that community" (p. 234).

Another workable and widely used definition is provided by Lesk (1997): "Digital libraries are organized collections of digital information. They combine the structuring and gathering of information, which libraries and archives have always done, with the digital representation that computers have made possible" (p. XIX).

References:
Borgman, C. L. (1999). What are digital libraries? Competing visions. Information Processing & Management, 35(3), 227-243.

Lesk, M. (1997). Practical digital libraries: Books, bytes and bucks. San Francisco, CA: Morgan Kaufmann.

my comments on Thijs' Predictions

| Permalink

In Prediction, Thijs van der Vossen states some ideas about how things will be in the future in terms of information and knowledge sharing.

While I agree that what Thijs writes is the desired outcome if we are moving towards a more open world, that outcome is not guaranteed. Yes, information needs to be free so it can be accessed from everywhere, by everyone, through many different devices and access methods. However, the assumption is that corporate entities will be willing to let go of the grip they have on every piece of information that looks profitable.

So, one of the fundamental assumptions is that all sources of information and knowledge artifacts really want to share their content. In the open source Internet as a possible antidote to corporate media hegemony I argued that the property of openness (open content and open communication), as a fundamental property of the Internet as we know it today, is perhaps the reason why Thijs' predictions look very probable. Hopefully no authoritative entity puts restrictions around what can be said and done online.

Papers on the Information (Commons) Society

| Permalink

Openness, Publication, and Scholarship

| Permalink

Openness, Publication, and Scholarship is an interesting philosophical perspective attempting to frame publications and scholarship within the various concepts of openness such as "open access", "open data", "open source", "open entry", and "open discourse".

To this I would like to add a modification: replace "open data" with "open content", since content has a broader scope than data, and perhaps add "open communication" as the functional link between "open access" and "open discourse".

A Really Open Election - via open source

| Permalink

In A Really Open Election, Clive Thompson makes the point that only open source e-voting systems can be trusted for elections.

I made the same argument last year in the following blog entries: e-voting systems must be open source, and e-voting systems ought to be open source.

Well, at least many research institutions are realizing that the commercial publishers might not be the solution for the future of scholarly communication.

An excerpt from Fat Cat Publishers Breaking the System:

"Out-of-control costs for scholarly publications have fueled new digital repository initiatives

The scholarly publishing system is broken. At research universities everywhere, scholarly work—in the form of articles, books, editing, reviewing of manuscripts—is handed over to commercial publishers, only to be bought back by the libraries at huge cost. Libraries scramble to judiciously stretch shrinking budgets for growing runs of books and journals—books and journals that are critical to the research and teaching activities of the university’s faculty who, as authors and editors, contribute so generously to the publishers who sell them. The arrangement is bankrupting research library budgets and swelling the profit margins of commercial publishers.

Sadly, commercial publishing threatens the very system it exists to support. When expensive commercially published materials cannot be bought, when university presses cannot afford to publish monographs for junior faculty, everyone suffers. Students and scientists cannot gain access to badly needed materials; scholars cannot get tenure for lack of that first published monograph. The modern university, modeled on the ideal of the Greek temple where thinkers and learners pursued knowledge so that society could reap its benefits, is losing ground to crass commercialism. At risk is the very culture of the academy."

At last, there is a realization that information and communication technologies do not necessarily help 'disadvantaged and vulnerable groups' by way of some magic. Given that the tools of economic development in most cases reflect the social structures within which they function, thus 'favoring' the people in 'power', a concentrated effort is needed to ensure that the people least likely to 'magically' benefit from such advances do indeed reap the benefits.

The 'Technologies of a Digital World' conference/Expo seems to be an effort in the right direction. At least they are emphasizing that something other than 'magic' needs to be done.

"Technology is an enabler as well as a catalyst to ensure companies operate profitably and governments operate more efficiently in the global environment. But technology should also be the medium for people from all walks of life to harness the new opportunities offered by ICT, and act as fundamental elements for creating new skills and shaping mindsets to churn the engine of the knowledge-economy."
...
The Expo and Seminar, first of its kind to be held in Brunei, carries the theme, 'Technologies of a Digital World' and is centred on the development of technologies suited to the disadvantaged and vulnerable groups and the development of affordable technologies to facilitate people's access to ICT.

From Adam Smith to Open Source

| Permalink

From From Adam Smith to Open Source:

"The Internet is a manifestation of the validity of Adam Smith's theories, as is the growth of Linux, itself, Young argued. The way in which the Internet works and was created is as a distributed system to which multiple self-interests contributed. This resulted in something that was better than any one individual company or government could have ever created."

"Operating-system adoption is driven by the availability of applications, according to Young, which is something that, in early days of its existence, Linux did not have. That said, he added, it was the Internet, itself, with applications like the Apache Web Server, DNS and Sendmail -- all free and open source endeavors -- that serve as further proof of Adam Smith's theory is applied to the growth of the free and open source software movement. "

"The Internet was the killer app that drove the adoption of Linux," said Young."

No comment... the argument is self-explanatory.

The qualitative study (Scacchi, 2002) I have selected to critique is published in a scholarly peer-reviewed journal oriented towards electrical engineering. The author is aware of his quantitatively oriented audience and thus from the very beginning sets the expectation that the study is "… not about hypothesis testing or testing the viability of a perspective software engineering methodology or notational form" (p. 24). Much as Lincoln and Guba (1985) define naturalistic inquiry in terms of what it is not, Scacchi deems it necessary to define qualitative research in terms of not being quantitative research. The tensions emerging from the struggle to present a non-quantitative study to an audience expecting quantitative work are pervasive throughout the article. Because of these tensions, and in an attempt not to alienate his audience, the author has either decided to take many shortcuts, which shows in the lack of proper definition and utilization of qualitative methods, or is himself still in the process of becoming familiar with various qualitative methods. In the rest of this paper I will concentrate on these struggles and attempts, and on what could have been done better, not forgetting that what the author has done may be a purposefully chosen middle ground, because the audience was not prepared for a full switch from quantitative to qualitative methodology and methods.

The core of this article is to understand the nature of, and the processes around, requirements in the development of open source software (Scacchi, p. 24). Since the open source development framework is a new approach to software development, the author rightfully suggests qualitative methods for studying it: "… investigation of the socio-technical processes, work practices and community forms found in the open source software development. The purpose of this investigation, over several years, is to develop narrative, semi-structured (i.e. hypertextual) and formal computational models of these processes, practices and community forms" (p. 24). The preceding quote also suggests a mixed-methods approach, where the findings of the qualitative part of the study (i.e. the 'investigation') would inform the quantitative part in building computational models. However, this article is restricted to the investigative part of the effort.

the battle against spam goes on: spambayes vs. bogofilter

| Permalink

Note: this entry is part of my class project on experiment and systems evaluation. Only the introduction, limitations, and conclusion sections are included here. For the full paper please take a look at the pdf version.

Introduction

The purpose of this study is to evaluate the effectiveness of two spam filtering software packages in order to decide which one to recommend for further use. Both Bogofilter (BF) and SpamBayes (SB) are based on the Bayesian probabilistic model (Baeza-Yates & Ribeiro-Neto, 1999, p. 48), as adapted and proposed by Graham (2002; 2003) for spam e-mail identification and filtering. The key to Graham's Bayesian filtering technique is the ability to train the software with known spam and non-spam (i.e. good) e-mail messages on an individual-user basis. The idea is that with increased and continued training both packages will become more effective at identifying spam messages, while at the same time decreasing the number of false positives and false negatives.
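
To make the training idea concrete, here is a minimal Python sketch of Graham-style scoring. This is not the actual SB or BF code; the constants (the doubled good counts, the 0.4 default for unseen tokens, the 0.01/0.99 clamps, the 15 "most interesting" tokens) follow Graham's published description, and the tokenizer is a deliberate simplification:

import math
import re
from collections import Counter

def tokenize(text):
    # Crude tokenizer for illustration; the real packages preserve
    # headers, HTML and far more token structure than this.
    return re.findall(r"[a-z$][a-z0-9$'-]*", text.lower())

def train(messages):
    # Count, for each token, how many messages it appears in.
    counts = Counter()
    for msg in messages:
        counts.update(set(tokenize(msg)))
    return counts

def token_prob(token, good, bad, ngood, nbad):
    # Graham-style probability that a message containing `token` is spam.
    g = 2 * good.get(token, 0)  # good counts doubled, biasing against false positives
    b = bad.get(token, 0)
    if g + b == 0:
        return 0.4              # Graham's default for never-seen tokens
    p = (b / nbad) / (g / ngood + b / nbad)
    return min(0.99, max(0.01, p))

def spam_score(message, good, bad, ngood, nbad):
    # Combine the 15 tokens whose probabilities deviate most from 0.5.
    probs = sorted((token_prob(t, good, bad, ngood, nbad)
                    for t in set(tokenize(message))),
                   key=lambda p: abs(p - 0.5), reverse=True)[:15]
    prod = math.prod(probs)
    return prod / (prod + math.prod(1 - p for p in probs))

Usage, in outline: build good = train(good_msgs) and bad = train(spam_msgs) from one's own mail, then tag a message as spam when spam_score(msg, good, bad, len(good_msgs), len(spam_msgs)) exceeds a chosen cutoff, such as the 0.9 spam cutoff used in the experiment below.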

Both BF and SB are open-source software packages available for free download and use. BF is available for Linux/Unix only and it can be integrated with other mail delivery and filtering tools to automatically tag and filter spam e-mail messages to a separate folder. SB is available for multiple operating system platforms and can be configured to work with Unix/Linux command line mail delivery and filtering tools, as well as POP3, IMAP, the Outlook e-mail client, etc. Detailed instructions are provided at http://bogofilter.sourceforge.net/ and http://spambayes.sourceforge.net/ respectively.

I have used both of these systems and would like to be able to answer the question of which of the two is more effective at identifying spam.

The Bayesian technique suggests that with continued training the software packages should become more effective in spam identification. Thus, the first research question:

RQ1: Does the spam filtering effectiveness of BF and SB improve as the amount of e-mail messages used for training increases?

From my personal experience it appears that SB is more effective than BF, although after a good amount of training both SB and BF seem to be very effective, in that I rarely get false positives with either implementation. Thus, the second research question:

RQ2: Is the spam filtering effectiveness of SB better than BF?

Full paper in pdf version.

Assumptions and limitations

For a more complete analysis of effectiveness, the experiment needs to be repeated with multiple corpora provided by different individuals, due to the uniqueness of and varying patterns in e-mail use between individuals. The pattern of e-mail messages embedded in the set3000 corpus is defined by my personal e-mail communication, as well as by e-mails I receive at aliases and forwarding addresses due to my involvement as moderator and administrator of various electronic news and discussion lists.

Additionally, the cap of 3000 messages in the corpus could be varied to check for potential variability in effectiveness, although I believe that 3000 messages tested for spam probability is a sufficiently large amount, comparable to real-life operational spam filtering systems.

The equal proportion of spam and good messages in the training sets might not resemble real-life situations. The rate of spam messages received is much higher than that of good messages, at least in my case. Accounting for a variable proportion would improve this experiment. It might even yield an optimal proportion for a given spam cutoff level.

In this experiment, the issue of performance was not considered. Effectiveness aside, if one of the systems is to be used for real-time spam filtering on a Unix/Linux server supporting thousands of users, SB might be at a disadvantage due to its implementation in the Python language. BF, on the other hand, is implemented in C and runs significantly faster.

Conclusion

Based on the above analyses, the following can be concluded:
• the spam filtering effectiveness of both SB and BF improves with the increased number of training messages
• at each training level SB is more effective than BF

Recommendation: In conjunction with the results in Figure 4 and Table 4, showing the number of FP, FN, TP, and TN at the 0.9 spam cutoff at different training levels, SB is more effective due to its significantly lower number of FN compared to BF. In order to minimize the training effort caused by false negatives and false positives, it is recommended that once SB is installed for use, it be trained with at least 200 or 400 (half good & half spam) messages. At that stage, the number of FP is zero. However, spamming techniques change as fast as (and even faster than) spam filtering packages. This is an ongoing battle, and no software package can identify all spam messages.
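
For reference, the FP/FN/TP/TN counts referred to above come from a straightforward tally at the chosen cutoff. A minimal sketch (the function name and data layout are mine, not from either package):

def confusion_at_cutoff(scores, labels, cutoff=0.9):
    """Tally TP, FP, TN, FN for spam scores against known labels.

    scores: spam probabilities produced by the filter
    labels: True for messages known to be spam, False for good mail
    """
    tp = fp = tn = fn = 0
    for score, is_spam in zip(scores, labels):
        tagged_spam = score >= cutoff
        if tagged_spam and is_spam:
            tp += 1
        elif tagged_spam and not is_spam:
            fp += 1  # good mail tagged as spam (false positive)
        elif not tagged_spam and is_spam:
            fn += 1  # spam slipping into the inbox (false negative)
        else:
            tn += 1
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn}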

For example, my installation of SB rarely misclassifies messages. But once in a while it does, especially when the pattern of spam messages changes all of a sudden or a virus with unique behavior appears on the scene. I also expect to see false positives when I subscribe to a new discussion list, more so if it is in a different language or on a topic different from the rest of the discussion lists to which I'm already subscribed.

The battle against spam will continue as long as spammers have incentives to send spam messages. Spam filtering systems are indeed helpful in reducing both false positives and false negatives. Both SB and BF seem to be designed to eliminate false positives with as little training as possible. In any case, due diligence and patience are needed on the user's part. For better effectiveness, the user should continuously train the system of choice. To aid in this process, both SB and BF allow for a good cutoff level in addition to the spam cutoff level. Messages with spam probabilities between the good cutoff and the spam cutoff can be filtered into a separate folder (usually called 'unsure') and trained appropriately, as sketched below.
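
A sketch of that two-cutoff routing; the 0.9 spam cutoff is the one from the experiment above, while the 0.2 good cutoff is a hypothetical value chosen for illustration:

def route_message(score, good_cutoff=0.2, spam_cutoff=0.9):
    """Route a message by its spam probability.

    Messages scoring between the two cutoffs go to an 'unsure' folder,
    which the user reviews and feeds back as further training data.
    """
    if score >= spam_cutoff:
        return "spam"
    if score <= good_cutoff:
        return "inbox"
    return "unsure"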

actor-network theory or ANT ?

| Permalink | 2 Comments

One of the major issues with the actor-network methodology is that there are no ready-to-use steps/procedures for operationalizing the various actor-network related concepts. Many of the concepts are dispersed among the writings of Latour, Callon, Law, Bijker, Akrich, Hassard, and a few other authors. One of the most informative sources is the book "Actor Network Theory and After" by Law & Hassard.

As actor-network theory and methodology got translated into ANT (interestingly enough, we see here a theory and methodology becoming the subject of its own theorization through the concepts of translation and inscription), many researchers have made their own particular attempts at operationalizing the concepts relevant to their lines of inquiry.

The point I'm trying to make is that we have bits and pieces of attempts to operationalize various actor-network related concepts; however, we lack an overall framework. The answer as to why this is so is pretty much provided in the above-mentioned book, in the chapter "On recalling ANT" (by Latour), which states that actor-network was only meant to be a way of doing ethnomethodology and not a theory (p. 19). So, when people talk of ANT it usually means the theorizing of actor-network in various forms and flavors, while actor-network is more a way of doing ethnomethodology.

Latour makes the argument that the acronym ANT is not simply an acronym. Rather, it is the result of the process of translation by way of which actor-network theory and methodology became ANT (with various flavors). So, the process of translation produced multiple ANTs, each stressing different concepts related to the actor-network methodology/theory.

As a result, it would seem that ANT has different meanings pertinent to the context and the line of inquiry in which it is used and applied. The process of translation is given as the reason.

Latour explains this very clearly in the chapter "On recalling ANT".

Similarly to Kylie Veale (in the comments of Dissertation blogs), I also find it interesting and rewarding to write in my blog. Once in a while I go back and read what I have written in the past. It is amazing to find thoughts and ideas that come in handy in present research projects and interests, especially since I'm about to finish my Ph.D. coursework and embark on my dissertation.

The pseudo-serendipitous discovery of things one has written in the past is not much of a discovery, since you wrote it yourself. It is amazing, however, to try to understand the framework and the mental state present at the time one wrote an earlier blog entry (i.e. the source of the pseudo-serendipitous discovery).

SEs meaning mediation; suppressing controversy

| Permalink

The idea that search engines (SEs) suppress controversy is indeed real. As argued in Do Web search engines suppress controversy?, the suppression is not intentional; however, Google's bottom line means good results delivered quickly, not necessarily covering all sides of the story/issue about which an information seeker is trying to find information.

I tried to explain this sort of mediating power/role of SEs in an earlier blog entry: search engines' meaning mediation power.

From File-sharing to bypass censorship:

"By the year 2010, file-sharers could be swapping news rather than music, eliminating censorship of any kind."
...
"Currently, only news that's reckoned to be of interest to Americans and Western Europeans will be syndicated because that's where the money is," he told the BBC World Service programme, Go Digital.
"But if something happens in Peru that's of interest to viewers in China and Japan, it won't get anything like the priority for syndication.

Well, I hope it does not come to this because of political decisions. Then again, media corporations care only about their bottom line. So who cares whether censorship comes from political decisions or from the media's profit-making strategies? In any case, the open content and open communication enabled by the Internet seem to be our guard (to a certain degree) against censorship.

secure enough for consumerism, not good enough for voting?!

| Permalink

In the past year or so we have seen various attempts at online voting, only to see them scrapped because they are not secure enough. Pentagon Drops Plan To Test Internet Voting is the latest report on such an initiative, stating that "The Pentagon has decided to drop a $22 million pilot plan to test Internet voting for 100,000 American military personnel and civilians living overseas after lingering security concerns, officials said yesterday."

How is it that we can't trust the security of voting done over the Internet, when the same Internet is used for millions of dollars in daily transactions between consumers and companies and business-to-business? The same Internet is secure enough for commerce and can be trusted with billions of dollars. Yet it is not secure enough for voting?

Something is wrong … perhaps the following explains it (from the same article): "The American pullback is in direct contrast to Europe, where governments are pursuing online voting in an attempt to increase participation. The United Kingdom, France, Sweden, Switzerland, Spain, Italy, the Netherlands and Belgium have been testing Internet ballots."

Ref: Media Control: Open communication technologies as actors enabling a shift in the status quo

google's personalized 'jewel'

| Permalink

Google does it again. As with many of the practical implementations in the search world, Google is first again; first at implementing it in the real world, not necessarily in research. As far as research is concerned, personalized search has been discussed plenty.

This new personalized web search by Google utilizes facet-aided searches.

The entire search is dynamic. Once you set up the profile, which is very simple and menu/directory driven, the left side shows the built query. You can still type a search term. The FAQ shows a bit of how things are supposed to work.

In any case, the search is operational (in beta), and once the relevant docs are returned there is a small sliding bar that can be moved left or right to dynamically relax or restrict the personalization.

Interesting stuff! Just when you think you have learned how Google works! :)

Now all the other search engines will try to do the same. Why don't they start something before Google does, for a change?! What are they afraid of?

(thanks to unstruct.org for the link)

12 Reasons for Growth of Open Source

| Permalink

From Netscape Co-Founder's 12 Reasons for Growth of Open Source:

  • "The Internet is powered by open source."
  • "The Internet is the carrier for open source."
  • "The Internet is also the platform through which open source is developed."
  • "It's simply going to be more secure than proprietary software."
  • "Open source benefits from anti-American sentiments."
  • "Incentives around open source include the respect of one's peers."
  • "Open source means standing on the shoulders of giants."
  • "Servers have always been expensive and proprietary, but Linux runs on Intel."
  • "Embedded devices are making greater use of open source."
  • "There are an increasing number of companies developing software that aren't software companies."
  • "Companies are increasingly supporting Linux."
  • "It's free."

bad scientific/technology journalism or ...

| Permalink

In the article Supercomputers Think Fast with New Software there is no mention of the word 'think', even though it is in the title/subject of the article.

Is this just intentionally bad journalism, meant to get people to read the article because they find computers and thinking an interesting conjunction? Or does the journalist really not know that computers (even supercomputers) cannot actually think, but only process information/data?

Talk about the social construction of concepts. What goes on in the minds of people who believe computers can think? Do they believe that computers are always right and/or should always be trusted as such?

US societies back expanded free access to research

| Permalink

From US societies back expanded free access to research, courtesy of scidev.net:

Excerpt:
"A substantial number of the United States' leading medical and scientific societies have declared their support for free access to research under certain circumstances — including access by scientists working in low-income countries.

In a statement released this week in Washington DC, 48 not-for-profit publishers, representing more than 600,000 scientists and clinicians and more than 380 journals, pledge their support for a number of forms of free access."

The push is on to shelve part of the Patriot Act

| Permalink

From The push is on to shelve part of the Patriot Act:

Excerpt:
"Discontent about Section 215 has been smoldering; 253 cities and towns across the country have passed nonbinding resolutions expressing opposition to it. It flamed up last month when the American Booksellers Association, the American Library Association, and the writers group PEN American Center announced a drive to collect a million signatures in support of several bills pending in Congress to amend the law. The campaign is supported by a who's-who of publishers, booksellers, and library organizations, including the Barnes & Noble and Borders bookstore chains, publishers Random House and Simon & Schuster, the American Association of Law Libraries, and the Authors Guild."

Theories informing my research

| Permalink | 1 TrackBack

Understanding the implicit and explicit theories of a research article most often means carefully reading through the article for the explicit theories stated therein, and also browsing through the bibliography to see what other theories, frameworks and paradigms have informed the article. This also provides insight into which implicit theories the author subscribes to. To understand authors fully in this respect, one would need to read many of their works.

At the beginning of the Ph.D. program I was unaware of my theoretical framework, or, better said, I would have been unable to answer such a question had I been asked. At that time I would have thought that I didn't really subscribe to any particular theory, framework or paradigm. Semester after semester I struggled to identify my interests. I wanted to place and find myself within a particular school of thought. This was further complicated by the fact that information science, as an interdisciplinary field of study, is not yet well defined by its theory or paradigm as understood in the traditional sense.

However, as I wrote more and more papers for my coursework, I started realizing that my writing usually concentrated on the subject of information artifacts (i.e. information, information structures, and information systems) and their role in the social structures that utilize them. At this point I decided to re-read all of my papers, four semesters' worth. To my surprise and delight, I realized that all this time I was not just writing. I was actually trying to explicate and elaborate (with the language available to me at the time) on how various information technologies affect the social structures around them and are concurrently affected by them. I recognized this theme throughout my papers.

strange world: Court stops DVD-copying software

| Permalink

From Court stops DVD-copying software:

"A US court has told software company 321 Studios to stop selling a program that lets people copy DVDs."

Hmmm... Where is the logic in this? Why not stop the sale of VCRs, since they too can be used to make illegal copies of movies on videotape?

Technology isn't the real problem - BUT, it might be

| Permalink

From Technology isn't the real problem:

"A person trapped in the cold can use a cell phone to call a tow truck. Medical advances mean people once doomed are now up and moving. Information - as well as trash and useless drivel - is immediately available on the Internet."
...
"Technology isn't the issue. The problems and the answers are within our hearts, not in our factories."

But let us not forget that technology can be a problem. For example, had the potential of nuclear power not been known during WWII, there would have been no nuclear device capable of indiscriminate mass destruction.

So, rather than claiming that "Technology isn't the real problem", or that humans and human behavior are not the real problem, we should embrace the reality that BOTH humans and technologies can be problems (together or separately), depending on the context and its immediate as well as distant environments in both time and space.
[see Social constructionism vs. technological determinism,
technology's performative function - limitations and restrictions,
Technology makes us unwitting slaves - BUT it does not have to be that way]

What we need is the wisdom to balance the technological and social forces with the intention of improving the human condition around the world. What should concern us is when technology is used to achieve materialistic goals with no concern for human life and human dignity.

"OPEN SOURCE" TO BOOST USE OF NEW TECHNOLOGIES

| Permalink

"OPEN SOURCE" TO BOOST USE OF NEW TECHNOLOGIES

"(AGI) - Rome, Feb. 25 - In order to promote innovation, we need to set up new "open source" models and solutions and projects that are aimed at developing specific solutions for SMEs in Italy and the Public Administration. These projects, as surveys by the Observatory Digital Cities (OCID), Rur and Censis reveal, indicate that the public administration can act as a driving force, by experimenting with and adopting innovative solutions."

The machine that invents?!

| Permalink

From The machine that invents:

"His first patent was for a Device for the Autonomous Generation of Useful Information," the official name of the Creativity Machine, Miller said. "His second patent was for the Self-Training Neural Network Object. Patent Number Two was invented by Patent Number One. Think about that. Patent Number Two was invented by Patent Number One!"

Is it really possible for machines to 'invent'? Can machines really discover anything more than what has been embedded/inscribed into their design, implicitly or explicitly, by their human designers? Perhaps it would be wiser to say that machines can discover things quicker due to their enormous computing power. But discoveries and inventions are two different activities.

How infocomm technology can help revive ASEAN economies

| Permalink

From How infocomm technology can help revive ASEAN economies:

"Singapore's Prime Minister Goh Chok Tong has said that ASEAN should harness the advantages of information technology to help its member countries' economies to grow."

The reliance on information and communication technologies to help economic growth is well justified. However, the potential of infocomm technologies should not be taken out of context. Other factors, such as social, political, policy and environmental ones, work hand-in-hand with IT to produce positive results. Information and communication technologies are not created in isolation. Their successful use and implementation depend to a great extent on the context within which they are utilized.

is the UN's information society summit doomed to fail?

| Permalink

Why UN's information society summit is doomed to fail provides an interesting analysis of why the UN's information society summit might fail.

Here are the two reasons it provides:

  • The first is the United States' position that profit -- or even the potential for profit -- is more important than the goals of the WSIS.
  • The second reason is procedural. The United Nations prefers to operate by consensus. So as long as any one member of the WSIS objects to a portion of the plan, the plan cannot move forward.

I think that both of these arguments are valid. However, they might not be sustainable over a longer period of time. If the Internet is to be one of the driving forces for the economic development of third-world economies, the corporate grip on the Internet may not survive for too long. Simply said, those affected by the Internet would like to have some say in its operation. As the people affected are no longer predominantly Western, there will be more noises such as those heard at the WSIS.

Only time will show whether the UN is the right organization for the worldwide management of the Internet. The WSIS attempt is perhaps just a start; other ventures will be attempted in the near future. A few things must be ensured, though: there should be no censorship on the Internet, and its economic potential should be equally available to all around the world. So, as it appears, the main problem might not necessarily be with the Internet itself. Better economies in third-world countries will give them more leverage when the next 'WSIS' comes around.

Effective use: A community informatics strategy beyond the Digital Divide

| Permalink

From Effective use: A community informatics strategy beyond the Digital Divide:

Abstract:
A huge industry has been created responding to the perceived social malady, the "Digital Divide". This paper examines the concepts and strategies underlying the notion of the Digital Divide and concludes that it is little more than a marketing campaign for Internet service providers. The paper goes on to present an alternative approach — that of "effective use" — drawn from community informatics theory which recognizes that the Internet is not simply a source of information, but also a fundamental tool in the new digital economy.

The Digital Library Federation (DLF)

| Permalink

Digital Library Federation:

The Digital Library Federation (DLF) is a consortium of libraries and related agencies that are pioneering in the use of electronic-information technologies to extend their collections and services. Through its members, the DLF provides leadership for libraries broadly by -

  • identifying standards and "best practices" for digital collections and network access
  • coordinating leading-edge research-and-development in libraries' use of electronic-information technology
  • helping start projects and services that libraries need but cannot develop individually.

The DLF operates under the administrative umbrella of the Council on Library and Information Resources (CLIR).

Tenets of Actor Network Theory

| Permalink

From ACTOR NETWORK THEORY (2001-05-07):

"For what it's worth, here is my own brief outline summary of some of the main ideas of ANT:

1. There is an emphasis on networks and links, as opposed to heroic individual "geniuses"

2. The nodes in these networks, called actants, include not just humans, but also non-humans, such as physical objects; they all do some kind of work to maintain the integrity of the network.

3. Individual actants, and groups of actants, in general have different value systems, so that translation among these systems is necessary for a network to succeed; this work is done along the links in the network. Socio-technical compromise is the work done to bring the various technical and social nodes into alignment.

4. The structure of a project can only be seen clearly when these translations (and hence the project) have been successful; hence the values, and even the parts and structure, of a failed project are not in general well defined.

5. The human actors in a project are in a sense sociologists, because they must do acts of interpretation, which in effect are theories of the project; this work should be taken very seriously by sociologists, who should not assume that their own views are necessarily superior to those of the actual participants."

Yet... more things to learn in the new semester

| Permalink

The new semester (Spring 2004) has already started and seems exciting. I'm again a TA (Teaching Assistant) and will be assisting Prof. Wacholder in her two classes, as I did last semester.

As far as my classes are concerned, this semester I'll be taking three classes:
1) Qualitative Research Methods [16:194:603],
2) Current Research Issues [16:194:605], and
3) Experiment and Evaluation in Information Systems [16:194:619].

I also have to complete an independent study, which I have already started. This means I'll be done with my Ph.D. coursework by the end of this semester; then comes planning for the qualifying exam in the Fall of 2004. :) In the meantime, I'm also working on the dissertation proposal.

Lots of new 'knowledge' (or is it 'information'? :)) to learn this semester.

By Mentor Cana, PhD
more info at LinkedIn
email: mcana {[at]} kmentor {[dot]} com
