The Yahoo Search for Creative Commons makes it easier to locate Web content with a Creative Commons license. Creative Commons is a nonprofit organization that offers flexible copyrights for creative works. According to Creative Commons, the group builds upon the traditional "all rights reserved" form of copyright to create a voluntary "some rights reserved" copyright. Tools from Creative Commons are free, and the organization offers its own search engine.
To say that listservs (i.e., news lists and discussion lists) are dead is a bit premature. Discussion lists and news lists serve different needs than RSS and blogs, though they overlap at certain levels. At best, they complement each other.
For example, I have plenty of news-list and discussion-list subscriptions, as well as plenty of RSS feeds. Over the past year I have supplemented some of my news lists with RSS feeds wherever possible.
However, as far as discussion lists are concerned, RSS is no replacement. Some people prefer to get their discussion lists in their e-mail, filtering each list into a separate e-mail folder. Setting up e-mail filters is no harder than setting up RSS feeds. Webboards are not a total replacement for discussion lists either. Rather, a generic mix of discussion lists and webboards has sprung up.
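As a rough illustration of that point, here is a minimal Python sketch (using a hypothetical feed document and a hypothetical List-Id header, not any real service) showing that pulling item titles out of an RSS document takes about as much effort as a simple e-mail filter rule:

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 document, standing in for a real news-list feed.
RSS_DOC = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Library News</title>
    <item><title>Open access update</title></item>
    <item><title>Digital divide report</title></item>
  </channel>
</rss>"""

def feed_titles(rss_text):
    """Return the item titles from an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]

def list_folder(headers):
    """A bare-bones e-mail 'filter': route a message to a folder
    named after the mailing list in its List-Id header."""
    list_id = headers.get("List-Id", "")
    return list_id.strip("<>").split("@")[0] if list_id else "Inbox"

print(feed_titles(RSS_DOC))
print(list_folder({"List-Id": "<web4lib@listserv.example.org>"}))
```

Either way, the "setup" amounts to one short rule per source, which is the sense in which the two approaches are comparable in difficulty.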
Also, let's not forget that throughout the world there are plenty of places where broadband is not readily available and will not be in the near future. There, e-mail discussion lists are much easier to deal with, since the e-mails come to you, versus having to browse badly designed, graphics-heavy webboards over slow dial-up connections.
So, rather than saying that listservs (that is, lists) are dead, I think they will coexist with other tools such as RSS and blogs and complement them, since their tasks are different.
Ten major international libraries have agreed to combine their digitised book collections into a free text-based archive hosted online by the not-for-profit Internet Archive. All content digitised and held in the text archive will be freely available to online users.
Two major US libraries have agreed to join the scheme: Carnegie Mellon University library and The Library of Congress have committed their Million Book Project and American Memory Projects, respectively, to the text archive. The projects both provide access to digitised collections.
The Canadian universities of Toronto, Ottawa and McMaster have agreed to add their collections, as have China's Zhejiang University, the Indian Institute of Science, the European Archives and Bibliotheca Alexandrina in Egypt.
SCIENTISTS, CONSIDER WHERE YOU PUBLISH poses challenging issues every author of research papers should start thinking about. It is no longer safe to assume that the most prestigious journals are the best venue for your research. So what if you have published in a prestigious peer-reviewed journal, when not many people can read what you have written due to its subscription cost? How long can this continue? Could this provide some incentive for scholars to publish in open access journals? What then? It is quite possible that articles published in open access journals might shift the focus of a discipline or a field of study because of their wider availability and accessibility.
Excerpt from the above-mentioned article:
For scientists, publishing a paper in a respected peer-reviewed journal marks the culmination of successful research. But some of the most prestigious and sought-after journals are so costly to access that a growing number of academic libraries can't afford to subscribe. Before submitting your next manuscript, consider a journal's access policy alongside its prestige - and weigh the implications of publishing in such costly periodicals. Two distinct problems continue to plague scientific publishing. First, institutional journal subscription costs are skyrocketing so fast that they outstrip the ability of many libraries to pay, threatening to sever scientists from the literature. Second, the taxpaying public funds a terrific amount of research in this country, and with few exceptions, can't access any of it. These problems share a common root - paid access to the scientific literature.
Very interesting thoughts and ideas. Certainly, in the past technology has been a great source of change; maybe the technologies of today that embody the concept of openness could initiate another socio-economical-political change across the globe.
This paper reflects on the potential of e-democracy to strengthen society's democratization, exploring historically and technically the possibilities of cooperative organizations. From Singer's historical view of the rise of capitalism, it conjectures that the Internet and e-democracy could be the technological innovations capable of triggering the creation of a virtual network of cooperative organizations, and thereby the development of a new economic system based more on humanitarian values than the present one.
This paper (Do Open Access Articles Have a Greater Research Impact?) reports its findings that "freely available articles do have a greater research impact. Shedding light on this category of open access reveals that scholars in diverse disciplines are both adopting open access practices and being rewarded for it."
The findings of this paper have just confirmed what seems to be an obvious argument: the more open the accessibility to articles is, the more they will be used, and thus they ought to have greater impact in research and practice.
An additional question that needs to be addressed in this context is the overall impact of articles published in open access journals. It is quite possible that such articles might shift the focus of a discipline or a field of study because of their wider availability and accessibility.
Who benefits from the digital divide? is a very informative article on the digital divide discourse. One would think that such discourse arises with the aim of helping the people on the 'have-not' side of the digital divide by closing the gap. In this article for First Monday, Brendan Luyt shows that the people on the negative side of the digital divide are surely NOT the ones benefiting from the discourse.
"In this article I have described four groups that have an interest in the promotion of the digital divide issue. Information capital achieves a new market for its products as well as an educated workforce capable of producing those products in the first place. The state in the South benefits through the legitimation conferred through programs designed to combat the divide. Not only do these offer new accumulation opportunities for its elite, they also hold the possibility of defusing discontent over poor economic prospects for the middle class, a volatile section of the population. The development industry, suffering from a neo–liberal attack that views development as irrelevant in the modern world, also benefits from the digital divide. Another gap has been opened up that requires the expertise these agencies believe they can provide. And finally, the organs of civil society are also winners, as they attempt to capture information and communication technologies for their own increasingly successful projects."
Paradoxically, the digital divide discourse does not appear to be helping those it is supposed to help.
In The 'digital divide' and the rest of the population & the digital divide: more than a technological issue I have tried to show that the digital divide discourse might even widen the existing gap further.
Culture of secrecy hinders Africa's information society covers a few interesting ways mobile telephone technology is being used in Africa. It is evident from the article that the use of mobile technology is being redefined and continually socially constructed by the social and monetary resources available.
Among the other interesting paragraphs, this one is really revealing:
"The worst thing is that it is a short step from a culture of withholding information to that of becoming information-blind. In other words, when we keep on withholding information, we end up being unable to produce information. We lose the culture of surveying, assessing, classifying – in brief, collecting as much information as possible and storing it in a standardized manner, making it available for use, not only to cater for current specific needs, but also for potential and future ones."
Along the lines of this article's argument, one can also explain why text messaging in the US lags behind Europe and Asia. Most cell/mobile phone service plans in the US come with a certain amount of 'free' minutes included. So if you have free minutes, you use them first before sending any text messages; it also does not help that the mobile devices on the US market are less 'text messaging' friendly. In contrast, in Europe you pay for each minute you talk, and you use text messaging because it is cheaper than talking; thus the social co-construction of the mobile telephony service, the technology, and its use.
States Warn File-Sharing Networks quotes attorneys general of 40 US states as saying:
"In a letter to the heads of Kazaa, Grokster, BearShare, Blubster, eDonkey2000, LimeWire and Streamcast Networks, the attorneys general write that peer-to-peer (P2P) software "has too many times been hijacked by those who use it for illegal purposes to which the vast majority of our consumers do not wish to be exposed.""
There is no doubt that P2P networks are used for the distribution of copyrighted material. However, the argument that they should be shut down because they are also used to distribute copyrighted material stands on shaky ground.
Here are some issues with the argument:
- Why stop with P2P networks and P2P software? How about the Internet itself, as the enabler of P2P activities?
- P2P networks are also used by independent artists and other activists to distribute various materials without any copyright infringement
- Nobody seems to have a problem with physical CDs, video tapes, DVDs and other carrier technologies (including roads and highways) as enablers that carry content (copyrighted or otherwise) from point A to point B.
So, the issue of how to deal with the distribution of copyrighted materials should be looked at from a different perspective. I think it is more of a social issue than a technological one. P2P technology is an innovative way to distribute content, and it would be very sad if it were destroyed because some people decide to use it in a manner contrary to the pertinent laws.
It seems as if the discourse regarding the reduction or elimination of the 'digital divide' gap has become a fashion and a trend of sorts. Forgotten in the discourse, and in the various efforts aimed at narrowing the gap between the digital haves and have-nots, is the portion of the population in any society (country, region, etc.) that will probably never get online, for a variety of reasons.
The aim of the Maltese government, as expressed in the following article (New IT strategy launched to eliminate digital division), is a genuine one, with the necessary inclusion of relevant civic organizations alongside government and corporate ones: "The Prime Minister and Minister explained that this strategy came about through a wide process of consultation following the setting up of National Council for Information Society (NISCO) which is made up of the governments, unions, political parties, members of civic society and industrial organizations and technology". Still, there is a real concern that the digital divide gap might widen even further if all efforts shift towards the 'digital realm' while attention to the 'non-digital realm' is reduced.
Considering that a portion of the population will never catch the digital train, an ever-greater emphasis on the 'digital realm' will disenfranchise a great many people. It is all well and good to want everyone on the digital train; serving the public might become more efficient. However, it should not be forgotten that many people will not catch the digital train in their lifetime, and they should not suffer because of that. Imagine going to a government office and being told you have to navigate a complex computerized menu system to obtain certain information, when you have never touched a computer in your life, or only know how to send e-mail.
While we have many blogging and other social software tools that enable the 'creation' of the collective, how do we harness the "collective intelligence" once it is 'there'/'built'? It would seem that other tools are needed to enable quick and relevant utilization of the collective intelligence. So far, blogging tools have done a great job of enabling the representation of the collective intelligence. What they lack is a function as enablers for utilizing the available collective knowledge.
It seems that the next wave of social network and collaboration tools will (or should) concentrate more on finding relevant and appropriate 'intelligence' somewhere in the collective pool. Needless to say, search engines are not well suited for this type of activity, since they concentrate primarily on topical relevance and do little to nothing about spatial, temporal, methodological, contextual, process-, and task-specific relevance.
This is a follow-up to my previous entry (A shift in scholarly attention? From commercial publishing to open access publishing) prompted by Open Access? Some Sparks Fly at ALA. (thanks to Open Access News).
In the article, IEEE's Durniak makes the following unsubstantiated statement: "Free open access runs the risk of destroying professional societies."
One could do an extensive analysis to show that the above statement is not necessarily true. However, it suffices to note that commercial publishers are only one of the actors in the scholarly publishing cycle. As such, the totality of the functions they perform could definitely be taken over by the professional societies themselves, or perhaps by a non-profit umbrella organization that handles scholarly publishing for various professional societies.
It is really unprecedented, and uncalled for, of the commercial publishers to claim that without them the entire scholarly publication process will fail and professional societies will be destroyed. It is true that commercial publishers provide value-added services. However, none of these services are outside the competency of the professional societies themselves, especially with all the open source software available. Even if professional societies had to hire IT staff to maintain the process, that would definitely be less costly than what host institutions pay to buy back the intellectual output of their own staff.
Sooner or later, commercial publishers will have to relax a bit and see how they can honestly contribute to the process of moving to open access. Their stakeholders might not be happy, but, hey, the dynamic is changing and the power base is shifting.
Can the argument for why the publishing of scholarly work should not be in the hands of commercial entities get any clearer than this? From A Quiet Revolt Puts Costly Journals on Web:
"Elsevier doesn't write a single article," said Dr. Lawrence H. Pitts, a neurosurgeon at the University of California at San Francisco and chairman of the faculty senate of the 10-campus system. "Faculty write the articles for them, faculty review the articles for them and faculty mostly edit the journals for them, and then we get to buy the journals back from a company that makes a very large profit."
It appears that the players in the process of scholarly publishing (scholars, editors, publishers, etc.) are well aware that the current commercial publishing process will not be sustainable for long. Fueled by the openness of the Internet, scholars and academics have the necessary technology and expertise to publish without the involvement of commercial entities. The money that commercial entities today take as profit could instead be used for further research and academic pursuits.
In the process of the inevitable move from commercial publishing to open access, the entire dynamic of the publishing process will undoubtedly change. But change is not bad. A lot of realignments will occur. The moment established scholars start publishing in open access publications, the tide will turn.
Or, if there is resistance, the problems addressed by a certain field or discipline might shift towards those addressed in the open access journals, due to their wider distribution and open access. The move towards open access publishing might thus even realign the types of problems addressed by a scholarly community.
An important analysis in this respect is presented by Kling and Covi. It suggests that the medium of information transfer and exchange (paper vs. electronic) might induce a shift in the scholarly discourse of a particular discipline. They argue that the highest-status scientists usually publish in well-established journals that, at the same time, usually define the scope and the problems of the field (Kling and Covi, p. 10). Scientists and scholars with a status just below the highest are then likely to publish in an e-journal (usually open access) for its speed of distribution and perhaps for the visibility of its very large readership (Kling and Covi, p. 10). If enough second-tier scientists start publishing in e-journals, sooner or later the interests and problems treated in those e-journals might shift away from those treated in the paper journals, while the e-journals gain legitimacy and a perception of good quality. This would also mean that the medium is the message (in McLuhan's sense): the medium appears to shift the scholarly discourse of a field/discipline.
Kling, R. and Covi, L. M. (1995). Electronic journals and legitimate media in the systems of scholarly communication. The Information Society, 11(4), 261-271. (Accessed at: http://www.slis.indiana.edu/TIS/articles/klingej2.html)
The public discourse surrounding e-voting is very perplexing. Like other articles, E-voting: Nightmare or nirvana? questions the security of e-voting systems and their viability for use in real elections.
"Once the province of a small group of election officials and equipment sellers, e-voting has exploded into the popular consciousness because of a spreading controversy over security and verifiability. Thanks to a concerted effort by opponents and to the missteps of voting machine vendor Diebold Election Systems, most of the news has been bad."
I have said this before in a previous entry (secure enough for consumerism, not good enough for voting?!), and here it is again: How is it that we can't trust e-voting security because voting would be done over the Internet, when the same Internet carries millions of dollars in daily consumer-to-business and business-to-business transactions? The same Internet is secure enough for commerce and can be trusted with billions of dollars. Yet it is not secure enough for voting?
Yes, there are potential problems with e-voting systems. These are the same issues that trouble all new technologies during their appropriation by users. However, to claim that these issues are worse than those that troubled, and still trouble, e-commerce systems is absurd.
"The rise of open access publishing of scientific research could jeopardise the entire academic publishing industry, according to the chief executive of Reed Elsevier, the world's largest publisher of scientific journals."
Something will be jeopardized for certain, but it isn't academic publishing; it is commercial publishing. As many open access journals and publishing venues have shown, academic publishing does not have to be commercial publishing.
Bo-Christer Björk: Open access to scientific publications - an analysis of the barriers to change?:
"One of the effects of the Internet is that the dissemination of scientific publications in a few years has migrated to electronic formats. The basic business practices between libraries and publishers for selling and buying the content, however, have not changed much. In protest against the high subscription prices of mainstream publishers, scientists have started Open Access (OA) journals and e-print repositories, which distribute scientific information freely. Despite widespread agreement among academics that OA would be the optimal distribution mode for publicly financed research results, such channels still constitute only a marginal phenomenon in the global scholarly communication system. This paper discusses, in view of the experiences of the last ten years, the many barriers hindering a rapid proliferation of Open Access. The discussion is structured according to the main OA channels; peer-reviewed journals for primary publishing, subject-specific and institutional repositories for secondary parallel publishing. It also discusses the types of barriers, which can be classified as consisting of the legal framework, the information technology infrastructure, business models, indexing services and standards, the academic reward system, marketing, and critical mass."
Note how in the passage below (from Open Source as Weapon) the argument is made that competition will soon move away from the actual code (everyone would have access to the same software code) and into its usage and integration in a particular context.
"Experts tick off compelling reasons why a vendor of closed-source software might release code: to make the product more ubiquitous, speed development, get fresh ideas from outside the company, to complement a core revenue stream, foster a new technology -- and to stymie a competitor.
In fact, giving away some free company IP can go a long way toward making someone else's IP worth beans.
Martin Fink, author of "The Business and Economics of Linux and Open Source," notes that, while all commercial software decreases in value over time, open source drastically speeds the process. The huge community of developers working together can produce a competitive open source product fast, and they'll add features for which a closed-source vendor would want to charge extra.
Finally, customers can acquire the software at no cost, even though they may pay for customization, integration and support."
"The British Broadcasting Corporation's Creative Archive, one of the most ambitious free digital content projects to date, is set to launch this fall with thousands of three-minute clips of nature programming. The effort could goad other organizations to share their professionally produced content with Web users.
The project, announced last year, will make thousands of audio and video clips available to the public for noncommercial viewing, sharing and editing. It will debut with natural-history programming, including clips that focus on plants, animals and birds."
In Prediction, Thijs van der Vossen states some ideas about how things will be in the future in terms of information and knowledge sharing.
While I agree that what Thijs writes is the desired outcome if we are moving towards a more open world, that outcome is not guaranteed. Yes, information needs to be free so it can be accessed from everywhere, by everyone, through many different devices and access methods. However, the assumption is that corporate entities will be willing to let go of the grip they have on any information that looks profitable.
So, one of the fundamental assumptions is that all sources of information and knowledge artifacts really want to share their content. In The open source Internet as a possible antidote to corporate media hegemony I argued that openness (open content and open communication), as a fundamental property of the Internet as we know it today, is perhaps the reason why Thijs's predictions look very probable. Hopefully, no authority puts restrictions on what can be said and done online.
Openness, Publication, and Scholarship is an interesting philosophical perspective attempting to frame publications and scholarship within the various concepts of openness such as "open access", "open data", "open source", "open entry", and "open discourse".
To this I would like to modify "open data" into "open content", since content has a broader scope than data, and perhaps add "open communication" as the functional link between "open access" and "open discourse".
At last, there is a realization that information and communication technologies do not necessarily help 'disadvantaged and vulnerable groups' by way of some magic. Given that the tools of economic development in most cases reflect the social structures within which they function, thus 'favoring' the people in 'power', a concentrated effort is needed to ensure that people less likely to 'magically' benefit from such advances do indeed reap the benefit.
The 'Technologies of a Digital World' conference/expo seems to be an effort in the right direction. At least it emphasizes that something other than 'magic' needs to be done.
"Technology is an enabler as well as a catalyst to ensure companies operate profitably and governments operate more efficiently in the global environment. But technology should also be the medium for people from all walks of life to harness the new opportunities offered by ICT, and act as fundamental elements for creating new skills and shaping mindsets to churn the engine of the knowledge-economy."
The Expo and Seminar, the first of its kind to be held in Brunei, carries the theme 'Technologies of a Digital World' and is centred on the development of technologies suited to disadvantaged and vulnerable groups, and of affordable technologies to facilitate people's access to ICT.
The idea that search engines (SEs) suppress controversy appears to be real. As argued in Do Web search engines suppress controversy?, the suppression is not intentional; rather, Google's bottom line means good results delivered quickly, not necessarily covering all sides of the story/issue an information seeker is trying to learn about.
I've tried to explain this sort of mediating power/role of SEs in an earlier blog entry: search engines' meaning mediation power.
In the past year or so we have seen various attempts at online voting, only to see them scrapped because they are not secure enough. Pentagon Drops Plan To Test Internet Voting is the latest report on such an initiative, stating that "The Pentagon has decided to drop a $22 million pilot plan to test Internet voting for 100,000 American military personnel and civilians living overseas after lingering security concerns, officials said yesterday."
How is it that we can't trust the security because voting would be done over the Internet, when the same Internet carries millions of dollars in daily consumer-to-business and business-to-business transactions? The same Internet is secure enough for commerce and can be trusted with billions of dollars. Yet it is not secure enough for voting?
Something is wrong … perhaps the following explains it (from the same article): "The American pullback is in direct contrast to Europe, where governments are pursuing online voting in an attempt to increase participation. The United Kingdom, France, Sweden, Switzerland, Spain, Italy, the Netherlands and Belgium have been testing Internet ballots."
From US societies back expanded free access to research, courtesy of scidev.net:
"A substantial number of the United States' leading medical and scientific societies have declared their support for free access to research under certain circumstances — including access by scientists working in low-income countries.
In a statement released this week in Washington DC, 48 not-for-profit publishers, representing more than 600,000 scientists and clinicians and more than 380 journals, pledge their support for a number of forms of free access."
"Singapore's Prime Minister Goh Chok Tong has said that ASEAN should harness the advantages of information technology to help its member countries' economies to grow."
The reliance on information and communication technologies to foster economic growth is well justified. However, the potential of these technologies should not be taken out of context. Other factors, such as social, political, policy, and environmental ones, work hand-in-hand with IT to produce positive results. Information and communication technologies are not created in isolation. Their successful use and implementation depend to a great extent on the context within which they are utilized.
Why UN's information society summit is doomed to fail provides an interesting analysis of why the UN's information society summit might fail.
Here are the two reasons it provides:
- The first is the United States' position that profit -- or even the potential for profit -- is more important than the goals of the WSIS.
- The second reason is procedural. The United Nations prefers to operate by consensus. So as long as any one member of the WSIS objects to a portion of the plan, the plan cannot move forward.
I think both of these arguments are valid. However, they might not hold over a longer period of time. If the Internet is to be one of the driving forces of economic development in third-world economies, the corporate grip on the Internet may not survive for too long. Simply said, those affected by the Internet would like to have some say in its operation. As the people affected are no longer western-centric, there will be more noise such as that heard at the WSIS.
Whether the UN is the right organization for worldwide management of the Internet, only time will tell. The WSIS attempt is perhaps just a start; other ventures will be attempted in the near future. A few things must be ensured, though: there should be no censorship on the Internet, and its economic potential should be equally available to everyone around the world. So, as it appears, the main problem might not necessarily be with the Internet. Better economies in third-world countries will give them more leverage when the next 'WSIS' comes around.
A huge industry has been created responding to the perceived social malady, the "Digital Divide". This paper examines the concepts and strategies underlying the notion of the Digital Divide and concludes that it is little more than a marketing campaign for Internet service providers. The paper goes on to present an alternative approach — that of "effective use" — drawn from community informatics theory which recognizes that the Internet is not simply a source of information, but also a fundamental tool in the new digital economy.
"HIGH-SPEED DIGITIZATION AND THE FUTURE OF LIBRARIES
A robotic scanner, custom built for Stanford University, is systematically digitizing parts of the university library's vast collection -- over eight million volumes. Resembling a giant copier, the 4DigitalBooks robot quickly and automatically scans about 1,000 pages per hour -- a complete 300-page book in 20 minutes. Stanford University Librarian Michael Keller, who oversees the project, says, "It's rigorously consistent -- the page is always flat, the image is always good, and software conversion allows you to index the text so you can search it." Rare books, however, are another matter. "We're very concerned about (them), so we haven't put any manuscripts on the robot. Instead, we use a technology based on the same cameras, (but turn) the pages by hand." In the next 10 to 20 years, Keller believes more and more information will be presented in digital form. "I suspect books will continue to be useful and important, and we'll (still) see them published. But people will find more and more of their information online, and the number of books will decrease." Stanford, for instance, is planning a science and engineering library whose goal is to have no books on the shelves. "We'll still need physical libraries," says Keller, "because people want to meet with one another. They want to work on projects collaboratively, and they also like to work in clusters and groups." (The Book & The Computer 15 Dec 2003) http://www.honco.net/os/index.html"
"To prove that open sourcing any and all information can help students swim instead of sink, the University of Maine's Still Water new media lab has produced the Pool, a collaborative online environment for creating and sharing images, music, videos, programming code and texts. "
"We are training revolutionaries -- not by indoctrinating them with dogma but by exposing them to a process in which sharing culture rather than hoarding it is the norm," said Joline Blais, a professor of new media at the University of Maine and Still Water co-director.
"It's all about imagining a society where sharing is productive rather than destructive, where cooperation becomes more powerful than competition," Blais said.
"Leaders from nearly 200 countries including 60 heads of state and government will attend the first World Summit on the Information Society (WSIS) in Geneva Saturday aimed at bridging the digital divide between the rich and poor."
"The aim of the United Nations summit is to come up with a global plan to ensure everyone's access to information and communications technologies."
Hopefully the attendees at the summit will not forget that ensuring everyone access to information and communication technologies does NOT necessarily mean a reduction in the digital divide between rich and poor nations and peoples.
If history is any indication, we should have already learned that technology alone does not solve social problems. For example, it would be beneficial to hear how information technology actually helps developing countries escape poverty. It might, if the means of production in developing countries are improved to build a self-sustainable economy based on access to information and to information technology in general.
However, considering conditions around the world at this stage, I would rather expect that activities aimed at building sustainable local economies (whether or not they involve information technology) matter more for escaping poverty. People in developing countries can have access to all the information technology they want and still fail to escape poverty unless some sort of sustainable local economy is established. (Even providing that access is questionable: to succeed with information technology, one first needs to create the economic conditions that bring access within reach of the majority of the people.)
Who Owns The Facts?
(courtesy of slashdot)
"windowpain writes "With all of the furor over the Patriot Act a truly scary bill that expands the rights of corporations at the expense of individuals was quietly introduced into congress in October. In Feist v. Rural Tel. Serv. Co. the Supreme Court ruled that a mere collection of facts can't be copyrighted. But H.R. 3261, the Database and Collections of Information Misappropriation Act neatly sidesteps the copyright question and allows treble damages to be levied against anyone who uses information that's in a database that a corporation asserts it owns. This is an issue that crosses the political spectrum. Left-leaning organizations like the American Library Association oppose the bill and so do arch-conservatives like Phyllis Schlafly, who wrote an impassioned column exposing the bill for what it is the week after it was introduced."
"This week Australian genetics pioneer Richard Jefferson was recognised by Scientific American, the prestigious international science magazine, as one of the 50 global technology leaders of 2003."
"His latest inventions could unleash a new Green Revolution, giving farmers, researchers and agriculture businesses across the world access to the potential of modern genetics."
"And he’s calling on the global biotechnology community to adopt open access genetics – freeing up the tools of modern genetics and biology from the shackles of excessive patenting."
(my emphasis in bold)
In Broadband net user numbers boom BBC reports on the growing number of broadband (i.e. high-speed) Internet connections at home.
What does it mean? Well, according to Pew Internet Project there is an apparent and substantial difference in social behavior that varies depending on whether you are connected from home via broadband or just plain dial-up connection. The summary of their findings as well as the full report can be found at: The Broadband Difference. They have also published a follow-up report.
(courtesy of Peter Suber at Open Access News)
The Cornell University Library is cancelling "several hundred" Elsevier journals and has explained the reasons why in a public letter. Excerpt: "We can no longer subscribe to so many Elsevier journals (including duplicates) that we no longer need. We must now free up some of the money spent on Elsevier journals to pay for journals published by other publishers that are more needed by our users. We have explained this to Elsevier in lengthy discussions, both through our research library consortium and then independently. We have tried in these discussions to broker an arrangement that would allow us to cancel some Elsevier titles without such a large price increase to the titles remaining --but Elsevier has been unwilling to accept any of our proposals. We are therefore planning to cancel several hundred Elsevier journals for 2004. The decisions on cancellations will be made on the basis of faculty input, as well as several years of statistical information on individual journal use....Once the cancellations are complete, we will list the titles on this site."
"SAN JOSE, Calif.--Evergreen Valley High School has been touted as the future of education in the heart of Silicon Valley, its 1,500-odd students outfitted with school-issued laptops that would create a new learning experience bridging life on and off campus.
"They treated the laptops more like their own personal computer instead of school property," said Dennis Barbata, the principal at Evergreen Valley's School of Science and Technology, which recently banned students from taking the machines home. "I'm not convinced that the laptop is the interface device at this point for a 24/7 computer access program for students."
In the same article they rightfully ask the question: "Does technology do more to improve learning than traditional teaching methods?"
"Washington, DC -- SPARC (the Scholarly Publishing and Academic Resources Coalition), an academic and research libraries initiative, today announced its partnership with the Public Library of Science (PLoS), the groundbreaking organization of scientists and physicians committed to making scientific and medical literature freely available on the public Internet. The alliance aims to broaden support for open-access publishing among researchers, funding agencies, societies, libraries, and academic institutions through cooperative educational and advocacy activities."
"The essence of the open archives approach is to enable access to Web-accessible material through interoperable repositories for metadata sharing, publishing and archiving. It arose out of the e-print community, where a growing need for a low-barrier interoperability solution to access across fairly heterogeneous repositories led to the establishment of the Open Archives Initiative (OAI). The OAI develops and promotes a low-barrier interoperability framework and associated standards, originally to enhance access to e-print archives, but now taking into account access to other digital materials. As it says in the OAI mission statement: "The Open Archives Initiative develops and promotes interoperability standards that aim to facilitate the efficient dissemination of content.""
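The harvesting side of OAI-PMH is simple enough to sketch: a harvester issues plain HTTP GET requests carrying a `verb` parameter and parses the XML response. A minimal Python sketch follows; the `http://example.org/oai` endpoint is hypothetical, and the sample assumes the standard `oai_dc` Dublin Core metadata format:

```python
import urllib.parse
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"  # Dublin Core element namespace

def build_harvest_url(base_url, metadata_prefix="oai_dc"):
    """Build an OAI-PMH ListRecords request URL for a repository."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    )
    return f"{base_url}?{query}"

def record_titles(response_xml):
    """Extract Dublin Core titles from a ListRecords response."""
    root = ET.fromstring(response_xml)
    return [el.text for el in root.iter(f"{{{DC_NS}}}title")]
```

A real harvester would also follow `resumptionToken`s to page through large result sets, which this sketch omits.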
Back in July, prompted by NewScientist.com's article E-voting system flaws 'risk election fraud', which reported that Diebold Election Systems' e-voting system contains flaws that 'risk election fraud', I said I would be more comfortable e-voting if such a system were open source, with the code open to public scrutiny.
"While critics in the United States grow more concerned each day about the insecurity of electronic voting machines, Australians designed a system two years ago that addressed and eased most of those concerns: They chose to make the software running their system completely open to public scrutiny."
"Although a private Australian company designed the system, it was based on specifications set by independent election officials, who posted the code on the Internet for all to see and evaluate. What's more, it was accomplished from concept to product in six months. It went through a trial run in a state election in 2001."
Rep. Rush Holt's bill seems a step in the right direction for the US:
"The issues of voter-verifiable receipts and secret voting systems could be resolved in the United States by a bill introduced to the House of Representatives last May by Rep. Rush Holt (D-New Jersey). The bill would force voting-machine makers nationwide to provide receipts and make the source code for voting machines open to the public. The bill has 50 co-sponsors so far, all of them Democrats."
One of the most important findings of this study is the argument that there are social reasons why people are not online, making it clear that more technology and Internet access alone will not close the multifaceted digital divide that exists around the world.
"Bridging the digital divide requires more than simply offering computers and Internet access. Technological fixes won't close the divide unless they take into account the social reasons why people aren't online," Patrick Moorhead, GCAB chairman, said in a statement.
I've tried to present similar arguments in the following entries: the digital divide: more than a technological issue, :: access to information a solution to poverty?!, :: is IT alone really a solution to poverty?, :: the seriousness of equal access to information for all, :: Discord at digital divide talks.
"Black people living in deprived areas have less access to home computers than their white neighbours, a study suggests."
The finding reported in this article is a good step toward remedying the ever-increasing digital divide at various levels and among various groups.
However, more technology alone will not resolve the problem. The article suggests that the digital divide can be remedied by "... encouraging more people to learn how to use computers."
Is encouragement the most appropriate remedy? Perhaps the investigators should look further into the underlying socio-economic issues in the deprived neighborhoods that have created the digital divide between black and white neighbors.
"The relentless drive for more intrusive technology to help improve security may result in a society that is less secure, warned Al Gore, former vice president of the U.S., speaking Tuesday at the Carnahan Conference on Security Technology in Taipei."
"The entire ideology of information technology for the last 50 years has been that more information is better, that mass producing information is better," he [Jakob Nielsen] says.
For a company in the business of managing and manipulating information, certainly more information is better. However, that says little about the quality of life, and little about the quality of the information itself.
"The fix for information pollution is not complex, but is about taking back control your computer has over you."
This is a profound philosophical statement; certainly not everyone believes that there is control we need to take back from our computers. Just how do we go about taking back that control anyway? I'm not saying it is impossible, just not easy, due to many factors, one of them being that not everyone believes there is control to be taken back. As with any solution to a potential problem, one of the most important steps is diagnosing the problem properly. In the case of information pollution, diagnosing the root of the problem in context might turn out to be the hardest task.
Public Library of Science (PLoS) has finally published its first issue, Vol 1, Issue 1. Especially interesting is the opening editorial, Why PLoS Became a Publisher, which provides the rationale for open access to scholarly and scientific literature.
"PLoS Biology, and every PLoS journal to follow, will be an open-access publication–everything we publish will immediately be freely available to anyone, anywhere, to download, print, distribute, read, and use without charge or other restrictions, as long as proper attribution of authorship is maintained. Our open-access journals will retain all of the qualities we value in scientific journals—high standards of quality and integrity, rigorous and fair peer-review, expert editorial oversight, high production standards, a distinctive identity, and independence."
"The Internet as we know it is at risk. Entrenched interests are positioning themselves to control the network's chokepoints and they are lobbying the FCC to aid and abet them. The Internet was designed to prevent government or a corporation or anyone else from controlling it. But this original vision of the Internet may soon be lost. In its place a warped view is emerging: that open networks should be replaced by closed networks, and that accessibility can be superseded by a new power to discriminate."
Scary thoughts... but very real indeed.
ESCHEWING MOONBEAMS, BERNERS-LEE STICKS TO HIS KNITTING
(ShelfLife, No. 127 (October 9 2003))
Quote: "Asked by a BBC interviewer whether it's a "stupid fear" to worry that the Internet will become a giant brain, World Wide Web creator Tim Berners-Lee replied: "Computers will become so powerful and there will be so many of them with so much storage that they will in fact be more powerful or as powerful as a brain and will be able to write a program which is a big brain. And I think philosophically you can argue about it and spiritually you can argue about it, and I think in fact that may be true that you can make something as powerful as the brain, really whether you can make the algorithms to make it work like a brain is something else. But that is a long way off and in fact that's not very meaningful for now at all. All I'm looking for now is just interoperability for data." (BBC News 25 Sep 2003)
"The development of computer software and hardware in closed-source, corporate environments limits the extent to which technologies can be used to empower the marginalized and oppressed. Various forms of resistance and counter-mobilization may appear, but these reactive efforts are often constrained by limitations that are embedded in the technologies by those in power. In the world of open source software development, actors have one more degree of freedom in the proactive shaping and modification of technologies, both in terms of design and use. Drawing on the work of philosopher of technology Andrew Feenberg, I argue that the open source model can act as a forceful lever for positive change in the discipline of software development. A glance at the somewhat vacuous hacker ethos, however, demonstrates that the technical community generally lacks a cohesive set of positive values necessary for challenging dominant interests. Instead, Feenberg’s commitment to "deep democratization" is offered as a guiding principle for incorporating more preferable values and goals into software development processes."
"Bread or Broadband? The thirteen candidate countries (CCs) for entry into the European Union in 2004 (or beyond) confront difficult choices between "Bread or Broadband" priorities. The question raised in this article is how to put Information Society (IS) policy strategies at the service of social welfare development in these countries, while optimizing their resources and economic output.
The article summarises a dozen original research studies, conducted at the European Commission’s Institute for Prospective Technology Studies (IPTS). It identifies ICT infrastructures, infostructures and capabilities in the CCs, the economic opportunities these may offer their ICT domestic industry, and the lessons from previous IS development experience in the European Union that could possibly be transferable.
The paper concludes that only those trajectories that offer a compromise in the Bread or Broadband dilemma, taking into account both welfare and growth issues, will be politically sustainable."
“It is our mission to make modern technology accessible to everybody,” Leuenberger said. “People living in developing countries can only escape poverty if they have access to information.”
Yes, technology can be an important key to democratic development. It has often been stated that technology will solve the problems of poverty and thus bring about democratic movements. While it may be true that technology has increased productivity in certain areas around the world, it is very much debatable whether it has decreased poverty in general.
If technology is to deliver democratic 'results', it must be used with that intent and for that purpose, by supporting economic development that lifts economies at the bottom.
Unfortunately, the main players bringing information technologies to developing countries are private companies that ultimately care about their bottom line (i.e. $$$); it can hardly be expected that much will be achieved in terms of equality of access to information. This sort of exercise leads nowhere unless the ITU has a long stick it can use to implement the promoted initiatives and even modestly tilt the balance of access to information.
(I’ve also elaborated on these points in these previous entries: Discord at digital divide talks, is IT alone really a solution to poverty?, access to information a solution to poverty?!, Search engine for the global poor?)
"The Open Archives Initiative and Project RoMEO announce the formation of OAI-rights. The goal of this effort is to investigate and develop means of expressing rights about metadata and resources in the OAI framework. The result will be an addition to the OAI implementation guidelines that specifies mechanisms for rights expressions within the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH)."
"The area of rights expressions is wide-open with many organizations proposing languages and mechanisms. Therefore, the OAI-rights effort will aim to be extensible, providing a general framework for expressing rights statements within OAI-PMH. These statements will target both the metadata itself and the resources described by that metadata. In the context of this broader framework, OAI-rights will use Creative Commons licenses as a motivating and deployable example."
The Massachusetts Institute of Technology is making its course materials available to the world for free download.
"One year after the launch of its pilot program, MIT on Monday night quietly published everything from class syllabuses to lecture videos for 500 courses through its OpenCourseWare initiative, an ambitious project it hopes will spark a Web-based revolution in the way universities share information."
Let's see how far (in time and space) this ‘revolution’ will reach! Maybe, if each school does not have to (re)create the course materials from scratch, the tuition will go down! :) Or maybe someone will be making more money.
Nevertheless, in terms of information and knowledge sharing, there ought to be no doubt that this is a step in the right direction. Hopefully its potential can be harnessed to benefit society in general.
"Sharp divisions over how to bridge the digital divide between rich and poor have emerged ahead of a UN summit on the issue in December."
No wonder... with representatives from the private sector present, who ultimately care about their bottom line (i.e. $$$), it can hardly be expected that much will be achieved in terms of equality of access to information. This sort of exercise leads nowhere unless the ITU has a long stick it can use to implement the promoted initiatives and even modestly tilt the balance of access to information.
"African nations have been rallying behind a proposal from Senegal to set up a new 'digital solidarity fund'"
"Many industrialised nations are wary of creating a new UN fund. Instead they favour encouraging investment by private companies and re-directing existing aid."
It appears that the issue of control and profits is the sticking point. So the question does not seem to be whether developing countries should be 'helped' with advanced information technology. See my entry the seriousness of equal access to information for all - Information Summit, where I've tried to present my concerns.
Academia Urged To Offer Library Services To Graduates in ShelfLife, No. 125 (September 25 2003):
"Today's college and university students graduate expecting, even demanding, to have continued access to the kinds of information-rich facilities they grew accustomed to and relied on during their student days. So says Clifford Lynch, executive director of the Coalition for Networked Information (CNI), who argues that more must be done to accommodate these expectations. Lynch notes that the transition from an information service within higher education to one broadly available to the public is not always simple or quick. For example, there was a gap of some years between when college and university graduates first started creating demand for the Internet and when the commercial market place was prepared to service this demand, particularly at reasonable prices. Currently the demand for information services focuses on content rather than computation and communication, creating a market for the licensed, proprietary digital content that schools do not own but pay licensing fees for under contract with the publishers and other service providers who hold the rights to the content. Because many suppliers are not set up to license to individuals or want to charge absurd prices, libraries, both public and academic, represent a potential resource to serve both their graduates and the public at large. Lynch suggests that higher education institutions and their faculty have an obligation to put on their agenda the issue of making their information services available beyond their academies' walls. (Educause Review Sep/Oct 2003) http://www.educause.edu/ir/library/pdf/erm0356.pdf"
"All of us have suffered the consequences of poor-quality information. For most of us, most of the time, the impact has minor significance and is of short duration. Perhaps we missed a bus or flight connection as a result of using an out-of-date timetable, or we lost an insurance claim because we failed to note the change in exemptions to the policy when we last renewed. As frustrating or painful as these examples may be, they are rarely fatal. However, in a small percentage of cases, poor quality information has direct, devastating consequences. For example, many of the arguments concerning personal privacy are based on the knowledge that an adverse comment on a person's reputation perpetuates itself, even after a formal retraction is published or a libel case is won. Some sorts of information are more "sticky" than others. Just as the garden weeds are more robust than the desired plants, bad information rears its ugly head more virulently than good information."
As I was trying to identify a few queries (for the class where I assist the professor as a TA) whose returned URLs would have different relevance depending on the user's needs and interests, I searched (in Google) for the word 'syntax' because of its multiple meanings, especially as it relates to natural language and computer languages. The idea was to show that the returned results have different relevance depending on whether the search was prompted by an interest in natural language or in computer languages.
The results were really surprising! The first 40 or so results were almost exclusively about the syntax of computer languages or some other system syntax. The syntax of natural language was absent altogether!
Should we be concerned about this? I think so. It is simply untrue that the word 'syntax' (as an example) relates only to computers and systems. How would middle school or elementary school children react to these results when searching for English language syntax?
I've taken the word 'syntax' as an example. There are probably many other words and phrases that search engines provide biased results for, intentionally or not.
Has the word 'syntax' lost its meaning in relation to natural language? At least that is what a Google search might suggest to those who rely on searching the web to learn about what they don't know.
In this scenario, Google's search results seem to be mediating the meaning of the word 'syntax', and of many other words and phrases. It would be interesting to understand why Google's results are biased in favor of computer- and systems-related terminology when there are tons of natural language syntax resources on the web.
Should we be concerned over search engines' meaning mediation power about things that affect us in our daily or professional lives?
In An Open-Source Search Engine Takes Shape there is an assumed relationship between open source, open ranking, and fairness of returned results.
Currently, all existing search engines have proprietary ranking formulas, and some search engines determine which sites to index on the basis of paid rankings. Cutting said that, in contrast, Nutch has nothing to hide and has no motive to provide biased search results.
"Open source is essential for transparency," he said. "Experts need to be able to validate that it operates correctly and fairly. Only open source permits this." If only a few Web search engines exist, he said, "I don't think you can trust them not to be biased."
I think this relationship is sound. Still, how does one test and evaluate that an open source search engine will indeed result in 'open ranking' algorithms and thus lead to fairness?
The next issue is the scope and meaning of fairness in the context of search engines. Should fairness be understood as proportional coverage (returned results vs. the total number of searched documents), or as equal coverage of queries even though some topics of interest may be less represented on the Internet? In addition, considering that no single search engine can cover and index the entire webspace, what would be the criteria for including a domain or URL in the index?
I believe that an open source search engine might do better on fairness, but there remain a lot of issues to be dealt with as important factors tilting 'fairness' one way or another.
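To make the auditability argument concrete, here is a minimal sketch of a fully transparent ranker. This is my own illustration using plain TF-IDF, not Nutch's actual (Lucene-based) scoring formula; the point is only that when every factor in the score is inspectable, anyone can verify why one document outranked another.

```python
import math
from collections import Counter

def tfidf_scores(query_terms, documents):
    """Score each document against the query with plain TF-IDF.
    Every factor (term frequency, document frequency) is visible,
    so the ranking can be audited rather than taken on faith."""
    n = len(documents)
    tokenized = [doc.lower().split() for doc in documents]
    # document frequency: in how many documents each query term appears
    df = {t: sum(1 for toks in tokenized if t in toks) for t in query_terms}
    scores = []
    for toks in tokenized:
        tf = Counter(toks)  # term frequency within this document
        # smoothed inverse document frequency avoids division by zero
        scores.append(sum(tf[t] * math.log((n + 1) / (df[t] + 1))
                          for t in query_terms))
    return scores
```

For the query `['syntax']` over a toy collection, a document that never mentions the term scores zero, and a document mentioning it twice outscores one mentioning it once; both facts are directly traceable to the formula above.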
"At the opening of the third preparatory meeting for the summit in Geneva, Leuenberger set out his recommendations before more than 1,900 representatives from 143 nations, the private sector and non-governmental organisations. Leuenberger added that the main bone of contention was finding ways to finance the summit initiatives and he urged the participating nations to present more concrete ideas by September 26, the last day of the prep talks."
"The three-day summit, which kicks off in Geneva on December 10, hopes to develop an action plan to provide equal access to information for all people around the world."
The initiatives for equal access to information for all people around the world are to be admired, at least for recognizing the importance of access to information in today's information society (or, better said, a society relying so heavily on information exchange).
However, with representatives from the private sector present, who ultimately care about their bottom line (i.e. $$$), it can hardly be expected that much will be achieved in terms of equality of access to information. This sort of exercise leads nowhere unless the ITU has a long stick it can use to implement the promoted initiatives and even modestly tilt the balance of access to information.
What usually happens at such meetings is that the private sector, which controls the means of access as well as the information itself, is unwilling to give up some of its power. So what ends up happening is that the current private-sector players join forces with local private-sector players around the world, as if that amounted to equal access. The private sector is interested in the bottom line whether it operates in developed or developing countries. So instead of equal access to information for all, the current private-sector players extend their control of access to information even further, paradoxically via the very vehicles (such as this summit) that were supposed to enable equal access.
What is a possible solution? Perhaps the state representatives to the Information Summit need to change their policies regarding access technologies and access to information. These types of summits are good, but ultimately the main responsibility resides with the states themselves, with NGOs playing an important role in pushing their governments to enact 'fair' policies on access technologies and access to information.
In What's a good learning culture? George presents a very informative and interesting personal experience about satisfying information seeking needs.
Apart from the fact that "information need" seems to be used interchangeably with "need for knowledge" (I'm of the opinion that information does not equal knowledge, so the processes for satisfying information needs would differ from those for satisfying knowledge needs), I agree with George that informal means of seeking information have indeed become part of our lives.
In what George has written, a few parameters emerge: structured vs. unstructured content, structured vs. unstructured communication (for content delivery), and formal vs. informal contexts.
Depending on the particular information need at hand, some combination of these parameters is applied in the process of information seeking. If we try to identify the tools that help us carry out the information-seeking process, a distinction becomes apparent. For example, e-mail is not structured content. One-to-one e-mail communication does not appear structured, and yet there might be an underlying communication structure (not necessarily apparent) because of the common background between the participants. On the other side, many-to-many communication (e.g. discussion lists) may present a semi-structured communication process and a semi-formal context, depending on how the discussion is run (moderated, semi-moderated, etc.).
"12 September – Information technology should be used to improve the quality of life in developing countries, thus helping to achieve the ambitious goals set by the United Nations Millennium Summit of 2000, Secretary-General Kofi Annan said today."
"Noting that the World Summit on the Information Society is just three months away, he added: 'I hope you will all do your utmost to make it a success, by using it to spread the word about initiatives that make creative use of technology to improve the quality of life in developing countries. By so doing, you will enable others to benefit from your ideas, and to replicate them easily.'"
Lets just hope that the participants at the World Summit on the Information Society do not assume that the very presence and utilization of IT in developing countries will somehow automagically reduce poverty and help the poor.
It has been often stated that technology will solve the problems of poverty. While it might be true that technology has increased productivity in certain areas around the world, it is perhaps very much debatable whether it has decreased poverty in general.
If history is any indication, we should have already learned that technology alone does not solve social problems. For example, it would be beneficial to hear how information technology helps developing countries escape poverty. It might, if the means of production in the developing countries are improved to build a self-sustaining economy based on access to information and information technology in general.
However, considering the conditions around the world at this stage, I would rather expect that activities related to building sustainable local economies (whether or not they are related to information technology) are more important in escaping poverty. People in the developing countries can have access to all the information technology they want (even this is questionable, because to achieve success with information technology one first needs to create the economic conditions necessary to bring access to information technology to the majority of the people) and still might not be able to escape poverty unless some sort of sustainable local economy is established.
"A collaborative Digital Library is a user-centered system. In addition to the traditional purpose of providing resource discovery services, the system might also provide specialized services for some classes of users, ranging from basic alerting and selective dissemination services to complex, virtual community working spaces. In this sense the Digital Library represents a special workspace for a particular community, not only for search and access but also for the process, workflow management, information exchange, and distributed work group communications. But most digital library models are based on non-digital environments. As a result, the perceptions of users and the roles they play are biased by traditional views, which might not be automatically transferable to the digital world. Nor are they appropriate for some new emerging environments. New models are challenging traditional approaches. In many cases they redefine the roles of actors, and even introduce new roles that previously did not exist or were not performed by the same type of actor. With no means of formal expression, it is difficult to understand objectively the key actor/role issues that arise in isolated Digital Library cases, or to perform comparative analysis between different cases. This directly affects how the Technical Problem Areas identified by the June 2001 DELOS/NSF Network of Excellence brainstorming report will be addressed. The report states that the highest-level component of a Digital Library system is related to the system's usage. By understanding the various actors, roles, and relationships, digital libraries will improve their ability to enable optimal user experiences, provide support to actors in their use of Digital Library services, and ultimately ensure that the information is delivered or accessed using the most effective means possible. (Report, DELOS/NSF Working Group, 13 June 2003)"
"Greenstone is a suite of software for building and distributing digital library collections. It provides a new way of organizing information and publishing it on the Internet or on CD-ROM. Greenstone is produced by the New Zealand Digital Library Project at the University of Waikato, and developed and distributed in cooperation with UNESCO and the Human Info NGO. It is open-source, multilingual software, issued under the terms of the GNU General Public License"
"DSpace is a groundbreaking digital institutional repository designed to capture, store, index, preserve, and redistribute the intellectual output of a university’s research faculty in digital formats."
"Developed jointly by MIT Libraries and Hewlett-Packard (HP), DSpace is now freely available to research institutions worldwide as an open source system that can be customized and extended. DSpace is designed for ease-of-use, with a web-based user interface that can be customized for institutions and individual departments."
"August 25, 2003 — Public Printer, Bruce R. James, and Archivist of the United States, John W. Carlin announced an agreement that will enable the Government Printing Office (GPO) and the National Archives and Records Administration (NARA) to ensure free and permanent access to more than 250,000 federal government titles available through GPO Access (http://www.gpoaccess.gov)."
"A more recent study carried out by the American Association of Law Libraries, “State by State Report on Permanent Public Access to Electronic Government Information,” defined permanent public access “as the process by which applicable government information is preserved for current continuous and future public access.”"
Courtesy of Open Access News:
"Catherine Zandonella, Economics of open access, TheScientist, August 22, 2003. The good news: she covers the controversy in detail, moving well past the cliches and misunderstandings common just a few months ago. The bad news: except for one line on PubMed Central, she ignores the economics of open-access archives. (PS: For the record, she also misquotes me. I said that even if an open-access journal publisher went out of business or were bought by a commercial publisher, the back runs of its open access journals would remain openly accessible, not that they would remain in the "public domain".)"
MIT's OpenCourseWare project is yet another manifestation of the philosophy of 'openness'.
"According to the web page, all major search engines have proprietary ranking formulae, and some other engines index sites depending on payment. The developers claim that Nutch will not use such techniques, but they admit there's a considerable challenge ahead."
For more info visit the Nutch website.
"The Open Archives Initiative develops and promotes interoperability standards that aim to facilitate the efficient dissemination of content. The Open Archives Initiative has its roots in an effort to enhance access to e-print archives as a means of increasing the availability of scholarly communication. Continued support of this work remains a cornerstone of the Open Archives program."
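The interoperability the OAI describes is realized concretely in its metadata harvesting protocol (OAI-PMH), where a repository exposes records over plain HTTP and XML. As a minimal sketch (the repository URL is a placeholder, and the sample response is a heavily truncated illustration, not output from any real archive), harvesting can be as simple as:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def listrecords_url(base_url, metadata_prefix="oai_dc"):
    """Build an OAI-PMH ListRecords request URL for a repository."""
    return base_url + "?" + urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})

# A truncated OAI-PMH response, shaped like what a repository returns.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>On Open Access</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def extract_titles(xml_text):
    """Pull every Dublin Core title out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    dc_title = "{http://purl.org/dc/elements/1.1/}title"
    return [el.text for el in root.iter(dc_title)]

print(listrecords_url("http://example.org/oai"))
print(extract_titles(SAMPLE))
```

The point of the protocol is exactly this simplicity: any service can harvest metadata from any compliant archive with a handful of HTTP requests, which is what makes e-print archives interoperable.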
"A shift to an open-access model of publishing would clearly benefit science, but who should pay?"
Well, if the research is funded by taxpayers' money (federally funded research), it would be appropriate for the end user to have free access to such scientific information. This still calls for an organizing structure to maintain and disseminate the research in the form of journals and other publications.
“The PLoS plan is simple in concept: Instead of having readers pay for scientific results through subscriptions or other charges, costs would be borne by the scientists who are having their work published -- or, practically speaking, by the government agencies or other groups that funded the scientists -- through upfront charges of about $1,500 an article.”
“The shift is not as radical as it sounds, the library's founders argue. That is because government agencies and other science funders are already paying for a huge share of the world's journal subscriptions through "indirect cost" grants to university libraries, which are the biggest subscribers.”
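The economics of the shift can be put in toy numbers. In the sketch below, only the ~$1,500 per-article fee comes from the quoted PLoS plan; the journal size, subscriber count, and subscription price are invented purely for illustration:

```python
def open_access_cost(articles_per_year, fee_per_article=1500):
    """Total annual cost under the author-pays model
    (the ~$1,500 fee is the figure quoted for the PLoS plan)."""
    return articles_per_year * fee_per_article

def subscription_cost(num_subscribing_libraries, subscription_price):
    """Total annual cost paid collectively under the subscription model."""
    return num_subscribing_libraries * subscription_price

# Hypothetical journal: 200 articles/year; 500 subscribing libraries
# each paying a hypothetical $1,200/year.
print(open_access_cost(200))         # upfront author-side funding
print(subscription_cost(500, 1200))  # what subscribers pay in total
```

Under these made-up numbers the money already flowing through subscriptions would more than cover the author-side fees, which is the founders' point: the funds largely exist, they would just enter the system at a different point.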
In response to George's entry Open Source as a Social Movement, I would like to add that open source should be looked at beyond the software space. Open source software is just one manifestation of the open source philosophy, and open source as a social movement is yet another manifestation of it--in a way more abstract than open source software, given the practical results, its products, as explained in Open Source as a Social Movement.
The 'source' in open source can mean different things to different people and contexts, depending on the level of abstraction and/or pragmatics:
- to software development, it is the code
- to the publishing function, it is the content, hence 'open content'
- to the access function, it is the process of communication, hence 'open access'
Independently of the various manifestations of open source, there appear to be two important factors in trying to understand and elaborate them: open content and open communication, aided by the concept of translation. I have elaborated many of these items in the corresponding entries [follow the links], as well as in the following categories: Open Content and Open Communication, The Open Source Philosophy, and Actor-Network theory & methodology.
From a more social perspective, in the open source Internet as a possible antidote to corporate media hegemony it is argued that the open source Internet, as a result of open source movement, manifests itself as a possible antidote to the corporate media hegemony, not only in the US but also throughout the world.
It has often been stated that technology will solve the problems of poverty. While it might be true that technology has increased productivity in certain areas around the world, it is very much debatable whether it has decreased poverty in general. That is why I read with skepticism the following statement by Switzerland's Communications Minister Moritz Leuenberger, in Switzerland sees technology as key to democracy, speaking at a conference on e-Government:
“It is our mission to make modern technology accessible to everybody,” Leuenberger said. “People living in developing countries can only escape poverty if they have access to information.”
If history is any indication, we should have already learned that technology alone does not solve social problems. For example, it would be beneficial to hear how access to information helps developing countries escape poverty. It might, if the means of production in the developing countries are improved to build a self-sustaining economy based on access to information and information technology in general.
However, considering the conditions around the world at this stage, I would rather expect that activities related to building sustainable local economies (whether or not they are related to information access) are more important in escaping poverty. People in the developing countries can have access to all the information they want (even this is questionable, because to achieve such access one first needs to create the economic conditions necessary to bring access to information to the majority of the people) and still might not be able to escape poverty unless some sort of sustainable local economy is established.
As an addition to my previous entry regarding Framing the Issue - Open Access by ARL, it is informative to note that the Public Library of Science (PLoS) has emerged as a practical attempt to establish such open access scientific/research publication. In A Fight for Free Access To Medical Research it is written:
"Why is it, a growing number of people are asking, that anyone can download medical nonsense from the Web for free, but citizens must pay to see the results of carefully conducted biomedical research that was financed by their taxes?"
Here is the role of the PLoS:
"The Public Library of Science aims to change that. The organization, founded by a Nobel Prize-winning biologist and two colleagues, is plotting the overthrow of the system by which scientific results are made known to the world -- a $9 billion publishing juggernaut with subscription charges that range into thousands of dollars per year."
and the benefit of open access:
"For scientists, the benefits would extend well beyond being able to read scientific papers for free. Unlike their ink-on-paper counterparts, scientific papers that are maintained in open electronic databases can have their data tables downloaded, massaged and interlinked with databases from other papers, allowing scientists to compare and build more easily on one another's findings."
Do we need any more arguments about why taxpayer funded research publications should be accessible for free? Yes, we could go on and on trying to explicate the benefit of free and open access to scientific information, as many have done. However, the above argument is simple and convincing. :) Perhaps not to the commercial publishing enterprises.
The following definition of open access to scholarly and scientific information is provided:
"As used by ARL, open access refers to works that are created with no expectation of direct monetary return and made available at no cost to the reader on the public Internet for purposes of education and research. The Budapest Open Access Initiative stated that open access would permit users to read, download, copy, distribute, print, search, or link to the full texts of works, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the Internet itself."
The argument is that any government funded research (and its corresponding publications) should be free to be accessed by anyone. This is rather a specific proposition related to government funded research. How about open access to all scholarly publications? What factors need to be in place to make this happen? For pros and cons argument please see open access to scientific information.
DIGITAL TECHNOLOGY: DOES IT PAY?
Economic Factors of Digital Libraries
"The literature is full of articles about digital projects, new technologies and methods, research, development and user studies, but the economic aspects of managing digital content and establishing digital libraries are woefully under-represented. In this issue of the Journal of Digital Information (JODI) dedicated to the theme of economics, the editors grapple with the choices made by individuals, institutions and communities as they work to balance the desire to go digital with the reality of scarce resources. There are several components to be considered in cost-evaluating digital libraries. In addition to the immediate start-up costs of either creating or purchasing digital content, institutions have to consider the expenses associated with providing patrons with access to that content, as well as the implicit costs of preserving, managing and maintaining digital resources for the long term. One problem is that, instead of replacing print content, electronic journals are often treated as a value-added service, meaning that the library budget appears to be shrinking for the same amount of information resource. (Journal of Digital Information 9 Jun 2003)"
"This paper proposes the creation of an Augmented Social Network (ASN) that would build identity and trust into the architecture of the Internet, in the public interest, in order to facilitate introductions between people who share affinities or complementary capabilities across social networks. The ASN has three main objectives: 1) To create an Internet-wide system that enables more efficient and effective knowledge sharing between people across institutional, geographic, and social boundaries; 2) To establish a form of persistent online identity that supports the public commons and the values of civil society; and, 3) To enhance the ability of citizens to form relationships and self-organize around shared interests in communities of practice in order to better engage in the process of democratic governance. In effect, the ASN proposes a form of "online citizenship" for the Information Age."
Certainly an interesting concept. Perhaps this is one step towards the publishing of research material free from commercial publishers.
In "In DSpace, Ideas Are Forever" the NYT reports on institutional repositories (i.e. digital library repositories) and publishing practice.
"The Journal Backlash: Institutional repositories are novel in that much of their content sidesteps academic publishers, which have come under attack from the so-called open-access movement. Some scholars complain that journals delay publication of research and limit the audience because of their soaring costs."
"Out of frustration with journals' limitations, some scientists have started their own archives."
Certainly there seems to be a momentum, rightfully so, against the bureaucratic delays in publishing research articles by publishers of journals and other research periodicals. It appears that the open access movement might be restructuring the publishing of research material in a fundamental way.
However, before any major change happens, the issue of authority will have to change fundamentally in researchers' perceptions. Whatever authority lies within the peer-review process of a particular journal will perhaps have to shift to individual universities or other not-for-profit institutions.
Courtesy of Information Literacy Weblog:
"For those interested in information society issues, an interesting website is World-Information.Org This is "a collaborative effort of organizations and individuals who are directly concerned with issues of participatory involvement in Information and Communication Technologies, and the Internet as we know it today." It involves artists, scientists and others, and encourages a creative and critical approach to the internet and digital media. They organise conferences and exhibitions (with some online material), and their Read me section includes some interesting material (e.g. on "disinformation", the role of government intelligence etc.)"
"The Information Access Alliance believes that a new standard of antitrust review should be adopted by state and federal antitrust enforcement agencies in examining merger transactions in the serials publishing industry. When reviewing proposed mergers, antitrust authorities should consider the decision-making process used by libraries – the primary customers of STM and legal serial publications – to make purchasing decisions. Only then will these mergers be subjected to the degree of scrutiny they deserve and adequate access be preserved."
A noble and very practical effort.... Let's just hope that the 'right' ears are listening and the powerful publishing corporations do not block this effort. See my arguments in open access to scientific information, a response to the article Free Public Access to Science—Will It Happen? (July 7, 2003).
(courtesy of ShelfLife, No. 116 (July 24 2003))
"Libraries are collaborative by nature, sharing expertise, staff and ideas. Shared cataloguing is a good example: a cataloguer in one library creates a record about a book for use in a central database rather than just his own system, and everyone else who contributes to that database can download that record into their local systems rather than re-doing it themselves.
Now librarians are talking about extending that collaboration and "deep sharing" digital content by creating a Distributed Online Digital Library. The DODL would depart from the status quo in terms of function, service, reuse of content and library interdependency. First, it would allow a common interface for distributed collections, rather than the widely divergent "looks" of today's linked collections. Second, and more radically, it would allow both librarians and end users to download digital master files as malleable objects for local recombinations. This means they could be enriched with content from librarians or teachers, specially crafted for particular audiences, and unified in appearance and function. A user could download, combine, search, annotate and wrap the results in a seamless digital library mix for others to experience. The services such deep sharing could provide are staggering, and the economics are just as attractive. Imagine 30 libraries coordinating to digitize their collections. Each funds individual parts of the project, but all equally share in the sum of their efforts. So for the cost of building one digital object and depositing it in the DODL, each library would gain 30 downloadable objects. As participation becomes more widespread, the equation becomes even more compelling. (Educause Review Jul/Aug 2003) http://www.educause.edu/ir/library/pdf/erm0348.pdf"
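The cost-sharing arithmetic in the Educause piece can be made concrete. In this toy calculation, the 30-library figure comes from the quoted example, while the per-library collection size and per-object cost are invented for illustration:

```python
def dodl_payoff(num_libraries, objects_per_library, cost_per_object):
    """Each library pays only for digitizing its own objects, but every
    participant gains access to the pooled collection."""
    cost_each = objects_per_library * cost_per_object          # own outlay
    objects_gained_each = num_libraries * objects_per_library  # pooled gain
    return cost_each, objects_gained_each

# 30 cooperating libraries (from the quote); 1,000 objects each at a
# hypothetical $20 per object.
cost, gained = dodl_payoff(num_libraries=30,
                           objects_per_library=1000,
                           cost_per_object=20)
print(cost)    # what each library spends on its own digitization
print(gained)  # objects every library can now draw on
```

Each library's $20,000 outlay buys it access to 30,000 objects, which is the "one deposited, thirty gained" economics the article describes, and the ratio only improves as participation grows.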
Open Access News is an excellent up-to-date blog dedicated to:
"Putting peer-reviewed scientific and scholarly literature on the internet. Making it available free of charge and free of licensing restrictions. Removing the barriers to serious research. "
(found this link via ResourceShelf)
MIT DEVELOPING SEARCH ENGINE FOR GLOBAL POOR
"Researchers at the Massachusetts Institute of Technology (MIT) argue that existing Web technologies cater to "Western" users, who are "cash-rich but time-poor." Users in poor countries, they say, where phone lines can be hard to come by and many Internet connections are extremely slow, are in a very different boat: little money but lots of time. To address this gap, researchers are developing a search engine that sends requests by e-mail to MIT, where computers perform searches and return e-mail lists of filtered results the next day. The premise of the system, according to MIT's Saman Amarasinghe, is that "developing countries are willing to pay in time for knowledge." Because those who could benefit from the search engine have only very slow Internet connections, the software is being distributed on CDs to users in developing countries."
A novel approach indeed. If this idea proves successful, hopefully it does NOT get appropriated as THE solution. Some may find it 'unnecessary' to upgrade their facilities because 'they have a solution'. In the process, NGOs and other foreign non-profit organizations might be tempted to reduce the funding needed to improve the information infrastructure that would make the MIT software/process obsolete.
The bottom line: hopefully the 'patch' is not seen as the proper cure.
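The store-and-forward idea behind the MIT system can be sketched in a few lines. To be clear, this is not MIT's actual implementation: the index, the queries, and the reply format below are all invented for illustration. The essential trade is visible, though: the query costs almost no bandwidth, and the heavy searching happens on the server while the user is offline.

```python
# A toy store-and-forward search: a server receives a query by e-mail,
# searches a local index while the user is offline, and mails back a
# compact plain-text list of results the next day.
INDEX = {
    "malaria treatment": ["WHO malaria fact sheet",
                          "CDC treatment guidelines"],
    "crop irrigation":   ["FAO irrigation manual"],
}

def handle_query_email(body):
    """Extract the query line from an e-mail body and return a reply
    small enough to travel over a slow dial-up link."""
    query = body.strip().splitlines()[0].lower()
    hits = INDEX.get(query, [])
    if not hits:
        return "No results for: " + query
    return "\n".join(f"{i+1}. {title}" for i, title in enumerate(hits))

print(handle_query_email("Malaria treatment"))
```

The user pays in latency (a day per round trip) instead of bandwidth, which is exactly the "cash-poor but time-rich" trade the MIT researchers describe.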
"The grid is widely regarded as the next stage for the Internet after the World Wide Web. The Web is the Internet's multimedia retrieval system, providing access to text, images, music and video. The promise of the grid is to add a problem-solving system."
"Our belief was that open source was the best way to maximize adoption," he said. "Globus is an infrastructure technology, and it is only going to be successful if everyone uses it. And if you're doing something that is primarily funded by the government, sharing the software seemed the most appropriate thing to do."
Apparently, the difference between grid computing and distributed computing is in the ability to provide for 'collective' problem solving.
"The Ninth Circuit Court of Appeals ruled last Tuesday that Web loggers, website operators and e-mail list editors can't be held responsible for libel for information they republish, extending crucial First Amendment protections to do-it-yourself online publishers.
Online free speech advocates praised the decision as a victory. The ruling effectively differentiates conventional news media, which can be sued relatively easily for libel, from certain forms of online communication such as moderated e-mail lists. One implication is that DIY publishers like bloggers cannot be sued as easily."
I guess AOL is settling for a very poor choice of words by calling “AOL Journals” what everyone else calls ‘blogs’ and ‘weblogs’. While AOL might not be helping the ‘blogging’ discourse, its choice of words will not make the phenomenon any less of a phenomenon.
It appears, though, that AOL is trying to appropriate part of the “AOL Journals” ecosystem (what would AOL call the new ecosystem if not the ‘blogsphere’?). Why would someone contribute content that AOL might use for further profit? I would like to believe that AOL’s move is not initiated for profit purposes, but then what is the corporate incentive?
One can also argue that AOL’s choice of words is actually counterproductive, because it seems to remove from bloggers their most powerful incentive: the feeling that their individual blog is their own and not AOL’s.
"The Democracy in Cyberspace Initiative of the Information Society Project (ISP) at Yale Law School wants to promote democracy by developing best practices technologies and models to strengthen democracy both on-line and off. In particular, we want to catalyze the development of technologies and processes that move beyond the "thin" 'patron-client' model of government where government is a procurer of goods and purveyor of services, to focus on participatory and deliberative forms of strong democratic life. We are interested in realizing technology's potential to improve civic life and help citizens take an active and informed role in their own governance."
In The Network Is The Computer John Hiler presents an analogy between ants and their colonies and the blogs and blogsphere. An interesting analogy.
How does one go about analyzing this analogy further and perhaps providing explication about the topology called 'blogsphere'? What should the properties of the blogs and the way they are connected amongst themselves be to construct a blogsphere?
Perhaps we should be talking about a multitude of blogspheres, categorized based on topical, temporal, spatial, methodological, contextual, situational, or cognitive relevance.
In how blogs effect each other I've suggested using actor-network theory and its methodology as the appropriate framework to study the way blogs (the actual actors) are interconnected into a network topology (the blogsphere).
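One way to make the topology question concrete is to treat blogs as nodes and their links as edges, and then read "blogspheres" off as the connected clusters of the resulting graph. The blogs and links below are hypothetical, and clustering by ignoring link direction is just one of many possible definitions:

```python
from collections import defaultdict

# Hypothetical blogs and who links to whom.
links = [("alice", "bob"), ("bob", "carol"), ("dave", "eve")]

def blogspheres(edges):
    """Group blogs into clusters (ignoring link direction) via a
    simple union-find over the link graph."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)  # merge the two clusters

    clusters = defaultdict(set)
    for node in list(parent):
        clusters[find(node)].add(node)
    return sorted(sorted(c) for c in clusters.values())

print(blogspheres(links))
```

On this toy data the five blogs fall into two clusters, i.e. two disconnected blogspheres. Richer definitions (topical, temporal, and so on, as suggested above) would weight or filter the edges before clustering, but the graph view stays the same.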
"For years, community activists and politicians around the country have talked about the need to help people who have been left behind in the digital revolution because of poverty, disabilities or fear of new technology. Without computer literacy, the argument goes, disadvantaged groups will become more excluded in the high-tech economy. Yet many efforts have meant little more than making it possible for people to surf the Web from a library terminal."
"It [WinstonNet] will allow any resident with a library card to have an e-mail account; transact business with the city, like payment of parking tickets; and store homework or other documents on a central server so they can be easily retrieved from any site on the network."
A well-intentioned project attempting to narrow the digital divide. However, as in many other similar projects, the most important aspect is not addressed or thought through: just how does the technology by itself fit within the relevant social structures and fix the underlying social problems that have resulted in the digital divide?
Don't get me wrong: technology can be a great tool, but it must be well planned to produce positive outcomes for the intended groups. Otherwise, it might just reinforce the existing social structures without remedying the digital divide.
From The Blogging Revolution:
"Think about it for a minute. Why not build an online presence with your daily musings and then sell your first book through print-on-demand technology direct from your Web site? Why should established writers go to newspapers and magazines to get an essay published, when they can simply write it themselves, convert it into a .pdf file, and charge a few bucks per download? Just as magazine and newspaper editors are slinking off into the sunset, so too might all the agents and editors and publishers in the book market.
This, at least, is the idea: a publishing revolution more profound than anything since the printing press. Blogger could be to words what Napster was to music - except this time, it'll really work. Check back in a couple of years to see whether this is yet another concept that online reality has had the temerity to destroy."
Indeed, established writers do not have to go to newspapers, magazines, and book publishers for wide distribution of their writings. However, the fact that they are established is the key point. How does one become an established writer through online presence alone?
An online presence does not have the credibility and the authority of the printing press, at least not yet. Sooner or later, such credibility and authority will probably come directly via the web. Linking and ranking is perhaps one way. Some sort of online-publisher certification might appear here and there. Nevertheless, if the online work itself is to be the basis for authority and credibility, blogging is leading the way.
Additionally, wide distribution is usually one of the key reasons why writers prefer one publishing venue over another. The point is to be read. So, unless an online presence attracts a massive audience, how can a writer be widely distributed? A good comparison would be to the innovation of the printing press and the rise of the book as an agency for social change.
Perhaps it is neither a revolution nor an evolution... it is both at the same time, as previous work, independently of its medium of distribution, almost certainly affects the future works of an author. First, however, instances of purely online credibility and authority have to happen... and if that has not already happened, it probably will soon. Second, a critical mass is needed both for gaining credibility and authority and for having the readership. This will take a mixture of online and offline publishing for some time.
Would this make blogging to online publishing what the printing press was to the book?
In FCC official: No need to regulate ISPs, CNET reports an FCC official as saying:
"There is no need for the Federal Communications Commission to adopt rules to address concerns that high-speed Internet service providers will favor some Web sites over others, an agency official said on Friday."
Is the FCC sleeping or something? It is very obvious that internet service providers (as access agents) care only about their bottom line (profits!) and do not want potential profits to surf away to their competitors or to other content providers with whom they do not have mutual agreements.
The main concern, however, is that if the ISPs are not required to provide truly open access (i.e. roaming the internet space without restriction) to their customers, access to the websites of not-for-profit and other activist organizations would suffer. The ISPs could also use their power to restrict access to websites critical of their business practices.
Further, discrimination in content access might also negatively affect innovation:
"The threat of discrimination against content undermines investment and chills innovation," said Mark Cooper, research director at Consumer Federation of America. " We cannot risk having the monopolist destroy the innovative environment of the Internet. It's just too big of a risk to the public interest."
In enunciating the third law (“EVERY BOOK ITS READER”) Ranganathan states that this law “would urge that an appropriate reader should be found for every book” (p. 258). The implication would be to build a digital open access system where users can remotely browse and access all digital information objects in a digital library.
From another point of view, EVERY DIGITAL INFORMATION OBJECT ITS READER/USER could mean that there must be a purpose behind a digital library’s acquisition (or licensing) of a particular digital content. If a reader/user for a particular digital content is not always in sight, what is the point of a digital library ‘carrying’ it? But then, here is the challenge: who determines what digital collections a digital library should ‘carry’ when its scope and user base are potentially more versatile due to their global nature?
Ranganathan, S. R. (1957). The five laws of library science. London: Blunt and Sons, Ltd. pp. 11-31, 80-87, 258-263, 287-291, 326-329
The Open Access page at the Center for Digital Democracy (CDD) presents a critical viewpoint about the need and the necessity of open access in the midst of the corporate attempt to control all major access channels.
Besides the need for open access, there is a need for open content and open communication if there is to be a viable and substantial public discourse on digital democracy.
In some of my previous entries I’ve suggested that actor-network theory and methodology can be used as a mode of explanation in elaborating the interplay between social structures and information (and IT in general). The factor ‘openness’ emerges as the main ingredient in this elaboration when using actor-network theory to explain how actors in a given topology can affect other actors while at the same time being affected by them.
The explanatory power of the actor-network methodology relies on the fact that in the same topology both human and non-human actors (elements, structures, processes, etc.) are treated as equally able to affect and influence each other. The effect is carried via the links between the various actors, each attempting to inscribe its attributes and properties into other actors with congruent properties and attributes (see: Translation).
So, is the Internet open-source?
Or, a more appropriate question would be: is it possible to produce an open communication medium such as the Internet without the open-source software?
Basing this argument on actor-network theory and the openness factor: had the software used to build the Internet been closed-source software hidden from outside scrutiny, the resulting product, the Internet (whether we see the Internet as a mass medium, a publishing phenomenon, a set of communication tools, etc.), would not have been as open as we see it today. Why?
To use the actor-network language and the openness factor, closed-source software is almost totally closed in both aspects: its content and its communication. With closed content (i.e., the code) it is much harder to build compatible and interoperable software tools, and much harder to get people to use them. Modification of closed-source software is limited to a very small group of people whose agenda is driven by the bottom line: profit. This suggests that the not-so-open content, and the not-so-open communication about that content, is indeed a stagnating force in the exchange of ideas, thoughts and opinions, and in innovation in general.
The open content and open communication concepts (with their attributes and properties) are indeed positively responsible for the openness of the Internet. Whether open-source software is directly responsible for the openness of the Internet, or whether both the open-source software and the Internet’s openness are results of the open-source philosophy, is not very important.
In any case, the open content and open communication concepts have inscribed their properties and attributes onto the openness of the Internet (to varying degrees, depending on the various forms and flavors in which the Internet is used) and also onto open-source software.
Kling’s article addresses interesting issues in relation to how computing has affected social structures, both institutional (corporate and non-corporate) and public, and also how the underlying social structures have influenced computing. The article ought to be read in light of the fact that it was published in 1980 and that it is a meta-analysis: it examines various studies and research that analyzed computing and computers from 1950 to 1979. Besides, we need to be mindful that the notion of computing and computers prior to 1980 was somewhat different from the way we perceive it today. Considering that there were about 200,000 computers in use in the US (Kling, p. 63) and a rough estimate of 200 million people living in the country, that gives us roughly one computer per thousand people.
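That back-of-the-envelope ratio is easy to check with a few lines of code (both figures are rough estimates, as noted above):

```python
computers = 200_000         # computers in use in the US ca. 1980 (Kling, p. 63)
population = 200_000_000    # rough estimate of the US population at the time

# Computers per thousand people
per_thousand = computers / (population / 1_000)
print(per_thousand)  # → 1.0
```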
In addition, the pervasiveness of computing technology pre-1980 was very low compared to today. At that time, computers were mostly expensive central mainframes used by corporations, institutions and government agencies, accessible only via terminals and used strictly for business. The concept of the personal computer as we know it today was only an idea for the future. So the actual ‘use’ of computers was perhaps a few magnitudes lower than even the one-computer-per-thousand-people figure would suggest. Many users were only secondary users of computer functions/services, usually via an intermediary, such as police officers in the field checking police records via dispatchers during their work hours. Further, computer technology pre-1980 was primarily used as a data-processing aid for cranking out reports, statistical analyses, and efficient and accurate reporting. This mechanical viewpoint of computers reinforces the idea that computers are like any other resource at a manager’s disposal, to be used for the goals of institutions and corporations, whether for innovation, work, life, decision making or organizational power.