Recently in Open Content, Comm. Category

What if they gagged Gutenberg? Big telecom is trying to throttle free access to democratic Internet

Excerpts:
Five-hundred years ago, we had Johann Gutenberg, a German metalworker and inventor who pioneered the precursor to the Internet. His printing press became the first practical mass communications medium utilizing what was then an advanced memory technology -- paper.

Soon after, there was Martin Luther, a German theologian and priest who fervently believed the church had departed from the teachings of the Bible. In 1517, Luther began printing pamphlets condemning the church, and within several months his 95 Theses was being read all over Europe.

...

Imagine if the leaders of 16th century Germany, feeling threatened by the democratizing forces of the printing press, had taken Gutenberg's invention and limited its use to those they politically agreed with -- or if Luther had to pay licensing fees for nailing up his 95 Theses on every church door in Germany.

That's what big telecom is trying to do: shut the democratic architecture of the Internet. By creating two "tiers" -- one that is fast and charges fees to Web site owners -- and a second class Web that is cheaper and slower and could limit access to independently run sites -- big telecom is hoping to make a larger profit off the Internet.

In other words, opponents to the Internet's open and free access are trying to change the rules -- and they're trying to mislead you, claiming that they're against regulation and that they only want you to pay for the rising cost of their "pipes." That's information warfare.

Open Content Alliance Rises to the Challenge of Google Print

| Permalink

Open Content Alliance Rises to the Challenge of Google Print

Excerpt:

October 3, 2005 — What a great idea! Why didn’t we think of that? Google Print’s ambitious effort to digitize the world’s book literature has inspired others to initiate their own effort. And, with the Google Print program caught in the snag of a copyright lawsuit, the sight of a relay race handoff keeps hope burning for a brighter digital future. The just announced Open Content Alliance (OCA; http://www.opencontentalliance.org) creates an international network of academics, libraries, publishers, technological firms, and a major search engine competitor to Google—all working on a new mass book digitization initiative. The goal of the effort is to establish a flexible, open infrastructure for bringing large collections of digitized material into the open Web. Permanently archived digital content, which is selected for its value by librarians, should offer a new model for collaborative library collection building, according to one OCA member. While openness will characterize content in the program, the OCA will also adhere to protection of the rights of copyright holders.

OCA founding members include the Internet Archive; Yahoo! Search; Hewlett-Packard Labs; Adobe Systems; the University of California; the University of Toronto; the European Archive; the National Archives (U.K.); O’Reilly Media, Inc.; and Prelinger Archives. The Internet Archive (http://www.archive.org), which is led by Brewster Kahle, will provide hosting and administrative services for a single, permanent repository. Technological and some financial support will come from Adobe and Hewlett-Packard. Yahoo! Search will supply initial search engine access as well as technological support and some funding.

Yahoo launches Creative Commons search

| Permalink

From Yahoo launches Creative Commons search:

Excerpt:
The Yahoo Search for Creative Commons makes it easier to locate Web content with a Creative Commons license. Creative Commons is a nonprofit organization that offers flexible copyrights for creative works. The group builds upon the traditional "all rights reserved" form of copyright to create a voluntary "some rights reserved" copyright, according to Creative Commons. Tools from Creative Commons are free and the organization offers its own search engine.

From Preparing tomorrow's professionals: LIS schools and scholarly communication:

How are LIS schools preparing tomorrow's academic librarians to deal with the emerging changes in scholarly communication? What more can they do? In this brief overview, we will look first at specialized courses dealing with various aspects of scholarly communication that have been added to the curriculum in many schools. The next section will look at how existing courses have been modified to include scholarly communication. Finally, we will explore the benefits of field experience, graduate assistantships and participation in institutional projects.

The authors present some interesting insights into the types of curricula currently offered across US LIS schools.

In conclusion, I think there should be a stronger emphasis on the role and implications of digital libraries (DLs) and open access (open content, open communication) in scholarly communication. Understanding DLs as both social and technological constructs is important because most scholarly communication is mediated through some flavor of DL. Knowledge about open access (and open content, open communication) is critical because, as an actor in the web of scholarly communication, the concept of openness as it relates to content and access seems to be influencing and shifting the research focus of many disciplines.

Internet Archive to build alternative to Google

| Permalink

From Internet Archive to build alternative to Google:

Excerpts:
Ten major international libraries have agreed to combine their digitised book collections into a free text-based archive hosted online by the not-for-profit Internet Archive. All content digitised and held in the text archive will be freely available to online users.

Two major US libraries have agreed to join the scheme: Carnegie Mellon University library and The Library of Congress have committed their Million Book Project and American Memory Projects, respectively, to the text archive. The projects both provide access to digitised collections.

The Canadian universities of Toronto, Ottawa and McMaster have agreed to add their collections, as have China's Zhejiang University, the Indian Institute of Science, the European Archives and Bibliotheca Alexandrina in Egypt.

The Role of RSS in Science Publishing

| Permalink

December's issue of D-Lib Magazine brings an interesting article regarding the implications of RSS in science and research publishing. The Role of RSS in Science Publishing is worth reading. It is yet another practical example of how blogs have brought forth a tool that can change the nature of the web as it is traditionally known. Websites are no longer static domains; RSS helps sites be distributed widely and, most importantly, function as two-way communication.

SCIENTISTS, CONSIDER WHERE YOU PUBLISH

| Permalink | 1 Comment

SCIENTISTS, CONSIDER WHERE YOU PUBLISH posits challenging issues every author of research papers should start thinking about. It is no longer simple to assume that the most prestigious journals are the best venue for publishing your research. So what if you have published in a prestigious peer-reviewed journal and not many people can read what you have written because of its subscription cost? How long can this continue? Could this provide some incentive for scholars to publish in open access journals? What then? It is quite possible that articles published in open access journals might be able to shift the focus of a discipline or a field of study because of their wider availability and accessibility.

Excerpt from the above mentioned article:
For scientists, publishing a paper in a respected peer-reviewed journal marks the culmination of successful research. But some of the most prestigious and sought-after journals are so costly to access that a growing number of academic libraries can't afford to subscribe. Before submitting your next manuscript, consider a journal's access policy alongside its prestige - and weigh the implications of publishing in such costly periodicals. Two distinct problems continue to plague scientific publishing. First, institutional journal subscription costs are skyrocketing so fast that they outstrip the ability of many libraries to pay, threatening to sever scientists from the literature. Second, the taxpaying public funds a terrific amount of research in this country, and with few exceptions, can't access any of it. These problems share a common root - paid access to the scientific literature.

About the Potential of E-democracy

| Permalink

Very interesting thoughts and ideas. Certainly, technology has been a great source of change in the past; maybe the technologies of today that embody the concept of openness could initiate another social, economic, and political change across the globe.

About the Potential of E-democracy

Abstract
This paper develops a reflection on the potential of E-democracy to strengthen society's democratization exploring historically and technically the possibilities of cooperative organizations. From Singer's historical view about the rise of capitalism it is conjectured that Internet and E-democracy could be the technological innovations capable to trigger off the creation of a virtual network of cooperative organizations and thereby the development of a new economic system, based more on humanitarian values than the present ones.

presenting at ASIS&T 2004

| Permalink

Whoever is reading this, just to let you know that I will be presenting at the ASIS&T 2004 Annual Meeting, "Managing and Enhancing Information: Cultures and Conflicts" (ASIS&T AM 04), in Providence, RI, on November 16, 2004, at 5:30-7:00 p.m.

As a part of a panel titled Diffusion of Knowledge in the Field of Digital Library Development: How is the Field Shaped by Visionaries, Engineers, and Pragmatists?, I’ll be “theorizing on the implication of open source software in the development of digital libraries”.

Will you be there?

Panel Abstract:
“Digital library development is a field moving from diversity and experimentation to isomorphism and homogenization. As yet characterized by a high degree of uncertainty and new entrants in the field, who serve as sources of innovation and variation, they are seeking to overcome the liability of newness by imitating established practices. The intention of this panel is to use this general framework, to comment on the channels for diffusion of knowledge, especially technology, in the area of digital library development. It will examine how different communities of practice are involved in shaping the process and networks for diffusion of knowledge within and among these communities, and aspects of digital library development in an emerging area of institutional operation in the existing library institutions and the specialty of digital librarianship. Within a general framework of the sociology of culture, the panelists will focus on the following broader issues including the engagement of scholarly networks and the cultures of computer science and library and information science fields in the development process and innovation in the field; involvement of the marketplace; institutional resistance and change; the emerging standards and standards work; the channels of transmission from theory to application; and, what 'commons' exist for the practitioners and those engaged with the theoretical and technology development field. The panelists will reflect on these processes through an empirical study of the diffusion of knowledge, theorizing on the implication of open source software in the development of digital libraries, and the standardization of institutional processes through the effect of metadata and Open Archive Initiative adoption.

The panel is sponsored by SIG/HFIS and SIG/DL”

BBC launches open-source video technology

| Permalink

From BBC launches open-source video technology:

The corporation has gone to great lengths to avoid any patent problems, and has used tried and tested techniques that have prior art. "We are reviewing the literature and will code round the problems as they arise."

To protect the software and the techniques used to develop it, the BBC has taken out its own defensive patents, said Davies, and is releasing the software under the Mozilla licence to ensure "that those patents are licensed for free, irrevocably, for ever."

The terms of the licence mean that Dirac could be used in open source software, said Davies, or in proprietary software in such a way that the company producing that software would not have to divulge their source code.

This is great news! Needless to say, this means fewer restrictions for innovation and development of new ideas and tools. The resulting ripple effect could encourage more open video communication because independent video producers will not have to carry the cost burden of their tools.

Open Source and Open Standards

| Permalink

Open Source and Open Standards provides a brief 'compare and contrast' between open source and open standards, and the pros and cons associated with each concept and practical implementations.

Genome Model Applied to Software

| Permalink

Genome Model Applied to Software:

Open-source developers attempting to reverse-engineer the mysteries of private networking software turn to genomics research. They're applying algorithms developed by biologists to decipher the secrets of closed networks.
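
The article gives no code, but the workhorse technique biologists lend to this kind of reverse engineering is sequence alignment. The sketch below is a toy Needleman-Wunsch global alignment in Python (my own illustration, not anything from the project itself), applied to two made-up protocol messages instead of DNA, to show how aligning captured traffic makes the constant fields (likely keywords and delimiters) stand out from the variable ones.

def align(a: bytes, b: bytes, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment of two byte strings."""
    n, m = len(a), len(b)
    # Score matrix, initialised for leading gaps.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    # Fill: best of a match/mismatch on the diagonal or a gap in either sequence.
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Trace back to recover the aligned pair, using '-' as the gap symbol.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append(ord("-")); i -= 1
        else:
            out_a.append(ord("-")); out_b.append(b[j - 1]); j -= 1
    return bytes(reversed(out_a)), bytes(reversed(out_b))

# Two hypothetical messages captured from the same (unknown) protocol.
m1, m2 = b"USER alice\r\n", b"USER bob\r\n"
a1, a2 = align(m1, m2)
print(a1)  # columns that stay constant across many samples hint at protocol keywords
print(a2)

Run over many captured samples rather than just two, the same idea yields a rough template of the message format without ever seeing the vendor's source code.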

Do Open Access Articles Have a Greater Research Impact?

| Permalink

This paper (Do Open Access Articles Have a Greater Research Impact?) reports its findings that "freely available articles do have a greater research impact. Shedding light on this category of open access reveals that scholars in diverse disciplines are both adopting open access practices and being rewarded for it."

The findings of this paper confirm what seems an obvious argument: the more openly accessible articles are, the more they will be used, and thus the greater their impact on research and practice.

An additional question that needs to be addressed in this context is the overall impact of articles published in open access journals. It is quite possible that articles published in open access journals might be able to shift the focus of a discipline or a field of study because of their wider availability and accessibility.

Justice is served! Court: Grokster, StreamCast Not Liable

| Permalink

From Court: Grokster, StreamCast Not Liable:

"SAN FRANCISCO - Grokster Ltd. and StreamCast Networks Inc. are not legally responsible for the swapping of copyright content through their file-sharing software, a federal appeals court ruled Thursday in a blow to movie studios and record labels.
...
The panel noted that the software companies simply provided software for individual users to share information over the Internet, regardless of whether that shared information was copyrighted.
...
"The technology has numerous other uses, significantly reducing the distribution costs of public domain and permissively shared art and speech, as well as reducing the centralized control of that distribution," Thomas wrote"

Finally, justice is served!

Who benefits from the digital divide? is a very informative article regarding the digital divide discourse. One would think that such discourse arises with the aim of helping the people on the have-not side of the digital divide by closing the gap. In this article for First Monday, Brendan Luyt shows that the people on the disadvantaged side of the digital divide are surely NOT the ones benefiting from the discourse.

"In this article I have described four groups that have an interest in the promotion of the digital divide issue. Information capital achieves a new market for its products as well as an educated workforce capable of producing those products in the first place. The state in the South benefits through the legitimation conferred through programs designed to combat the divide. Not only do these offer new accumulation opportunities for its elite, they also hold the possibility of defusing discontent over poor economic prospects for the middle class, a volatile section of the population. The development industry, suffering from a neo–liberal attack that views development as irrelevant in the modern world, also benefits from the digital divide. Another gap has been opened up that requires the expertise these agencies believe they can provide. And finally, the organs of civil society are also winners, as they attempt to capture information and communication technologies for their own increasingly successful projects."

Paradoxically, the digital divide discourse does not appear to be helping those it is supposed to help.

In The 'digital divide' and the rest of the population & the digital divide: more than a technological issue I have tried to show that the digital divide discourse might even further increase the existing digital divide gap.

Culture of secrecy hinders Africa's information society covers a few interesting ways mobile telephone technology is being used in Africa. It is evident in the article that the use of mobile technology is being redefined and continually socially constructed by the social and monetary resources available.

Among the other interesting paragraphs, this one is really revealing:

"The worst thing is that it is a short step from a culture of withholding information to that of becoming information-blind. In other words, when we keep on withholding information, we end up being unable to produce information. We lose the culture of surveying, assessing, classifying – in brief, collecting as much information as possible and storing it in a standardized manner, making it available for use, not only to cater for current specific needs, but also for potential and future ones."

Along the lines of this article's argument, it can also be explained why text messaging in the US lags behind Europe and Asia. Most cell/mobile phone service plans in the US include a certain amount of 'free' minutes, so if you have free minutes, you use them before sending any text messages; the handsets on the US market are also less text-messaging friendly. In contrast, in Europe you pay for each minute you talk, so you use text messaging because it is cheaper than talking; hence the social co-construction of the mobile telephony service, the technology, and its use.

States Warn File-Sharing Networks quotes attorneys general of 40 US states as saying:

"In a letter to the heads of Kazaa, Grokster, BearShare, Blubster, eDonkey2000, LimeWire and Streamcast Networks, the attorneys general write that peer-to-peer (P2P) software "has too many times been hijacked by those who use it for illegal purposes to which the vast majority of our consumers do not wish to be exposed.""

There is no doubt that P2P networks are sometimes used for the distribution of copyrighted material. However, the argument that they should be shut down because they can also be used to distribute copyrighted material stands on shaky ground.

Here are some issues with the argument:
- Why stop with P2P networks and P2P software? How about the Internet as the enabler of P2P activities?
- P2P networks are also used by independent artists and other activists to distribute various materials without any copyright infringement
- Nobody seems to have a problem with physical CDs, video tapes, DVDs and other carrier technologies (including roads and highways) as enablers to carry content (copyrighted or otherwise) from point A to point B.

So, the issue of how to deal with the distribution of copyrighted materials should be looked at from a different perspective. I think it is more of a social issue than a technological one. P2P technology is an innovative way to distribute content, and it will be very sad if it is destroyed because some people decide to use it in a manner contrary to the pertinent laws.

finding open source code

| Permalink

From IST Results - Swift searching for open source:

Excerpt:
Finding the open source code you need can often seem like searching for a needle in a haystack. But with the development of the AMOS search engine finding your way through today’s maze of software code has just become considerably easier.
Aimed at programmers and system integrators but with the potential to be used by a broader public, the AMOS system applies a simple ontology and a dictionary of potential search terms to find software code, packages of code and code artefacts rapidly and efficiently. In turn it assists open source program development through making the building blocks of applications easier to find and re-use.

introducing the Common Information Environment

| Permalink

From Towards the Digital Aquifer: introducing the Common Information Environment:

Excerpts:
Google [1] is great. Personally, I use it every day, and it is undeniably extremely good at finding stuff in the largely unstructured chaos that is the public Web. However, like most tools, Google cannot do everything. Faced with a focussed request to retrieve richly structured information such as that to be found in the databases of our Memory Institutions [2], hospitals, schools, colleges or universities, Google and others among the current generation of Internet search engines struggle. What little information they manage to retrieve from these repositories is buried among thousands or millions of hits from sources with widely varying degrees of accuracy, authority, relevance and appropriateness.
...
This is the problem area in which many organisations find themselves, and there is a growing recognition that the problems are bigger than any one organisation or sector, and that the best solutions will be collaborative and cross-cutting; that they will be common and shared. The Common Information Environment (CIE) [3] is the umbrella under which a growing number of organisations are working towards a shared understanding and shared solutions.

socio-political and economical twist to open source

| Permalink

Personal view: Open source may be next business revolution reviews the new book "The Success of Open Source" by Steven Weber, a professor of political science at the University of California at Berkeley.

I have not read this book yet, but judging from this article it seems like interesting reading. Here are some excerpts:

"His claim, and it's a bold one, is that this isn't just a good way of developing software, it's a new way of organising businesses. Open-source software breaks the links between developing a product and owning a product, which is the way business has traditionally organised itself. That could have startling consequences.
It's rare to find a professor of politics discussing software. "People in academic subjects are very conservative about their disciplines," Weber says. "So people are intrigued, but also a little bit nervous about an approach like this."

"Think back to the invention of the steam engine. By the standards of the time, building a railway was so complicated and so costly that none of the existing organisational forms could handle it. So the joint-stock company and the stock exchange rose to prominence. Something similar may be happening now."

accessing the "collective intelligence"

| Permalink

Commenting on George Por's article, Steven Cohen discusses the value of blogging and other tools supporting collaboration in building a collective intelligence.

While we have many blogging and other social software tools that enable the 'creation' of the collective, how do we harness the "collective intelligence" once it is 'there', once it is 'built'? It would seem that other tools are needed to enable quick and relevant utilization of the collective intelligence. So far, the blogging tools have done a great job of enabling the representation of the collective intelligence. What they lack is the function of enabling the utilization of the available collective knowledge.

It seems that the next wave of social network and collaboration tools will, or should, concentrate more on the function of finding relevant and appropriate 'intelligence' somewhere in the collective pool. Needless to say, search engines are not best suited for this type of activity, since they concentrate primarily on topical relevance and do little to nothing about spatial, temporal, methodological, contextual, process, and task-specific relevance.

Alan Kay's food for thought regarding personal computing

| Permalink

Alan Kay's food for thought regarding personal computing, as reported in A PC Pioneer Decries the State of Computing:

But I was struck most by how much he thinks we haven't yet done. "We're running on fumes technologically today," he says. "The sad truth is that 20 years or so of commercialization have almost completely missed the point of what personal computing is about."

But what about all those great things he invented? Aren't we getting any mileage from all that? Not nearly enough, Kay believes. For him, computers should be tools for creativity and learning, and they are falling short. At Xerox PARC the aim of much of Kay's research was to develop systems to aid in education. But business, instead, has been the primary user of personal computers since their invention. And business, he says, "is basically not interested in creative uses for computers."

Note the emphasis that computers could and should have been used more for creative processes and learning. The potential is there; however, the social construction of computing technologies has mostly been led by commercial goals. Thus, the interplay of computing technology and social structures has mostly served commercial interests, and less so the potential for creativity, invention, and innovation.

The question then arises: how do we get to more creative use of technology for learning and novel forms of innovation? Open source computing, perhaps, where computing tools geared more towards learning act as stimuli for creative innovation. But then anything creative that can make money is imprisoned within the commercial realm and loses its potential for learning and creativity. A way needs to be found for creativity to bloom within its own realm, free from commercialization. Proprietary software, being a closed environment, is responsible for slowing down innovation and creativity. I would say: the way forward is open computing …

open access a danger to professional societies?

| Permalink | 1 Comment

This is a follow-up to my previous entry (A shift in scholarly attention? From commercial publishing to open access publishing) prompted by Open Access? Some Sparks Fly at ALA. (thanks to Open Access News).

In the article, IEEE's Durniak makes the following unsubstantiated statement: "Free open access runs the risk of destroying professional societies."

One can do an extensive analysis to show that the above statement is not necessarily true. However, it suffices to note that commercial publishers are only one of the actors in the scholarly publishing cycle. As such, the totality of the functions performed by the commercial publishers can definitely be taken over by the professional societies themselves, or perhaps by a non-profit umbrella organization that would deal with scholarly publishing for various professional societies.

It is really unprecedented, and uncalled for, that commercial publishers claim that without them the entire scholarly publication process would fail and that professional societies would be destroyed. It is indeed true that commercial publishers provide value-added services. However, none of these value-added services are beyond the competency of the professional societies themselves, especially with all the open source software available. Even if it means that the professional societies would have to hire IT staff to maintain the process, it would certainly cost less than what host institutions pay to buy back the intellectual output of their own staff.

Sooner or later, the commercial publishers will have to relax a bit and see how they can honestly contribute to the process of moving to open access. Their stakeholders might not be happy, but, hey, the dynamic is changing and the power base is shifting.

Can the argument for why the publishing of scholarly work should not be in the hands of commercial entities get any clearer than this? From A Quiet Revolt Puts Costly Journals on Web:

"Elsevier doesn't write a single article," said Dr. Lawrence H. Pitts, a neurosurgeon at the University of California at San Francisco and chairman of the faculty senate of the 10-campus system. "Faculty write the articles for them, faculty review the articles for them and faculty mostly edit the journals for them, and then we get to buy the journals back from a company that makes a very large profit."

It appears that the players in the process of scholarly publishing (scholars, editors, publishers, etc.) are well aware that the current, commercial publishing process will not be sustainable for long. Fueled by the openness of the Internet, scholars and academics have the necessary technology and expertise to publish without the involvement of commercial entities. The money that today is taken as profit by commercial entities could instead be used for further research and academic pursuits.

In the process of the inevitable move from commercial publishing to open access, the entire dynamic of the publishing process will undoubtedly change. But change is not bad. A lot of realignments will occur. The moment established scholars start publishing in open access publications, the tide will turn.

Or, if there is resistance, the problems addressed by a certain field or discipline might shift towards those addressed in the open access journals, because of their wider distribution and open access. It would appear, then, that the move towards open access publishing might even realign the types of problems addressed by a certain scholarly community.

An important analysis in this respect is presented by Kling et al. It suggests that the medium of information transfer and exchange (paper vs. electronic) might induce a shift in the scholarly discourse of a particular discipline. They argue that the highest-status scientists usually publish in well-established journals that, at the same time, usually define the scope and the problems of the field (Kling et al., p. 10). Scientists and scholars whose status is just below that of the highest-status scholars are then likely to publish in an e-journal (usually open access) because of its speed of distribution and the visibility afforded by a very large readership (Kling et al., p. 10). If enough second-tier scientists start publishing in e-journals, sooner or later the interests and problems treated in those e-journals for a particular discipline might shift away from the problems treated in the paper journals, owing to the speed of distribution, while gaining legitimacy and a perception of good quality. This would also mean that the medium is the message (in McLuhan’s sense), where the medium appears to shift the scholarly discourse of a field or discipline.

Kling, R. and Covi, L. M. (1995). Electronic Journals and Legitimate Media in the Systems of Scholarly Communication. The Information Society, 11(4), 261-271. (Accessed at: http://www.slis.indiana.edu/TIS/articles/klingej2.html)

E-voting: Nightmare or actual democracy?

| Permalink

The public discourse surrounding e-voting is very perplexing. Like other articles, E-voting: Nightmare or nirvana? questions the security of e-voting systems and their viability for use in real elections.

"Once the province of a small group of election officials and equipment sellers, e-voting has exploded into the popular consciousness because of a spreading controversy over security and verifiability. Thanks to a concerted effort by opponents and to the missteps of voting machine vendor Diebold Election Systems, most of the news has been bad."

I have said this before in a previous entry (secure enough for consumerism, not good enough for voting?!) and here it is again: How is it that we can't trust e-voting security because voting would be done over the Internet, when the same Internet is used for millions of dollars in daily transactions between consumers and companies and business-to-business? The same Internet is secure enough for commerce and can be trusted with billions of dollars. Yet, it is not secure enough for voting?

Second, the missteps by Diebold Election Systems, which produces e-voting machines, are curable by the use of open source e-voting systems that are already in use elsewhere around the world.

Yes, there are potential problems with e-voting systems. These are the same issues that trouble all new technologies in the appropriation phase by the users. However, to claim that these issues are worse than those that troubled and still trouble e-commerce systems is absurd.

From Open access jeopardizes academic publishers, Reed chief warns:

"The rise of open access publishing of scientific research could jeopardise the entire academic publishing industry, according to the chief executive of Reed Elsevier, the world's largest publisher of scientific journals."

Something will be jeopardized for certain, but it isn't the academic publishing, it is the commercial publishing. As many open access journals and publishing venues have shown, academic publishing does not have to be commercial publishing.

Bo-Christer Björk: Open access to scientific publications - an analysis of the barriers to change?:

Abstract:
"One of the effects of the Internet is that the dissemination of scientific publications in a few years has migrated to electronic formats. The basic business practices between libraries and publishers for selling and buying the content, however, have not changed much. In protest against the high subscription prices of mainstream publishers, scientists have started Open Access (OA) journals and e-print repositories, which distribute scientific information freely. Despite widespread agreement among academics that OA would be the optimal distribution mode for publicly financed research results, such channels still constitute only a marginal phenomenon in the global scholarly communication system. This paper discusses, in view of the experiences of the last ten years, the many barriers hindering a rapid proliferation of Open Access. The discussion is structured according to the main OA channels; peer-reviewed journals for primary publishing, subject-specific and institutional repositories for secondary parallel publishing. It also discusses the types of barriers, which can be classified as consisting of the legal framework, the information technology infrastructure, business models, indexing services and standards, the academic reward system, marketing, and critical mass."

Open Source as competitive Weapon

| Permalink

Note how in the passage below (from Open Source as Weapon) the argument is made that the competition will soon move away from the actual code (everyone will have access to the same software code) and into its usage and integration in a particular context.

Excerpt:
"Experts tick off compelling reasons why a vendor of closed-source software might release code: to make the product more ubiquitous, speed development, get fresh ideas from outside the company, to complement a core revenue stream, foster a new technology -- and to stymie a competitor.

In fact, giving away some free company IP can go a long way toward making someone else's IP worth beans.

Martin Fink, author of "The Business and Economics of Linux and Open Source," notes that, while all commercial software decreases in value over time, open source drastically speeds the process. The huge community of developers working together can produce a competitive open source product fast, and they'll add features for which a closed-source vendor would want to charge extra.

Finally, customers can acquire the software at no cost, even though they may pay for customization, integration and support."

BBC to Open Content Floodgates

| Permalink

BBC to Open Content Floodgates:

Excerpt:
"The British Broadcasting Corporation's Creative Archive, one of the most ambitious free digital content projects to date, is set to launch this fall with thousands of three-minute clips of nature programming. The effort could goad other organizations to share their professionally produced content with Web users.

The project, announced last year, will make thousands of audio and video clips available to the public for noncommercial viewing, sharing and editing. It will debut with natural-history programming, including clips that focus on plants, animals and birds."

SEMANTIC WEB DRAWS ON THE POWER OF FRIENDS

| Permalink

(via ShelfLife, No. 160 (June 10 2004))
SEMANTIC WEB DRAWS ON THE POWER OF FRIENDS
"Do a little digging into the status of the Semantic Web, and you'd likely come away befuddled and unenlightened, convinced this was a job for techno-geeks, not actual human beings. But in point of fact, the burgeoning number of Weblogs already form a vast source of richly interconnected information that requires little or no knowledge of the Semantic Web in order to be useful. The new Friend Of A Friend (FOAF) project is taking the idea of Weblog communities one step further by explicitly defining them in a way that is more easily machine processible. One of the aims of the FOAF project is to improve the chances of happy accidents by describing the connections between people (and the things that they care about such as documents and places). The idea is to use FOAF to describe the sorts of things you would put on your homepage -- your friends, your interests, your picture -- in a structured fashion that machines find easy to process. What you get from this is a network of people instead of a network of Web pages. When people need to know something that is outside their area of expertise, these personal contacts serve as a way of linking them to the best information available. (FreePint 27 May 2004) http://www.freepint.com/issues/270504.htm#feature"

my comments on Thijs' Predictions

| Permalink

In Prediction Thijs van der Vossen has stated some ideas about how things will be in the future in terms of information and knowledge sharing.

While I agree that what Thijs writes is the desired outcome if we are moving towards a more open world, that outcome is not guaranteed. Yes, information needs to be free so it can be accessed from everywhere, by everyone, through many different devices and access methods. However, the assumption is that corporate entities will be willing to let go of the grip they have on any information that looks profitable.

So, one of the fundamental assumptions is that all sources of information and knowledge artifacts really want to share their content. In the open source Internet as a possible antidote to corporate media hegemony I have argued that openness (open content and open communication), as a fundamental property of the Internet as we know it today, is perhaps the reason why Thijs' predictions look very probable. Hopefully no authoritative entity will put restrictions around what can be said and done online.

Papers on the Information (Commons) Society

| Permalink

Openness, Publication, and Scholarship

| Permalink

Openness, Publication, and Scholarship is an interesting philosophical perspective attempting to frame publications and scholarship within the various concepts of openness such as "open access", "open data", "open source", "open entry", and "open discourse".

To this I would like to broaden "open data" to "open content", since content has a broader scope than data, and perhaps add "open communication" as the functional link between "open access" and "open discourse".

Well, at least many research institutions are realizing that the commercial publishers might not be the solution for the future of scholarly communication.

An excerpt from Fat Cat Publishers Breaking the System:

"Out-of-control costs for scholarly publications have fueled new digital repository initiatives

The scholarly publishing system is broken. At research universities everywhere, scholarly work—in the form of articles, books, editing, reviewing of manuscripts—is handed over to commercial publishers, only to be bought back by the libraries at huge cost. Libraries scramble to judiciously stretch shrinking budgets for growing runs of books and journals—books and journals that are critical to the research and teaching activities of the university’s faculty who, as authors and editors, contribute so generously to the publishers who sell them. The arrangement is bankrupting research library budgets and swelling the profit margins of commercial publishers.

Sadly, commercial publishing threatens the very system it exists to support. When expensive commercially published materials cannot be bought, when university presses cannot afford to publish monographs for junior faculty, everyone suffers. Students and scientists cannot gain access to badly needed materials; scholars cannot get tenure for lack of that first published monograph. The modern university, modeled on the ideal of the Greek temple where thinkers and learners pursued knowledge so that society could reap its benefits, is losing ground to crass commercialism. At risk is the very culture of the academy."

From File-sharing to bypass censorship:

"By the year 2010, file-sharers could be swapping news rather than music, eliminating censorship of any kind."
...
"Currently, only news that's reckoned to be of interest to Americans and Western Europeans will be syndicated because that's where the money is," he told the BBC World Service programme, Go Digital.
"But if something happens in Peru that's of interest to viewers in China and Japan, it won't get anything like the priority for syndication.

Well, I hope it does not come to this because of political decisions. Then again, media corporations care only about their bottom line, so does it matter whether censorship stems from political decisions or from the media's profit-making strategies? In any case, the open content and open communication enabled by the Internet seem to be our guard (to a certain degree) against censorship.

secure enough for consumerism, not good enough for voting?!

| Permalink

In the past year or so we have seen various attempts at online voting, only to see them scrapped because they are not secure enough. Pentagon Drops Plan To Test Internet Voting is the latest report on such an initiative, stating that "The Pentagon has decided to drop a $22 million pilot plan to test Internet voting for 100,000 American military personnel and civilians living overseas after lingering security concerns, officials said yesterday."

How is it that we can't trust security because voting would be done over the Internet, when the same Internet is used for millions of dollars in daily transactions between consumers and companies and business-to-business? The same Internet is secure enough for commerce and can be trusted with billions of dollars. Yet, it is not secure enough for voting?

Something is wrong … perhaps the following explains it (from the same article): "The American pullback is in direct contrast to Europe, where governments are pursuing online voting in an attempt to increase participation. The United Kingdom, France, Sweden, Switzerland, Spain, Italy, the Netherlands and Belgium have been testing Internet ballots."

Ref: Media Control: Open communication technologies as actors enabling a shift in the status quo

google's personalized 'jewel'

| Permalink

Google does it again. As with many of the practical implementations in the search world, Google is first again: first to implement it in the real world, though not necessarily first in research. As far as research is concerned, personalized search has been discussed plenty.

This new personalized web search by Google utilizes facet-aided searching.

The entire search is dynamic. Once you set up the profile, which is very simple and menu/directory driven, the left side shows the built query. You can still type a search term. The FAQ shows a bit of how things are supposed to work.

In any case, the search is operational (beta), and once the relevant docs are returned, there is a small sliding bar that can be moved left or right to dynamically relax or restrict the personalization.

Interesting stuff! Just when you think you have learned how Google works! :)

Now all the other search engines will try to do the same. Why don't they start something before Google does, for a change?! What are they afraid of?

(thanks to unstruct.org for the link)

US societies back expanded free access to research

| Permalink

From US societies back expanded free access to research, courtesy of scidev.net:

Excerpt:
"A substantial number of the United States' leading medical and scientific societies have declared their support for free access to research under certain circumstances — including access by scientists working in low-income countries.

In a statement released this week in Washington DC, 48 not-for-profit publishers, representing more than 600,000 scientists and clinicians and more than 380 journals, pledge their support for a number of forms of free access."

The push is on to shelve part of the Patriot Act

| Permalink

From The push is on to shelve part of the Patriot Act:

Excerpt:
"Discontent about Section 215 has been smoldering; 253 cities and towns across the country have passed nonbinding resolutions expressing opposition to it. It flamed up last month when the American Booksellers Association, the American Library Association, and the writers group PEN American Center announced a drive to collect a million signatures in support of several bills pending in Congress to amend the law. The campaign is supported by a who's-who of publishers, booksellers, and library organizations, including the Barnes & Noble and Borders bookstore chains, publishers Random House and Simon & Schuster, the American Association of Law Libraries, and the Authors Guild."

“The conditions associated with a particular class of conditions of existence produce habitus, systems of durable, transposable dispositions, structured structures predisposed to function as structuring structures, that is, as principles which generate and organize practices and representations that can be objectively adapted to their outcomes without presupposing a conscious aiming at ends or an express mastery of the operations necessary in order to attain them. Objectively ‘regulated’ and ‘regular’ without being in any way the product of obedience to rules, they can be collectively orchestrated without being the product of the organizing actions of a conductor” (Bourdieu, p. 53)

The above quote by Bourdieu, when viewed from the perspective of society as the ‘habitus’, is quite informative (in theory as well as in practice) about media’s interplay with the social structures within which they are embedded. As we have seen throughout our course readings, media technologies, as important instruments at various levels of communication processes in society, have encountered resistance from various cultural and social norms, and a somewhat mixed response from economic and political forces because of their profit-making potential or power-generating ability. More than any other type of technology, media and communication technologies have been the subject of public and scholarly debate because of their intrinsic ability to convey content (asynchronously) across time and space, inscribed in the form of data, information, images, knowledge, and wisdom, in media such as books, data tape drives, CD-ROMs, and video and audio tapes. Additionally, synchronous communication technologies (e.g. the telephone, audio and video conferencing, online chat) have enabled instantaneous communication among people, allowing efficient, but not necessarily effective, exchange of information, ideas, thoughts, and concepts.

Media technologies, pervasive, widespread, and often used for symbolic purposes, are also used by governing elites to maintain the status quo and ensure stability. The necessity to reproduce and maintain a stable state, the habitus (to borrow from Bourdieu, whose habitus concept is similar to the stable state produced and maintained by the hegemonic ideology), requires ways of disseminating the cultural and political material of the dominant ideology. Similarly to how Bourdieu describes the functioning of the habitus, Gitlin defines the status quo as hegemony, “a ruling class’s (or alliance’s) domination of subordinate classes and groups through the elaboration and penetration of ideology (ideas and assumptions) into their common sense and everyday practice,” and contends that it “is systematic (but not necessary or even usually deliberate) engineering of mass consent to established order” (Gitlin, 1980, p. 253). Further, elaborating on the aspect of hegemony and clarifying the composition of the elite (mostly government, the corporate establishment, and those institutions that produce cultural artifacts), Schiller (1996) explains their economic reason for cooperation: “The American economy is now hostage to a relatively small number of giant private companies, with interlocking connections, that set the national agenda. This power is particularly characteristic of the communication and information sector where the national cultural-media agenda is provided by a very small (and declining) number of integrated private combines. This development has deeply eroded free individual expression, a vital element of a democratic society” (Schiller, 1996, p. 44).

This paper will attempt to elaborate on the interplay between media and communication technologies and social structures and forces (social, cultural, economic, political), whether institutionalized or not, emphasizing that both the content and the channels of communication through which the content is distributed are important factors in the production, maintenance, and further reproduction of the artifacts of the dominant ideology. I will argue that the content being represented and recorded, when conveyed via open communication (such as the Internet), can show us the liberating potential of various media technologies. As such, communication technologies are situated as important actors in the process of displacing or shifting the status quo.

open content, open communication everywhere!

| Permalink

From Copyright Doesn't Cover This Site:

"To prove that open sourcing any and all information can help students swim instead of sink, the University of Maine's Still Water new media lab has produced the Pool, a collaborative online environment for creating and sharing images, music, videos, programming code and texts. "
...
"We are training revolutionaries -- not by indoctrinating them with dogma but by exposing them to a process in which sharing culture rather than hoarding it is the norm," said Joline Blais, a professor of new media at the University of Maine and Still Water co-director.
...
"It's all about imagining a society where sharing is productive rather than destructive, where cooperation becomes more powerful than competition," Blais said.

Who Owns The Facts?

| Permalink

Who Owns The Facts?

(courtesy of slashdot)
Quote:
"windowpain writes "With all of the furor over the Patriot Act a truly scary bill that expands the rights of corporations at the expense of individuals was quietly introduced into congress in October. In Feist v. Rural Tel. Serv. Co. the Supreme Court ruled that a mere collection of facts can't be copyrighted. But H.R. 3261, the Database and Collections of Information Misappropriation Act neatly sidesteps the copyright question and allows treble damages to be levied against anyone who uses information that's in a database that a corporation asserts it owns. This is an issue that crosses the political spectrum. Left-leaning organizations like the American Library Association oppose the bill and so do arch-conservatives like Phyllis Schlafly, who wrote an impassioned column exposing the bill for what it is the week after it was introduced."

Open source genetics needed to feed the world

| Permalink

From Open source genetics needed to feed the world:

"This week Australian genetics pioneer Richard Jefferson was recognised by Scientific American, the prestigious international science magazine, as one of the 50 global technology leaders of 2003."

"His latest inventions could unleash a new Green Revolution, giving farmers, researchers and agriculture businesses across the world access to the potential of modern genetics."

"And he’s calling on the global biotechnology community to adopt open access genetics – freeing up the tools of modern genetics and biology from the shackles of excessive patenting."

(my emphasis in bold)

quality open source research resources

| Permalink

The Free / Open Source Research Community presents quality open source research resources, and is one itself, having collected and organized these research articles.

A must-read site for those interested in the interplay of open source software as an actor in the complex network of this thing we call society.

Self-Archive unto others . . . - reviewed by Stevan Harnad

| Permalink

From Self-Archive unto others . . . - reviewed by Stevan Harnad:

Quotes:
"Yet barriers do deny access to research papers. Tolls – in the form of journal subscription and license fees – must be paid by researchers’ universities for access to the journals in which the research is published. Yet the authors would much prefer it if there were no tolls at all, so that all would-be readers could use their research, and thereby maximize its impact."
...
"If and when the subscribing universities are no longer paying to access the research output of other universities, they will easily be able to pay publishers the peer-review costs for their own research output out of only a third of their annual windfall toll-savings. That way, the essential costs get paid and the research is all openly accessible. And all it needs to make it happen is reciprocal self-archiving by universities, according to the Golden Rule: self-archive unto others as ye would have them self-archive unto you. Universities and research-funders should extend the “publish or perish” mandate to “publish with maximal access” (by self-archiving: http://www.ecs.soton.ac.uk/~harnad/Temp/che.htm)."

social impacts of high-speed internet

| Permalink

In Broadband net user numbers boom the BBC reports on the growing number of broadband (i.e. high-speed) Internet connections at home.

What does it mean? Well, according to the Pew Internet Project, there is an apparent and substantial difference in social behavior depending on whether you are connected from home via broadband or a plain dial-up connection. The summary of their findings as well as the full report can be found at The Broadband Difference. They have also published a follow-up report.

Open Archives Initiative (OAI) Online Tutorial

| Permalink

Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) Tutorial

Quote:
"The essence of the open archives approach is to enable access to Web-accessible material through interoperable repositories for metadata sharing, publishing and archiving. It arose out of the e-print community, where a growing need for a low-barrier interoperability solution to access across fairly heterogeneous repositories lead to the establishment of the Open Archives Initiative (OAI). The OAI develops and promotes a low-barrier interoperability framework and associated standards, originally to enhance access to e-print archives, but now taking into account access to other digital materials. As it says in the OAI mission statement "The Open Archives Initiative develops and promotes interoperability standards that aim to facilitate the efficient dissemination of content."

So WIPO, why did you scrap the Open Source meeting?

| Permalink

The Register asks rather the obvious question: So WIPO, why did you scrap the Open Source meeting?

"WIPO is an international organisation dedicated to promoting the use and protection of works of the human spirit. These works - intellectual property - are expanding the bounds of science and technology and enriching the world of the arts. Through its work, WIPO plays an important role in enhancing the quality and enjoyment of life and helps create real wealth for nations."

Good so far ... and then ...

"Given its background and mandate it is surprising that it scrapped its first meeting on "open and collaborative" projects such as "open source software." After all open source software does, indeed rely on intellectual property rights. It cannot exist without them. It is, therefore, bemusing that the US Director of International Relations for the US Patent and Trademark Office apparently opposed such a meeting, claiming that such a meeting would run against the mission of WIPO to promote intellectual property rights. At least one of the major US software companies, probably beginning with the letter "M", is reported to have lobbied against the holding of such a meeting."

No comments...

e-voting systems must be open source

| Permalink

Back in July, prompted by NewScientist.com's article E-voting system flaws 'risk election fraud', which reported that Diebold Election Systems' e-voting system contains flaws that 'risk election fraud', I said I would be more comfortable e-voting if such a system were open source, with the code open to public scrutiny.

Well, Wired.com reports (Aussies Do It Right: E-Voting, also discussed on Slashdot) that an Australian company has done just that for Australian elections:

"While critics in the United States grow more concerned each day about the insecurity of electronic voting machines, Australians designed a system two years ago that addressed and eased most of those concerns: They chose to make the software running their system completely open to public scrutiny."

"Although a private Australian company designed the system, it was based on specifications set by independent election officials, who posted the code on the Internet for all to see and evaluate. What's more, it was accomplished from concept to product in six months. It went through a trial run in a state election in 2001."

Rep. Rush Holt's bill seems a step in the right direction for the US:

"The issues of voter-verifiable receipts and secret voting systems could be resolved in the United States by a bill introduced to the House of Representatives last May by Rep. Rush Holt (D-New Jersey). The bill would force voting-machine makers nationwide to provide receipts and make the source code for voting machines open to the public. The bill has 50 co-sponsors so far, all of them Democrats."

Trends in Self-Posting of Research Material Online by Academic Staff:

From the introduction:

"With the rapid uptake of digital media changing the way scholarly communication is perceived, we are in a privileged position to be part of a movement whose decisions now will help to decide ultimately future courses of action. A number of strategies have recently emerged to facilitate greatly enhanced access to traditional scholarly content, e.g. open access journals and institutional repositories."

The Digital Imprimatur

| Permalink

How big brother and big media can put the Internet genie back in the bottle

The Digital Imprimatur (via Open Access News):

John Walker, The Digital Imprimatur, September 13, 2003 (revised October 9). The co-founder of Autodesk pulls together the grounds for pessimism about the future of the openness of the internet. Excerpt: With the advent of the internet "[i]ndividuals, all over the globe, were empowered to create and exchange information of all kinds, spontaneously form virtual communities, and do so in a totally decentralised manner, free of any kind of restrictions or regulations....Indeed, the very design of the Internet seemed technologically proof against attempts to put the genie back in the bottle....Earlier I believed there was no way to put the Internet genie back into the bottle. In this document I will provide a road map of precisely how I believe that could be done, potentially setting the stage for an authoritarian political and intellectual dark age global in scope and self-perpetuating, a disempowerment of the individual which extinguishes the very innovation and diversity of thought which have brought down so many tyrannies in the past."

more nonsense: SCO attacks GPL

| Permalink | 1 Comment

I think this is the most irrational nonsense yet to come out of the SCO camp. SCO attacks open-source foundation reports SCO as stating:

"The GPL violates the U.S. Constitution, together with copyright, antitrust and export control laws," SCO Group said in an answer filed late Friday to an IBM court filing. In addition, SCO asserted that the GPL is unenforceable.

Are they (the SCO folks) out of their minds? When did it become a violation (of any sort) to share for free your knowledge, your expertise, and any product that may derive from them?

Such sharing could certainly reduce the profits of commercial companies when the open source products in question are Linux, Apache, OpenOffice, etc. But, how does that violate the "U.S. Constitution, together with copyright, antitrust and export control laws"?

Apparently SCO is going for all or nothing, and the route they have taken will get them to nothing that much faster.

social software - what's in the name?

| Permalink

I've come across various sites and some articles (blog entries, etc.) talking about social software. The phrase does sound interesting, and the name (i.e. social software) appears to promise much more than what it actually turns out to be.

For example, in iCan for the Public the folks over at Many2Many state:

"The BBC's iCan is in public pre-beta, a social software project to foster social capital and democratic participation. I posted on M2M about the project back in May. (Just a little before that we were having the same power-law inspired discussion of weblog modalities we are today)."

Having reviewed the iCan site, I find it to be a collaborative tool/portal where people in the UK can share personal opinions and learn from each other. The site clearly states that iCan can't be used for commercial purposes.

The common denominator of the tools termed 'social software' seems to be the ability to facilitate open collaboration among the publics or users of such software, with the publishers/moderators playing a facilitating role. Accordingly, I would contend that a wide range of software packages that support collaboration have the potential to be used in a way that makes them 'social software'. For example, mailing list managers, CMS/portals, blogging software, etc., all fit the pattern. However, it is their use that makes them 'social software' or not. Needless to say, collaborative software packages that do not support open communication and the sharing of ideas and thoughts can't be considered 'social software'.

information science in Directory of Open Access Journals

| Permalink | 1 TrackBack

I just came across the Directory of Open Access Journals and was amazed at the number of open access peer-reviewed Library and Information Science journals. The "Directory of Open Access Journals ... covers free, full text, quality controlled scientific and scholarly journals ... [with the] aim to cover all subjects and languages."

Open Source Everywhere - not just in software

| Permalink | 1 Comment | 1 TrackBack

Open Source Everywhere by Wired's Thomas Goetz.

A must-read article elaborating and explaining various aspects of the open source philosophy, which is most widely apparent and widespread in software development.

"We are at a convergent moment, when a philosophy, a strategy, and a technology have aligned to unleash great innovation. Open source is powerful because it's an alternative to the status quo, another way to produce things or solve problems. And in many cases, it's a better way. Better because current methods are not fast enough, not ambitious enough, or don't take advantage of our collective creative potential."

Check out these open source efforts mentioned in the article:

  • OPEN SOURCE FILM
  • OPEN SOURCE RECIPES
  • OPEN SOURCE Π
  • OPEN SOURCE PROPAGANDA
  • OPEN SOURCE CRIME SOLVING
  • OPEN SOURCE CURRICULUM

Some quotes:

"Software is just the beginning … open source is doing for mass innovation what the assembly line did for mass production. Get ready for the era when collaboration replaces the corporation."

"But software is just the beginning. Open source has spread to other disciplines, from the hard sciences to the liberal arts. Biologists have embraced open source methods in genomics and informatics, building massive databases to genetically sequence E. coli, yeast, and other workhorses of lab research. NASA has adopted open source principles as part of its Mars mission, calling on volunteer "clickworkers" to identify millions of craters and help draw a map of the Red Planet. There is open source publishing: With Bruce Perens, who helped define open source software in the '90s, Prentice Hall is publishing a series of computer books open to any use, modification, or redistribution, with readers' improvements considered for succeeding editions. There are library efforts like Project Gutenberg, which has already digitized more than 6,000 books, with hundreds of volunteers typing in, page by page, classics from Shakespeare to Stendhal; at the same time, a related project, Distributed Proofreading, deploys legions of copy editors to make sure the Gutenberg texts are correct. There are open source projects in law and religion. There's even an open source cookbook."

"Of course, for all its novelty, open source isn't new. Dust off your Isaac Newton and you'll recognize the same ideals of sharing scientific methods and results in the late 1600s (dig deeper and you can follow the vein all the way back to Ptolemy, circa AD 150). Or roll up your sleeves and see the same ethic in Amish barn raising, a tradition that dates to the early 18th century. Or read its roots, as many have, in the creation of the Oxford English Dictionary, the 19th-century project where a network of far-flung etymologists built the world's greatest dictionary by mail. Or trace its outline in the Human Genome Project, the distributed gene-mapping effort that began just a year before Torvalds planted the seeds of his OS."

fighting information pollution

| Permalink

Web guru fights info pollution:

"The entire ideology of information technology for the last 50 years has been that more information is better, that mass producing information is better," he [Jakob Nielsen] says.

If you are a company involved in the management and manipulation of information, then certainly more information is better. However, that says little about the quality of life, and little about the quality of the information itself.

"The fix for information pollution is not complex, but is about taking back control your computer has over you."

This is a very profound philosophical statement; certainly not everyone believes there is control we have to take back from our computers. Just how do we go about taking back that control anyway? I'm not saying it is impossible, only that it is not easy, for many reasons, one of them being that not everyone believes there is control to be taken back. As with any solution to a potential problem, one of the most important steps in discovering the solution is the ability to diagnose the problem properly. In the case of information pollution, contextually diagnosing the root of the problem might turn out to be the hardest task.

Why PLoS Became a Publisher (Vol 1, Issue 1)

| Permalink

The Public Library of Science (PLoS) has finally published its first issue, Vol 1, Issue 1. Especially interesting is the opening article/editorial, Why PLoS Became a Publisher, which provides the rationale for open access to scholarly and scientific literature.

Quote:

"PLoS Biology, and every PLoS journal to follow, will be an open-access publication–everything we publish will immediately be freely available to anyone, anywhere, to download, print, distribute, read, and use without charge or other restrictions, as long as proper attribution of authorship is maintained. Our open-access journals will retain all of the qualities we value in scientific journals—high standards of quality and integrity, rigorous and fair peer-review, expert editorial oversight, high production standards, a distinctive identity, and independence."

The Beginning of the End of the Internet?

| Permalink

From The Beginning of the End of the Internet?:

"The Internet as we know it is at risk. Entrenched interests are positioning themselves to control the network's chokepoints and they are lobbying the FCC to aid and abet them. The Internet was designed to prevent government or a corporation or anyone else from controlling it. But this original vision of the Internet may soon be lost. In its place a warped view that open networks should be replaced by closed networks and that accessibility can be superceded by a new power to discriminate is emerging."

Scary thoughts.... but indeed very real...

Democratizing software: Open source, the hacker ethic, and beyond

Abstract:
"The development of computer software and hardware in closed-source, corporate environments limits the extent to which technologies can be used to empower the marginalized and oppressed. Various forms of resistance and counter-mobilization may appear, but these reactive efforts are often constrained by limitations that are embedded in the technologies by those in power. In the world of open source software development, actors have one more degree of freedom in the proactive shaping and modification of technologies, both in terms of design and use. Drawing on the work of philosopher of technology Andrew Feenberg, I argue that the open source model can act as a forceful lever for positive change in the discipline of software development. A glance at the somewhat vacuous hacker ethos, however, demonstrates that the technical community generally lacks a cohesive set of positive values necessary for challenging dominant interests. Instead, Feenberg’s commitment to "deep democratization" is offered as a guiding principle for incorporating more preferable values and goals into software development processes."

Factors of regional/national success in information society

| Permalink

Factors of regional/national success in information society developments: Information society strategies for candidate countries

Abstract:

"Bread or Broadband? The thirteen candidate countries (CCs) for entry into the European Union in 2004 (or beyond) confront difficult choices between "Bread or Broadband" priorities. The question raised in this article is how to put Information Society (IS) policy strategies at the service of social welfare development in these countries, while optimizing their resources and economic output.

The article summarises a dozen original research studies, conducted at the European Commission’s Institute for Prospective Technology Studies (IPTS). It identifies ICT infrastructures, infostructures and capabilities in the CCs, the economic opportunities these may offer their ICT domestic industry, and the lessons from previous IS development experience in the European Union that could possibly be transferable.

The paper concludes that only those trajectories that offer a compromise in the Bread or Broadband dilemma, taking into account both welfare and growth issues, will be politically sustainable."

From Symposium on the Role of Scientific and Technical Data and Information in the Public Domain:

"The body of scientific and technical data and information (STI)* in the public domain is massive and has contributed broadly to the economic, social, and intellectual vibrancy of our nation. The “public domain” may be defined in legal terms as sources and types of data and information whose uses are not restricted by statutory intellectual property laws or by other legal regimes, and that are accordingly available to the public for use without authorization. In recent years, however, there have been growing legal, economic, and technological pressures that restrict the creation and availability of public-domain information—scientific and otherwise. It is therefore important to review the role, value, and limits on public domain STI."

Technology addiction makes us unwitting slaves is a somewhat philosophical but also practical article, with very pragmatic eye-openers, that touches on the contemporary discourse of technological determinism vs. social constructionism, especially as it pertains to the role of information technology in the information society.

The last bullet/paragraph in the story states: "Technology's promise and alluring capabilities are used to surreptitiously entrap and willingly imprison members of the information-age society instead of truly empowering them."

Perhaps open source technologies, which are usually not developed with profitability (i.e. the bottom line in $$$) in mind, can show that technology does not have to be entrapping and imprisoning. This is exactly what I am trying to argue: open source software, as an actor in the ecology of open-source-supported technology, manifests itself as an antidote to the claim that technologies "surreptitiously entrap and willingly imprison members of the information-age society".

Quotes from the article:

"Yet as we rush to embrace the latest and greatest gadgetry or high-tech service and satisfy our techno-craving, we become further dependent on these products and their manufacturers -- so dependent that when something breaks, crashes, or is attacked, our ability to function is reduced or eliminated. Given these frequent technical and legal problems, I'm wondering if we're as free and empowered as we've been led to believe."

"To make things worse, government practically has outsourced the oversight and definition of technology-based expression and community interaction to for-profit corporations and secretive industry-specific cartels such as the Motion Picture Association of America, the Recording Industry Association of America and the Business Software Alliance. Such groups have wasted no time in rewriting the rules for how they want our information-based society to operate according to their interests, not ours."

technology as key to democracy

| Permalink

From Switzerland sees technology as key to democracy:

“It is our mission to make modern technology accessible to everybody,” Leuenberger said. “People living in developing countries can only escape poverty if they have access to information.”

Yes, technology can be an important key to democratic development. It has often been stated that technology will solve the problems of poverty and thus bring about democratic movements. While it might be true that technology has increased productivity in certain areas around the world, it is very much debatable whether it has decreased poverty in general.

If technology is to deliver democratic 'results', it must be used with that intent and for that purpose, by supporting the kind of economic development that improves the economies at the bottom.

Unfortunately, the main players in bringing information technologies to developing countries around the world are private companies who ultimately care about their bottom line (i.e. $$$); it can hardly be expected that much will be achieved in terms of equality of information access. This sort of exercise leads nowhere unless there is a long stick that the ITU can use to implement the promoted initiatives, to even modestly tilt the balance of access to information.

(I’ve also elaborated on these points in these previous entries: Discord at digital divide talks, is IT alone really a solution to poverty?, access to information a solution to poverty?!, Search engine for the global poor?)

information literacy: A new kind of worker

| Permalink

(courtesy of Information Literacy Weblog)

"There is a short article in the October 2003 Library and information update A new kind of worker. It is written by three people from the UK information consultancy TFPL. It highlights some of the benefits and challenges of embedding information literacy in the workplace, and uses TFPL's "Find; Organise; Create; Use; Share; and Value" approach to comment on current developments. It also mentions this weblog ;-)"

Reference: Winterman, V., Skelton, V. and Abell, A. (2003) "A new kind of worker." Library and information update, 2 (10), 38-39. http://www.cilip.org.uk/update/issues/oct03/article4oct.html

again on SCO's 'stupid' claim

| Permalink

By far one of the best-argued positions explaining the paradoxes and stupidities of SCO's claim that they 'own' Linux.

An open-source letter by Joe Firmage, a former vice president of strategy for Novell's Network Systems Group:

"OK, Sontag, fine. If you cannot inadvertently or accidentally assign your copyright, then there should be no problem in identifying exactly which portions of Linux allegedly violate SCO's rights. Simply issue a statement that identifies the offending code, stating clearly that the identification does not represent a release of rights into open source."

"The model of open science is "communistic" in the sense of community ownership--or rather community stewardship. But innumerable highly successful organizations and institutions in America are founded upon the ideal of community stewardship--including our democracy itself.
The downfall of communism was due to state control by totalitarians--an attribute embodied by today’s commercial software industry far more than by the emergent open-source science of information technology. "

launch of OAI-rights effort

| Permalink

From Open Archives Initiative and Project RoMEO Initiate OAI-rights:

"The Open Archives Initiative and Project RoMEO announce the formation of OAI-rights. The goal of this effort is to investigate and develop means of expressing rights about metadata and resources in the OAI framework. The result will be an addition to the OAI implementation guidelines that specifies mechanisms for rights expressions within the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH)."
...
"The area of rights expressions is wide-open with many organizations proposing languages and mechanisms. Therefore, the OAI-rights effort will aim to be extensible, providing a general framework for expressing rights statements within OAI-PMH. These statements will target both the metadata itself and the resources described by that metadata. In the context of this broader framework, OAI-rights will use Creative Commons licenses as a motivating and deployable example."

MIT for free, virtually: OpenCourseWare

| Permalink

MIT for free, virtually (serendipitous link discovery via ResourceShelf)

The Massachusetts Institute of Technology is making its course materials available to the world for free download.

"One year after the launch of its pilot program, MIT on Monday night quietly published everything from class syllabuses to lecture videos for 500 courses through its OpenCourseWare initiative, an ambitious project it hopes will spark a Web-based revolution in the way universities share information."

Let's see how far (in time and space) this ‘revolution’ will reach! Maybe, if each school does not have to (re)create the course materials from scratch, the tuition will go down! :) Or maybe someone will be making more money.

Nevertheless, in terms of information and/or knowledge sharing there ought not to be any doubt that this is a step in the right direction. Hopefully, its potential can be utilized to benefit society in general.

Discord at digital divide talks

| Permalink

Discord at digital divide talks:

"Sharp divisions over how to bridge the digital divide between rich and poor have emerged ahead of a UN summit on the issue in December."

No wonder... with the presence of representatives from the private sector who ultimately care about their bottom line (i.e. $$$), it can hardly be expected that much will be achieved in terms of equality of information access. This sort of exercise leads nowhere unless there is a long stick that the ITU can use to implement the promoted initiatives, to even modestly tilt the balance of access to information.

"African nations have been rallying behind a proposal from Senegal to set up a new 'digital solidarity fund'"

"Many industrialised nations are wary of creating a new UN fund. Instead they favour encouraging investment by private companies and re-directing existing aid."

It appears that the issue of control and profits is the sticking point. So the question does not seem to be whether the developing countries should be 'helped' with advanced information technology. See my entry the seriousness of equal access to information for all - Information Summit where I've tried to present my concerns.

Academia Urged To Offer Library Services To Graduates

| Permalink

Academia Urged To Offer Library Services To Graduates in ShelfLife, No. 125 (September 25 2003):

"Today's college and university students graduate expecting, even demanding, to have continued access to the kinds of information-rich facilities they grew accustomed to and relied on during their student days. So says Clifford Lynch, executive director of the Coalition for Networked Information (CNI), who argues that more must be done to accommodate these expectations. Lynch notes that the transition from an information service within higher education to one broadly available to the public is not always simple or quick. For example, there was a gap of some years between when college and university graduates first started creating demand for the Internet and when the commercial market place was prepared to service this demand, particularly at reasonable prices. Currently the demand for information services focuses on content rather than computation and communication, creating a market for the licensed, proprietary digital content that schools do not own but pay licensing fees for under contract with the publishers and other service providers who hold the rights to the content. Because many suppliers are not set up to license to individuals or want to charge absurd prices, libraries, both public and academic, represent a potential resource to serve both their graduates and the public at large. Lynch suggests that higher education institutions and their faculty have an obligation to put on their agenda the issue of making their information services available beyond their academies' walls. (Educause Review Sep/Oct 2003) http://www.educause.edu/ir/library/pdf/erm0356.pdf"

Information Quality, Liability, and Corrections

| Permalink

From Information Quality, Liability, and Corrections by Stephen Adams at Information Today:

"All of us have suffered the consequences of poor-quality information. For most of us, most of the time, the impact has minor significance and is of short duration. Perhaps we missed a bus or flight connection as a result of using an out-of-date timetable, or we lost an insurance claim because we failed to note the change in exemptions to the policy when we last renewed. As frustrating or painful as these examples may be, they are rarely fatal. However, in a small percentage of cases, poor quality information has direct, devastating consequences. For example, many of the arguments concerning personal privacy are based on the knowledge that an adverse comment on a person's reputation perpetuates itself, even after a formal retraction is published or a libel case is won. Some sorts of information are more "sticky" than others. Just as the garden weeds are more robust than the desired plants, bad information rears its ugly head more virulently than good information."

fairness in search engine results - the open source factor

| Permalink

In An Open-Source Search Engine Takes Shape there is an assumed relationship between open source, open ranking, and fairness of returned results.

Currently, all existing search engines have proprietary ranking formulas, and some search engines determine which sites to index on the basis of paid rankings. Cutting said that, in contrast, Nutch has nothing to hide and has no motive to provide biased search results.
...
"Open source is essential for transparency," he said. "Experts need to be able to validate that it operates correctly and fairly. Only open source permits this." If only a few Web search engines exist, he said, "I don't think you can trust them not to be biased."

I think this relationship is sound. But how does one test and evaluate whether an open source search engine will indeed result in 'open ranking' algorithms and thus lead to fairness?

The next issue to be dealt with is the scope and the understanding of fairness in the context of search engines. Should fairness be understood as proportional coverage (returned results vs. the total number of searched documents), or as equal coverage of queries even though some topics of interest might be less represented on the Internet? In addition, considering that no single search engine can cover/index the entire webspace, what would be the criteria for domain/URL inclusion for indexing?

I believe that an open source search engine might do better on fairness, but there still remain lots of issues to be dealt with as important factors in tilting the 'fairness' one way or another.
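To illustrate what an 'open ranking' formula can look like, here is a minimal TF-IDF scoring sketch in Python. This is not Nutch's actual algorithm, and the toy corpus is invented; the point is only that when the formula is in the open, anyone can read, test, and audit exactly how results are ordered.

    # A minimal, fully inspectable ranking sketch: TF-IDF scoring over a toy
    # corpus. This is NOT Nutch's algorithm; it only illustrates the kind of
    # formula that can be read and audited when a search engine is open source.
    import math
    from collections import Counter

    def score(query_terms, document, corpus):
        """Sum of tf * idf over the query terms for one document."""
        doc_terms = Counter(document.lower().split())
        n_docs = len(corpus)
        total = 0.0
        for term in query_terms:
            tf = doc_terms[term]
            doc_freq = sum(1 for d in corpus if term in d.lower().split())
            idf = math.log((1 + n_docs) / (1 + doc_freq)) + 1  # smoothed idf
            total += tf * idf
        return total

    corpus = [
        "open source search engines publish their ranking code",
        "proprietary engines keep ranking formulas secret",
        "open ranking lets anyone audit the results",
    ]
    query = ["open", "ranking"]
    for doc in sorted(corpus, key=lambda d: score(query, d, corpus), reverse=True):
        print(round(score(query, doc, corpus), 3), doc)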

The Year of the Blog: Weblogs in the Writing Classroom

| Permalink

The Year of the Blog: Weblogs in the Writing Classroom provides a set of educational related blogging resources.

Especially interesting are the viewpoints on blogs as writing practice, blogs as class content, and Academics Who Blog (under more resources).

The writings present the 'other' side of blogs and blogging, the side that is less talked about in the press. However, this side might emerge as the most important one as far as education, academia, and research are concerned.

A great resource! Nice food for thought! :)

From Swiss demand clear goals for Information Summit:

"At the opening of the third preparatory meeting for the summit in Geneva, Leuenberger set out his recommendations before more than 1,900 representatives from 143 nations, the private sector and non-governmental organisations. Leuenberger added that the main bone of contention was finding ways to finance the summit initiatives and he urged the participating nations to present more concrete ideas by September 26, the last day of the prep talks."

"The three-day summit, which kicks off in Geneva on December 10, hopes to develop an action plan to provide equal access to information for all people around the world."

The initiatives for equal access to information for all people around the world are to be admired, at least for recognizing the importance of access to information in today's information society (or, better said, a society relying so much on information exchange).

However, with the presence of representatives from the private sector who ultimately care about their bottom line (i.e. $$$), it can hardly be expected that much will be achieved in terms of equality of information access. This sort of exercise leads nowhere unless there is a long stick that the ITU can use to implement the promoted initiatives, to even modestly tilt the balance of access to information.

What usually happens in such meetings, though, is that the private sector, which controls the means of access as well as the information itself, is unwilling to give up some of its power. So what ends up happening is that the current private-sector players join forces with local private-sector players around the world, as if that meant equal access. The private sector is interested in the bottom line whether in developed countries or in developing countries. So, instead of equal access to information for all, the current private-sector players extend their control of access to information even further, paradoxically via the vehicles (such as this summit) that were supposed to enable that equal access.

What is a possible solution? Perhaps the state representatives to the Information Summit need to change their policies regarding access technologies and access to information. These types of summits are good, but ultimately the main responsibilities reside with the states themselves, with NGOs playing an important role in pushing their governments to enact 'fair' policies regarding access technologies and access to information.

What's a good learning culture?

| Permalink

In What's a good learning culture? George presents a very informative and interesting personal experience about satisfying information seeking needs.

Apart from the fact that "information need" seems to be used interchangeably with "need for knowledge" (I'm of the opinion that information does not equal knowledge, and as such the processes for satisfying information needs would differ from those for satisfying knowledge needs), I agree with George that informal means of seeking information have indeed become part of our lives.

In what George has written, a few parameters emerge: structured vs. unstructured content, structured vs. unstructured communication (for content delivery), and formal vs. informal contexts.

Depending on the particular information need at hand, some combination of the above parameters is applied in the process of information seeking. If we are to identify the tools that help us carry out the information-seeking process, a distinction becomes apparent. For example, e-mail communication is not structured content. One-to-one e-mail communication does not appear structured, and yet there might be an underlying communication structure (not necessarily apparent) because of the common background of the participants. On the other hand, many-to-many communication (i.e. discussion lists) may present a semi-structured communication process and a semi-formal context, depending on how the discussion is run (moderated, semi-moderated, etc.).

Models of Collaboration

| Permalink

Models of Collaboration presents and discusses five models of collaboration around the contexts/situations of Library, Solicitation, Team, Community, and Process Support.

"In this guest editorial we examine five models for collaboration that vary from barely interactive to intensely interactive. Granted the CS definition for collaboration requires some level of interaction by two or more people, and in the past we have said that reciprocal data access (such as you would find in a library or repository) is not collaboration, we have also said that technology, content and process are critical for any type of collaboration. This being the case we are expanding our definition of collaboration (slightly) to include content libraries as most of the vendors in this area have added collaborative functionality. In addition, content is often critical for a collaborate interaction to occur…" - David Coleman

(Found the link via eLearnspace.org entry)

Scholarly Electronic Publishing Bibliography - Version 50

| Permalink

The Scholarly Electronic Publishing Bibliography

"This bibliography presents selected English-language articles, books, and other printed and electronic sources that are useful in understanding scholarly electronic publishing efforts on the Internet. Most sources have been published between 1990 and the present; however, a limited number of key sources published prior to 1990 are also included. Where possible, links are provided to sources that are freely available on the Internet."

Check the TOC.

Here is an article describing the bibliography, published in The Journal of Electronic Publishing.

DEFINING DIGITAL LIBRARY USERS AND NEEDS

| Permalink

Via ShelfLife, No. 121 (August 28 2003)

DEFINING DIGITAL LIBRARY USERS AND NEEDS

"A collaborative Digital Library is a user-centered system. In addition to the traditional purpose of providing resource discovery services, the system might also provide specialized services for some classes of users, ranging from basic alerting and selective dissemination services to complex, virtual community working spaces. In this sense the Digital Library represents a special workspace for a particular community, not only for search and access but also for the process, workflow management, information exchange, and distributed work group communications. But most digital library models are based on non-digital environments. As a result, the perceptions of users and the roles they play are biased by traditional views, which might not be automatically transferable to the digital world. Nor are they appropriate for some new emerging environments. New models are challenging traditional approaches. In many cases they redefine the roles of actors, and even introduce new roles that previously did not exist or were not performed by the same type of actor. With no means of formal expression, it is difficult to understand objectively the key actor/role issues that arise in isolated Digital Library cases, or to perform comparative analysis between different cases. This directly affects how the Technical Problem Areas identified by the June 2001 DELOS/NSF Network of Excellence brainstorming report will be addressed. The report states that the highest-level component of a Digital Library system is related to the system's usage. By understanding the various actors, roles, and relationships, digital libraries will improve their ability to enable optimal user experiences, provide support to actors in their use of Digital Library services, and ultimately ensure that the information is delivered or accessed using the most effective means possible. (Report, DELOS/NSF Working Group, 13 June 2003)"

Greenstone: open source Digital Library (DL) system

| Permalink

Greenstone Digital Library Open Source Software:

"Greenstone is a suite of software for building and distributing digital library collections. It provides a new way of organizing information and publishing it on the Internet or on CD-ROM. Greenstone is produced by the New Zealand Digital Library Project at the University of Waikato, and developed and distributed in cooperation with UNESCO and the Human Info NGO. It is open-source, multilingual software, issued under the terms of the GNU General Public License"

DSpace: open source Digital Library (DL) system

| Permalink

From http://www.dspace.org/:

"DSpace is a groundbreaking digital institutional repository designed to capture, store, index, preserve, and redistribute the intellectual output of a university’s research faculty in digital formats."

"Developed jointly by MIT Libraries and Hewlett-Packard (HP), DSpace is now freely available to research institutions worldwide as an open source system that can be customized and extended. DSpace is designed for ease-of-use, with a web-based user interface that can be customized for institutions and individual departments."

Permanent Public Online Access to Government Information

| Permalink

From Agreement Ensures Permanent Public Online Access to Government Information:

"August 25, 2003 — Public Printer, Bruce R. James, and Archivist of the United States, John W. Carlin announced an agreement that will enable the Government Printing Office (GPO) and the National Archives and Records Administration (NARA) to ensure free and permanent access to more than 250,000 federal government titles available through GPO Access (http://www.gpoaccess.gov)."

"A more recent study carried out by the American Association of Law Libraries, “State by State Report on Permanent Public Access to Electronic Government Information,” defined permanent public access “as the process by which applicable government information is preserved for current continuous and future public access.”"

Economics of open access

| Permalink

Courtesy of Open Access News:

"Catherine Zandonella, Economics of open access, TheScientist, August 22, 2003. The good news: she covers the controversy in detail, moving well past the cliches and misunderstandings common just a few months ago. The bad news: except for one line on PubMed Central, she ignores the economics of open-access archives. (PS: For the record, she also misquotes me. I said that even if an open-access journal publisher went out of business or were bought by a commercial publisher, the back runs of its open access journals would remain openly accessible, not that they would remain in the "public domain".)"

As I've tried to explain in my comment to Repetition, Repeating, etc, this is a further response to the above as well as Steven's follow-up entry Redundancy all over again.

Many articles and pieces on blogging have been written from different perspectives and viewpoints. Different people blog for different reasons depending on their background, education, profession, current situations, world view, etc...

Most blog entries appear to take the form of responses and comments on other articles or blog entries, as well as links to relevant resources. Yet others, as explained in "Scholars Who Blog" in the Chronicle of Higher Education, have taken the form of research and academic publishing. Some are personal diaries. There are class discussion blogs, etc...

In any case, I don't believe bloggers should worry whether they are 'repeating' things across the blogsphere and apparently creating 'redundancy'. As you have noticed, I've put both redundancy and repetition within quotes to denote that perhaps it is paradoxical to talk of such functions in the blogsphere.

Individuals have things to say, and blogs have opened another venue for doing so. Indeed, a venue much different than before, because blogs and the corresponding blogging tools provide open communication and interconnectedness amongst individuals with similar interests.

For example, the new semester is about to start in September. Many graduate classes are conducted in seminar form and thus provide a platform for discussing interesting and relevant issues. For me, this will be another trigger for writing blog entries. For one, writing about issues one is concerned with ensures better understanding and comprehension of the subject. Maybe an interesting individual will comment on a blog entry with a twist that brings a new learning experience and a viewpoint not initially and readily available, given the constraints resulting from previous experience and materials read. If nothing more, it is a learning experience that can expand beyond things immediately reachable.

One could argue that all of this could be done via discussion lists and posting on a regular personal page. Perhaps it could. However, blogging has an advantage over a regular personal page because blog entries are usually fed into aggregators, read on a regular basis by those interested, and interconnected with similar entries on other blogs, thus, in a sense, facilitating targeted information finding and learning.

As far as mailing lists are concerned, one difference comes to mind. While mailing lists are topic-centric, blogs are multi-topic-centric, with more than one topic center in a blog (i.e. categories). Thus, blogs provide a more rounded profile of an individual expressing his/her opinions, ideas, and thoughts. In addition, besides blog entry interconnectedness, the categorization of entries can facilitate topic-of-interest interconnectedness. Following the traces (i.e. links) from one blog entry to another ensures exposure to multiple opinions, as with mailing lists.

Needless to say, any such discussion, whether facilitated through blogs and blogging, discussion lists, class seminars, or other discussion platforms, will result in the same or similar issues being discussed more than once. But this is good, as each instance has its own peculiarities and surroundings, which makes it unique for the participants.

For these reasons, when viewed from the participants' perspective, the issues of redundancy and repetition are non-issues. Even when viewed from the perspective of the blogsphere ecosystem, repetition of the issues discussed tells much about the blogsphere itself and about the particular topics/issues being discussed.

Related:
how blogs effect each other
blogs, minds, documents, representations
Information Relevance

MIT's OpenCourseWare

| Permalink

MIT's OpenCourseWare project is yet another manifestation of the philosophy of 'openness'.

In terms of use, OpenCourseWare is licensed under a Creative Commons License, which means that the available course materials can be shared freely and openly for non-commercial purposes, and any derivative work should be distributed under the same license as the one under which the OpenCourseWare material is made available.

Great!

Link: The Open Archives Initiative

| Permalink

From The Open Archives Initiative:

"The Open Archives Initiative develops and promotes interoperability standards that aim to facilitate the efficient dissemination of content. The Open Archives Initiative has its roots in an effort to enhance access to e-print archives as a means of increasing the availability of scholarly communication. Continued support of this work remains a cornerstone of the Open Archives program."

open access journals: Revolution or evolution?

| Permalink | 1 Comment

From EMBO Reports 4, 8, 741–743 (2003), in Revolution or evolution? by Susan R. Owens, about how:

"A shift to an open-access model of publishing would clearly benefit science, but who should pay?"

Well, if the research is funded by taxpayers' money (federally funded research), it would be appropriate for the end user to have free access to the resulting scientific information. This still calls for an organizing structure to maintain and disseminate the research through journals and other publications.

An attempt at a practical solution is well argued by the Public Library of Science (PLoS) founders in A Fight for Free Access To Medical Research:

“The PLoS plan is simple in concept: Instead of having readers pay for scientific results through subscriptions or other charges, costs would be borne by the scientists who are having their work published -- or, practically speaking, by the government agencies or other groups that funded the scientists -- through upfront charges of about $1,500 an article.”
“The shift is not as radical as it sounds, the library's founders argue. That is because government agencies and other science funders are already paying for a huge share of the world's journal subscriptions through "indirect cost" grants to university libraries, which are the biggest subscribers.”
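A rough back-of-the-envelope sketch of the argument follows. The only figure taken from the article is the roughly $1,500 per-article charge; the number of articles and the subscription spending are invented purely for illustration.

    # Back-of-the-envelope sketch of the author-pays model described above.
    # Only the ~$1,500 per-article fee comes from the article; the remaining
    # figures are hypothetical and used purely for illustration.
    FEE_PER_ARTICLE = 1500            # USD, from the PLoS plan quoted above
    articles_per_year = 2000          # hypothetical research output of a funder
    subscription_spend = 4_000_000    # hypothetical current spend on journal tolls

    author_side_cost = articles_per_year * FEE_PER_ARTICLE
    print(f"Author-side publication charges: ${author_side_cost:,}")
    print(f"Current subscription spending:   ${subscription_spend:,}")
    if author_side_cost <= subscription_spend:
        print("Under these assumptions, redirected subscription money covers the fees.")
    else:
        print("Under these assumptions, the fees exceed current subscription spending.")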

Related:
Public Library of Science - more on open access
Open Access to Scientific Research
open access to federally funded research

other facets of open source

| Permalink | 2 TrackBacks

In response to George's entry Open Source as a Social Movement, I would like to add that open source should be looked at beyond the software space. Open source software is just one manifestation of the open source philosophy, and open source as a social movement is yet another manifestation, in a way more abstract than open source software given the latter's practical results, its products, as explained in Open Source as a Social Movement.

The 'source' in open source can mean different things to different people and in different contexts, depending on the level of abstraction and/or pragmatics:


  • to software development, it is the code

  • to the publishing function, it is the content, therefore 'open content'

  • to the access function, it is the process of communication, therefore 'open access'

  • etc...

Independently of the various manifestations of open source, there appear to be two important factors in trying to understand and elaborate them: open content and open communication, aided by the concept of translation. I have elaborated many of these items in the corresponding entries [follow the links] as well as in the following categories: Open Content and Open Communication, The Open Source Philosophy, and Actor-Network theory & methodology.

From a more social perspective, in the open source Internet as a possible antidote to corporate media hegemony it is argued that the open source Internet, as a result of the open source movement, manifests itself as a possible antidote to corporate media hegemony, not only in the US but also throughout the world.

Public Library of Science - more on open access

| Permalink

As an addition to my previous entry regarding Framing the Issue - Open Access by ARL, it is informative to note that the Public Library of Science (PLoS) has emerged as a practical attempt to establish such open access scientific/research publishing. In A Fight for Free Access To Medical Research it is written:

"Why is it, a growing number of people are asking, that anyone can download medical nonsense from the Web for free, but citizens must pay to see the results of carefully conducted biomedical research that was financed by their taxes?"

Here is the role of the PLoS:

"The Public Library of Science aims to change that. The organization, founded by a Nobel Prize-winning biologist and two colleagues, is plotting the overthrow of the system by which scientific results are made known to the world -- a $9 billion publishing juggernaut with subscription charges that range into thousands of dollars per year."

and the benefit of open access:

"For scientists, the benefits would extend well beyond being able to read scientific papers for free. Unlike their ink-on-paper counterparts, scientific papers that are maintained in open electronic databases can have their data tables downloaded, massaged and interlinked with databases from other papers, allowing scientists to compare and build more easily on one another's findings."

Do we need any more arguments about why taxpayer funded research publications should be accessible for free? Yes, we could go on and on trying to explicate the benefit of free and open access to scientific information, as many have done. However, the above argument is simple and convincing. :) Perhaps not to the commercial publishing enterprises.

Related:
Open Access to Scientific Research
open access to federally funded research

open access to federally funded research

| Permalink | 1 TrackBack

ARL's (Association of Research Libraries) page Framing the Issue - Open Access provides detailed information about open access to scholarly publication and research.

The following definition of open access to scholarly and scientific information is provided:

"As used by ARL, open access refers to works that are created with no expectation of direct monetary return and made available at no cost to the reader on the public Internet for purposes of education and research. The Budapest Open Access Initiative stated that open access would permit users to read, download, copy, distribute, print, search, or link to the full texts of works, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the Internet itself."

The argument is that any government-funded research (and its corresponding publications) should be free to be accessed by anyone. This is a rather specific proposition related to government-funded research. How about open access to all scholarly publications? What factors need to be in place to make this happen? For the pro and con arguments, please see open access to scientific information.

the cost of digital content and digital libraries

| Permalink

(via ShelfLife, No. 117 (July 31 2003))

DIGITAL TECHNOLOGY: DOES IT PAY?
Economic Factors of Digital Libraries

"The literature is full of articles about digital projects, new technologies and methods, research, development and user studies, but the economic aspects of managing digital content and establishing digital libraries are woefully under-represented. In this issue of the Journal of Digital Information (JODI) dedicated to the theme of economics, the editors grapple with the choices made by individuals, institutions and communities as they work to balance the desire to go digital with the reality of scarce resources. There are several components to be considered in cost-evaluating digital libraries. In addition to the immediate start-up costs of either creating or purchasing digital content, institutions have to consider the expenses associated with providing patrons with access to that content, as well as the implicit costs of preserving, managing and maintaining digital resources for the long term. One problem is that, instead of replacing
print content, electronic journals are often treated as a value-added service, meaning that the library budget appears to be shrinking for the same amount of information resource. (Journal of Digital Information 9 Jun 2003)"

Open content and value creation

| Permalink

From First Monday
Open content and value creation by Magnus Cedergren:

"In this paper, I consider open content as an important development track in the media landscape of tomorrow. I define open content as content possible for others to improve and redistribute and/or content that is produced without any consideration of immediate financial reward — often collectively within a virtual community. The open content phenomenon can to some extent be compared to the phenomenon of open source. Production within a virtual community is one possible source of open content. Another possible source is content in the public domain. This could be sound, pictures, movies or texts that have no copyright, in legal terms."

Note that while the "open content phenomenon can to some extent be compared to the phenomenon of open source", from another perspective it is perhaps more appropriate to look at open source as open content. I would argue that open source (as related to software) is a special case of open content. I guess my definition of open content then becomes broader than what the above article suggests in relation to media.

Related:
Open Content and Open Communication

From First Monday
The Augmented Social Network: Building identity and trust into the next-generation Internet

"This paper proposes the creation of an Augmented Social Network (ASN) that would build identity and trust into the architecture of the Internet, in the public interest, in order to facilitate introductions between people who share affinities or complementary capabilities across social networks. The ASN has three main objectives: 1) To create an Internet-wide system that enables more efficient and effective knowledge sharing between people across institutional, geographic, and social boundaries; 2) To establish a form of persistent online identity that supports the public commons and the values of civil society; and, 3) To enhance the ability of citizens to form relationships and self-organize around shared interests in communities of practice in order to better engage in the process of democratic governance. In effect, the ASN proposes a form of "online citizenship" for the Information Age."

Certainly an interesting concept. Perhaps this is one step towards the publishing of research material free from commercial publishers.

open digital libraries - the open access way

| Permalink

In In DSpace, Ideas Are Forever, the NYT reports on institutional repositories (i.e. digital library repositories) and publishing practice.

"The Journal Backlash Institutional repositories are novel in that much of their content sidesteps academic publishers, which have come under attack from the so-called open-access movement. Some scholars complain that journals delay publication of research and limit the audience because of their soaring costs."

"Out of frustration with journals' limitations, some scientists have started their own archives."

Certainly there seems to be momentum, rightfully so, against the bureaucratic delays in publishing research articles by publishers of journals and other research periodicals. It appears that the open access movement might be restructuring the publishing of research material in a fundamental way.

However, before any major change happens, the issue of authority will have to change fundamentally in researchers' perceptions. Whatever authority lies within the peer-review process of a particular journal will perhaps have to shift to individual universities or other not-for-profit institutions.

Information Access Alliance

| Permalink

From Information Access Alliance

"The Information Access Alliance believes that a new standard of antitrust review should be adopted by state and federal antitrust enforcement agencies in examining merger transactions in the serials publishing industry. When reviewing proposed mergers, antitrust authorities should consider the decision-making process used by libraries – the primary customers of STM and legal serial publications – to make purchasing decisions. Only then will these mergers be subjected to the degree of scrutiny they deserve and adequate access be preserved."

A noble and very practical effort. Let's just hope that the 'right' ears are listening and that the powerful publishing corporations do not block it. See my arguments in open access to scientific information, a response to the article Free Public Access to Science—Will It Happen? (July 7, 2003).

DIGITAL SHARING GOES DEEPER

| Permalink

(courtesy of ShelfLife, No. 116, July 24, 2003)

"Libraries are collaborative by nature, sharing expertise, staff and ideas. Shared cataloguing is a good example: a cataloguer in one library creates a record about a book for use in a central database rather than just his own system, and everyone else who contributes to that database can download that record into their local systems rather than re-doing it themselves.
Now librarians are talking about extending that collaboration and "deep sharing" digital content by creating a Distributed Online Digital Library. The DODL would depart from the status quo in terms of function, service, reuse of content and library interdependency. First, it would allow a common interface for distributed collections, rather than the widely divergent "looks" of today's linked collections. Second, and more radically, it would allow both librarians and end users to download digital master files as malleable objects for local recombinations. This means they could be enriched with content from librarians or teachers, specially crafted for particular audiences, and unified in appearance and function. A user could download, combine, search, annotate and wrap the results in a seamless digital library mix for others to experience. The services such deep sharing could provide are staggering, and the economics are just as attractive. Imagine 30 libraries coordinating to digitize their collections. Each funds individual parts of the project, but all equally share in the sum of their efforts. So for the cost of building one digital object and depositing it in the DODL, each library would gain 30 downloadable objects. As participation becomes more widespread, the equation becomes even more compelling. (Educause Review Jul/Aug 2003) http://www.educause.edu/ir/library/pdf/erm0348.pdf"
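
The cost-sharing arithmetic in the excerpt above is worth making concrete. Below is a minimal sketch under hypothetical assumptions (the function name, the number of objects per library, and the cost per object are all invented for illustration); it simply computes what each participating library funds locally versus what it can download from the shared pool.

```python
# Hypothetical illustration of the "deep sharing" economics described above:
# N libraries each digitize their own share of objects and deposit them in a
# shared distributed digital library, and every library gains access to the
# whole pool. All figures are invented for illustration.

def deep_sharing_gain(num_libraries: int, objects_per_library: int,
                      cost_per_object: float) -> dict:
    """Return what each library funds locally versus what it can download."""
    cost_per_library = objects_per_library * cost_per_object
    objects_available = num_libraries * objects_per_library
    return {
        "cost_per_library": cost_per_library,
        "objects_funded_locally": objects_per_library,
        "objects_available_to_each": objects_available,
        "gain_factor": num_libraries,  # fund one share, use all N shares
    }

# e.g. 30 libraries, each digitizing 1,000 objects at a hypothetical $10 each:
# each library spends $10,000 but gains access to 30,000 objects.
print(deep_sharing_gain(num_libraries=30, objects_per_library=1000,
                        cost_per_object=10.0))
```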

News from the open access movement

| Permalink

Open Access News is an excellent up-to-date blog dedicated to:

"Putting peer-reviewed scientific and scholarly literature on the internet. Making it available free of charge and free of licensing restrictions. Removing the barriers to serious research. "

You may also want to check The SPARC Open Access Newsletter and its archives.

(found this link via ResourceShelf)

what after open processes and open content?

| Permalink

In O'Reilly Gazes Into the Future of Open Source, Peter Galli presents some of O'Reilly's thoughts about the future of open source. What is most interesting in O'Reilly's OSCON presentation is the recognition that open source is about more than just software; open source software is just one practical instance of the open source philosophy. The article is not clear about the why, the how, or what exactly is meant by 'paradigm shift':

“The new rules governing the Internet paradigm shift are based on the fact that an open architecture inevitably leads to interchangeable parts; competitive advantage and revenue opportunities move "up the stack" to services above the level of a single device; information applications are decoupled from both hardware and software; and lock-in is based on data and not on proprietary software, he said.”

However, they are perhaps on the right track in suggesting that competitive advantage in the future will come not from proprietary hardware and software, but from higher-level information service products. Openness will inevitably push competition up the stack of the information service delivery process.

Perhaps content will matter more, as it should… but then, what happens when the open source philosophy is applied to content as well? Where will competitive advantage come from when dealing with open content? Perhaps from the processes around content creation, organization, delivery, and sharing? And what happens when those processes become 'open processes' as well? Interestingly, some of this open process is already embedded in open source software… hmmm…

media technologies for open communication

| Permalink

While I agree in principle with Fiske in rejecting the technological determinist point of view, I also believe that, because communication technologies are socially constructed, particular technologies have characteristics that are better suited to serve their designers' interests. My argument is that if a particular technology was designed to serve corporate interests, most of its features will be driven to maximize profits. [see the entry on adaptive structuration for this argument]

In contrast, if a group of people sets out to design technology for open communication and democratic access to information, the technology in question will have features that enable ease of access to information and make it hard for that technology to be used for restrictive purposes. But again, it isn't the technology per se; it is the social structures that tilt technology use toward particular purposes.

Unfortunately, most of the communication technology in use today has been built and appropriated for profit-making activities. Example: cable could have been made interactive, but it wasn't. The Internet and many of its communication tools exhibit characteristics of open communication. However, even here corporate power has entered the arena, attempting to strangle those open communication characteristics by controlling access…

Reference:
Fiske, J. (1996). Media Matters: Race and Gender in U.S. Politics. Minneapolis: University of Minnesota Press.

Related:
The open source Internet as a possible antidote to corporate media hegemony

technologies for Free Speech

| Permalink

From Hacking for Free Speech:

"The free exchange of information over the Internet has proven to be a threat to the social and political control that repressive governments covet. But rather than ban the Internet (and lose valuable business opportunities), most repressive governments seek to limit their citizens' access to it instead."

"To do so, they use specialized computer hardware and software to create firewalls. These firewalls prevent citizens from accessing Web pages - or transmitting emails or files - that contain information of which their government disapproves."

"Hacktivism's approaches raise a number of interesting questions. Can hacktivism really work? That is, can a technology successfully complement, supplant, or even defy the law to operate either as a source of enhanced freedom (or, for that matter, social control)? On balance, will technological innovation aid or hinder Net censorship?"

In response to the third quote above, on whether technology can “successfully complement, supplant, or even defy the law to operate either as a source of enhanced freedom (or, for that matter, social control)”, the appropriate framework needs to be applied. From a technological determinist point of view, it is apparent that technology does exhibit characteristics that could make it a source of enhanced freedom or a tool for social control. This in turn leads us to social constructionism, to understand how these technologies are constructed in the first place and why they have acquired the attributes and properties they have.

Certainly, the appropriate framework cannot be exclusively social constructionism or technological determinism. It has to be a mixture of both, as information technology does not exist in isolation: it has been created as a result of the social structures that initiated it (for a purpose) and has been embedded within them afterwards. However, once information technology becomes part of the social ecosystem (an iterative process in itself), depending on its properties (whether they are restrictive or exhibit characteristics of open communication and free exchange of ideas), it will project those properties onto the structures within which it is embedded.

Thus, one might see open source technology as an instigator of open communication and the exchange of open content, precisely because it has been built with such attributes and properties.

It is not hard to see that a technology that does not provide the functionality for its end users to communicate freely among themselves cannot be used “as a source for enhanced freedom” (e.g., TV as a one-way communication technology). The open source Internet, in turn, manifests itself in many ways that let users communicate amongst themselves without control from a third party. Perhaps this positions the open source Internet as a possible antidote to corporate media hegemony.

democracy through open source

| Permalink

From Democracy Design Workshop at New York Law School Awarded $80,000 Grant By Rockefeller Brothers Fund

"The Democracy Design Workshop (www.nyls.edu/democracyhome.php) is directed by Beth Simone Noveck, an associate professor of law at New York Law School, where she also directs the Institute for Information Law and Policy. She is a founding fellow of the Information Society Project at Yale Law School. The Workshop aims to be a meetinghouse for thinkers and practitioners who, through research, dialogue and design, explore how to use technology to strengthen democracy online and off."??

"We are delighted by the Rockefeller Brothers Foundation support for our work," Noveck said. "By using cutting-edge, open-source technology for the promotion of strong democracy, we can create a tool for the exchange of best practices and ideas in collaboration and participation, helping practitioners learn from and engage with one another." Noveck added, "The Inventory is our flagship civic innovation design project. It is the knowledge base to support our civic innovation endeavors and represents precisely the kind of interdisciplinary, problem-solving work that should be part of contemporary legal education."

the blogosphere topology

| Permalink

In The Network Is The Computer, John Hiler presents an analogy between ants and their colonies, on the one hand, and blogs and the blogosphere, on the other. An interesting analogy.

How does one go about analyzing this analogy further, and perhaps explicating the topology called the 'blogosphere'? What properties should blogs have, and how should they be connected amongst themselves, to constitute a blogosphere?

Perhaps we should be talking about a multitude of blogospheres, categorized by topical, temporal, spatial, methodological, contextual, situational, or cognitive relevance.

In how blogs affect each other I have suggested using actor-network theory and its methodology as the appropriate framework for studying the way blogs (the actual actors) are interconnected into a network topology (the blogosphere); a minimal graph sketch of this idea follows below.
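
To make the notion of a network topology of blogs a bit more concrete, here is a minimal sketch that treats the blogosphere as a directed graph: each blog is a node and each cross-blog link is a directed edge. The blog names and links are hypothetical, purely for illustration, and ranking by in-degree is just one crude proxy for how central a blog is in such a topology.

```python
# A minimal sketch of a blogosphere modeled as a directed graph.
# The blog names and links below are hypothetical, for illustration only.
from collections import defaultdict

# hypothetical cross-blog links: (linking blog, linked-to blog)
links = [
    ("blogA", "blogB"),
    ("blogA", "blogC"),
    ("blogB", "blogC"),
    ("blogC", "blogA"),
    ("blogD", "blogC"),
]

outlinks = defaultdict(set)  # blogs each blog links to
inlinks = defaultdict(set)   # blogs that link to each blog

for source, target in links:
    outlinks[source].add(target)
    inlinks[target].add(source)

# rank blogs by in-degree, i.e. how many distinct blogs link to them
for blog in sorted(inlinks, key=lambda b: len(inlinks[b]), reverse=True):
    print(blog, "is linked to by", len(inlinks[blog]), "blog(s)")
```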

Related:
blogs, minds, documents, representations

By Mentor Cana, PhD
more info at LinkedIn
email: mcana {[at]} kmentor {[dot]} com
