July 2003 Archives

why machines can't reason or think

| Permalink | 1 TrackBack

In Helping Machines Think Different, Noah Shachtman at Wired News reports on the LifeLog project led by Ron Brachman:

""Our ultimate goal is to build a new generation of computer systems that are substantially more robust, secure, helpful, long-lasting and adaptive to their users and tasks. These systems will need to reason, learn and respond intelligently to things they've never encountered before," said Ron Brachman, the recently installed chief of Darpa's Information Processing Technology Office, or IPTO."

An example of what IPTO/PAL might do:

"If people keep missing conferences during rush hour, PAL should learn to schedule meetings when traffic isn't as thick. If PAL's boss keeps sending angry notes to spammers, the software secretary eventually should just start flaming on its own."

This is supposed to be achieved through a proposed technique called REAL-WORLD REASONING, based on three concepts: 1) High-performance reasoning techniques, 2) Expanding the breadth of reasoning and hybrid methods, and 3) Embedded reasoners for active knowledge bases.

Now, in any dictionary, the word 'reason' has to do with mental states, analytic thought, logical deduction and induction, etc., all of which ultimately depend on the thinking process, a mental state that has to do with the human mind. If we are to agree that the human mind is a manifestation of the electro-mechanical-biological human brain, then the approach of rules and logical entities interconnected amongst themselves might some day bring about a machine that 'acts' like the human mind.

Most interestingly, however, there do not seem to have been any attempts to look at the human process of 'reasoning' and 'thinking' from an angle different from the electro-mechanical-biological viewpoint. A brief reading of the REAL-WORLD REASONING proposal does not reveal any new insights, except that it proposes another approach based on the information-processing understanding of information, where bits of information are manipulated using relevance judgments for 'aboutness' assessment. Perhaps the notion of relevance as used over the past few decades needs to be reassessed?

So, what is uniquely different with the REAL-WORLD REASONING proposal?

The reason why the efforts of AI (artificial intelligence) have so far proven unsatisfactory in emulating the human reasoning and thinking process might have to do with the very fact that the approaches have been only mechanistic, and thus incompatible with the very nature of human experience and with the human mind in particular. So, we want computers to reason, learn, and think intelligently, and yet we apply mechanistic approaches to achieve these functions which require intellect?

It would be nice to hear if anyone knows of an effort, practical or theoretical, that attacks the issues of machine 'thinking' and 'reasoning' from a perspective fundamentally different from the information-as-thing (i.e. mechanistic) understanding. Anyone?

link: World-Information.Org

| Permalink

Courtesy of Information Literacy Weblog:

"For those interested in information society issues, and interesting website is World-Information.Org This is "a collaborative effort of organizations and individuals who are directly concerned with issues of participatory involvement in Information and Communication Technologies, and the Internet as we know it today." It involves artists, scientists and others, and encourages a creative and critical approach to the internet and digital media. They organise conferences and exhibitions (with some online material), and their Read me section includes some interesting material (e.g. on "disinformation", the role of government intelligence etc.)"

Information Access Alliance

| Permalink

From Information Access Alliance

"The Information Access Alliance believes that a new standard of antitrust review should be adopted by state and federal antitrust enforcement agencies in examining merger transactions in the serials publishing industry. When reviewing proposed mergers, antitrust authorities should consider the decision-making process used by libraries – the primary customers of STM and legal serial publications – to make purchasing decisions. Only then will these mergers be subjected to the degree of scrutiny they deserve and adequate access be preserved."

A noble and very practical effort. Let's just hope that the 'right' ears are listening and the powerful publishing corporations do not block this effort. See my arguments in open access to scientific information, a response to the article Free Public Access to Science—Will It Happen? (July 7, 2003).


| Permalink

(courtesy of ShelfLife, No. 116 (July 24 2003))

"Libraries are collaborative by nature, sharing expertise, staff and ideas. Shared cataloguing is a good example: a cataloguer in one library creates a record about a book for use in a central database rather than just his own system, and everyone else who contributes to that database can download that record into their local systems rather than re-doing it themselves.
Now librarians are talking about extending that collaboration and "deep sharing" digital content by creating a Distributed Online Digital Library. The DODL would depart from the status quo in terms of function, service, reuse of content and library interdependency. First, it would allow a common interface for distributed collections, rather than the widely divergent "looks" of today's linked collections. Second, and more radically, it would allow both librarians and end users to download digital master files as malleable objects for local recombinations. This means they could be enriched with content from librarians or teachers, specially crafted for particular audiences, and unified in appearance and function. A user could download, combine, search, annotate and wrap the results in a seamless digital library mix for others to experience. The services such deep sharing could provide are staggering, and the economics are just as attractive. Imagine 30 libraries coordinating to digitize their collections. Each funds individual parts of the project, but all equally share in the sum of their efforts. So for the cost of building one digital object and depositing it in the DODL, each library would gain 30 downloadable objects. As participation becomes more widespread, the equation becomes even more compelling. (Educause Review Jul/Aug 2003) http://www.educause.edu/ir/library/pdf/erm0348.pdf"
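The cost-sharing arithmetic in the quote above is easy to make concrete. A quick back-of-the-envelope sketch (the per-object cost figure is purely hypothetical, not from the article):

```python
# Back-of-the-envelope economics of a shared digital library (DODL).
# Assumed figures: 30 participating libraries, each digitizing and
# depositing objects at a hypothetical cost of $100 per object.
libraries = 30
cost_per_object = 100  # hypothetical dollars

# Each library funds its own object but can download everyone's deposits.
objects_gained_per_library = libraries           # 30 objects
cost_per_library = cost_per_object               # pays only for its own
effective_cost_per_object = cost_per_library / objects_gained_per_library

print(objects_gained_per_library)   # 30
print(effective_cost_per_object)    # ~3.33 dollars per object
```

As the quote notes, the more libraries participate, the lower the effective cost per object for everyone.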

News from the open access movement

| Permalink

Open Access News is an excellent up-to-date blog dedicated to:

"Putting peer-reviewed scientific and scholarly literature on the internet. Making it available free of charge and free of licensing restrictions. Removing the barriers to serious research. "

You may also want to check The SPARC Open Access Newsletter and its archives.

(found this link via ResourceShelf)

link: librarystuff.net

| Permalink

Steven at librarystuff has done a great job in providing a lengthy (ongoing and fresh) repository of library resources.

librarystuff: "The library weblog dedicated to resources for keeping current and professional development."

open source is not about hackers and anarchy

| Permalink | 2 Comments | 1 TrackBack

In the July 7th, 2003 edition of HBS Working Knowledge, in The Organizational Model for Open Source, Mallory Stark interviews Siobhán O'Mahony who is an assistant professor in the Negotiation, Organizations, and Markets group at the Harvard Business School.

The article raises and discusses the possible negative implications of nonprofit organizations forming around open source software activities, as well as the implications of corporate actors' involvement in open source software production.

In one of the responses O'Mahony states:

“Thus, hackers who contribute to the open source community are often intrinsically motivated.”

The article appears, to some extent, to equate open source software production with hackers and hacker culture. While it is undeniable that 'hackers' have contributed greatly to the pool of open source software, open source is more than just what hackers contribute. It would not be surprising to hear that many who contribute to open source software do not consider themselves hackers, at least not in the sense and connotation the word 'hacker' carries with the general public.

Even by Eric Raymond's definition as it appears in this article, defining "hackers as those who love programming for the sake of doing it, for the sake of obsessively solving a problem", it is hard to necessarily and exclusively equate hackers with contribution to open source. Many contribute to a particular open source software package for reasons totally different from obsessiveness; social contribution is one of them. Not all contributors to open source are obsessive programmers. Besides, some who love programming do it obsessively while working for a company for pay.

Peer recognition is purported to be one of the main reasons for contributing to open source. Needless to say, all people, everywhere, would like to be recognized for the work they do, whether it is open source or closed source.

Further, I'm not quite clear as to where (and why) the contradiction lies in creating nonprofit foundations to help 'manage' open source activities:

“So I suppose what can be considered to be contradictory is that many community-managed open source projects have incorporated and created nonprofit foundations with formal boards and designated roles and responsibilities”.

Open source is not about anarchy; at least it does not appear to be so. Thus, unless orderly communication, collaboration, and coordination can be achieved without a formal organizational structure, non-profit foundations can play a role in moderating the activities of open source software production. After all, software production requires order, planning, and an understanding of roles and responsibilities, be it open source or closed source.

F.C.C. Media Rule Blocked in House in a 400-to-21 Vote

| Permalink

From F.C.C. Media Rule Blocked in House in a 400-to-21 Vote:

"WASHINGTON, July 23 — The House of Representatives overwhelmingly passed legislation today to block a new rule supported by the Bush administration that would permit the nation's largest television networks to grow bigger by owning more stations.
If, as is becoming more likely, the provision survives in final legislation, President Bush will face a difficult political predicament. He could carry out his veto threat and alienate some of his traditional constituents, which include several conservative organizations opposed to a number of new rules adopted by the F.C.C. Or, he could sign the legislation, abandon the networks and undercut his own advisers who have recommended that he reject the legislation."

Search engine for the global poor?

| Permalink

Beth @ IDBlog reports about BBC's article World's poor to get own search engine with the following quote:

"Researchers at the Massachusetts Institute of Technology (MIT) argue that existing Web technologies cater to "Western" users, who are "cash-rich but time-poor." Users in poor countries, they say, where phone lines can be hard to come by and many Internet connections are extremely slow, are in a very different boat: little money but lots of time. To address this gap, researchers are developing a search engine that sends requests by e-mail to MIT, where computers perform searches and return e-mail lists of filtered results the next day. The premise of the system, according to MIT's Saman Amarasinghe, is that "developing countries are willing to pay in time for knowledge." Because those who could benefit from the search engine have only very slow Internet connections, the software is being distributed on CDs to users in developing countries."

A novel approach indeed. If this idea proves successful, hopefully it does NOT get appropriated as THE solution. Some may find it 'unnecessary' to upgrade their facilities because 'they have a solution'. In the process, NGOs and other foreign non-profit organizations might be tempted to reduce the funding needed to improve the information infrastructure that would make the MIT software/process obsolete.

The bottom line: hopefully the 'patch' is not seen as the proper cure.
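The batched, store-and-forward search flow the article describes can be sketched in a few lines. This is a minimal illustration, not MIT's actual software; the function names, the queue, and the toy index are all invented:

```python
# Sketch of a store-and-forward search service: clients submit queries
# by e-mail, a well-connected server runs them in a nightly batch and
# mails back filtered results. All names here are illustrative.
from collections import deque

query_queue = deque()

def submit_query_by_email(sender, query):
    """Queue a query received by e-mail from a low-bandwidth client."""
    query_queue.append((sender, query))

def run_nightly_batch(search_fn, max_results=10):
    """Run queued searches on the fast connection; return outbound mail."""
    outbox = []
    while query_queue:
        sender, query = query_queue.popleft()
        results = search_fn(query)[:max_results]  # filter/trim results
        outbox.append((sender, results))
    return outbox

# Example with a toy search function standing in for a real engine:
fake_index = {"malaria": ["who.int/malaria", "cdc.gov/malaria"]}
submit_query_by_email("user@example.org", "malaria")
mail = run_nightly_batch(lambda q: fake_index.get(q, []))
print(mail)  # [('user@example.org', ['who.int/malaria', 'cdc.gov/malaria'])]
```

The point of the design is exactly the time-for-bandwidth trade described in the quote: the client pays a day of latency instead of an expensive, fast connection.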

blogs and ranking (real time)

| Permalink

Yet another 'how to' blog and blogging article. It provides a good summary of what blogs and blogging are, with some insights about how to start, what to expect, etc. Via Open Stacks, courtesy of BeSpacific ...

From What Are Blogs and Why Is Everyone So Excited About Them?:

"Search Engine Rankings – the Advantages of a Blog

(DK): A blog can really put you on the map and Ernie can probably best attest to that. I was shocked by the impact a blog has on search engine placement. Not only does your ranking improve, but the speed your pages get added to a search engine like Google is astonishing. It used to be that I’d expect a three-month wait for a new page to show up in Google, no matter what technique I used. If I mention the page in my blog, it shows up in just a few days."

open source more than just software

| Permalink

May the Source Be With You presents rather a convincing case that open source is more than just about software:

"Can a band of biologists who share data freely out-innovate corporate researchers?"

"But hoarding information clashes directly with another imperative of scientific progress: that information be shared as quickly and widely as possible to maximize the chance that other scientists can see it, improve on it, or use it in ways the original discoverer didn't foresee. "The right to search for truth implies also a duty; one must not conceal any part of what one has recognized to be true," reads the Albert Einstein quote inscribed on a memorial outside the National Academy of Sciences offices in Washington."

"Fortunately, a potentially revolutionary counter-trend is developing and helping science return to the ideal that Einstein extolled. A small but growing number of scientists, most of them funded by the National Institutes of Health, are conducting cutting-edge research into the most complex problems of biology not in highly secure labs but on the Internet, for all the world to see. Called "open-source biology," this work is the complete antithesis of corporatized research. It's a movement worth watching--and rooting for."

a list of blog and blogging resources

| Permalink

THE INTERNET COURSES Weblogs page, by Dr. L. Anne Clyde, is an extensive weblog/blog and blogging resource.

Apart from the blog and blogging resources it also provides a list of LIS related blogs.

However, the most interesting link on this page is the one pointing to a test that can answer the question Are You a Blogaholic? Try it ... it is fun :)

what after open processes and open content?

| Permalink

In O'Reilly Gazes Into the Future of Open Source, Peter Galli presents some of O'Reilly's thoughts about the future of open source. What is most interesting in O'Reilly's presentation at the OSCON conference is the recognition that open source is about more than just software; open source software is just one practical instance of the open source philosophy. The article is not clear about the why, the how, and what exactly is meant by 'paradigm shift':

“The new rules governing the Internet paradigm shift are based on the fact that an open architecture inevitably leads to interchangeable parts; competitive advantage and revenue opportunities move "up the stack" to services above the level of a single device; information applications are decoupled from both hardware and software; and lock-in is based on data and not on proprietary software, he said.“

However, they are perhaps on the right track in suggesting that competitive advantage in the future will come not from proprietary hardware and software, but from higher levels in the stack of information service products. Openness will inevitably push competition into the upper layers of the information service delivery process.

Perhaps the content will matter more, as it should ... but then, what happens when the open source philosophy is applied to the content as well? Where will the competitive advantage come from when dealing with open content? Perhaps from the processes around content creation, organization, delivery and sharing? And what about when this process becomes an 'open process' as well? Interestingly, some of this open process is embedded in open source software already ... hmmm ...

Blog Change Bot

| Permalink

Courtesy of Blogroots:

"Blog Change Bot Blog Change Bot (blogchangebot on AIM) is a blog monitoring service which updates you via AOL Instant Messanger when a blog you are interested in is updated. Subscribe via AIM or iChat to be automatically notified when the blog is updated."

'the medium is the message' or 'the medium is a message'

| Permalink

How obvious is McLuhan's statement 'the medium is the message'? When I read McLuhan for the first time, I was a bit skeptical about accepting the phrase at face value. This is perhaps because in my everyday work as a systems analyst involved with information exchange, the content is an important aspect. However, accepting that the medium is the message operates at a more profound level of practicality as well as consciousness.

The medium is independent of the content to the extent that new technologies have a tremendous impact in shaping society, bringing new concepts of what it means to be 'here' and 'there', both in space and in time. Thus, it is more appropriate to say that 'the medium is a message', in the sense that the nature of the medium by itself is informative about the broader understanding of the new technology and its place in the appropriate social structures. The content comes to play a role once the technology has somewhat established itself in society (or the relevant structures), and even then it is heavily interleaved with the medium via which it is transmitted.

Is there a fine line where we can claim that the content is independent of the medium? While we may consider the medium independently of the message/content, it is not as easy to consider the content independently of the medium within which it is exchanged. It seems that the content is shaped extensively by the medium for which it is intended.

media technologies for open communication

| Permalink

While I agree in principle with Fiske in rejecting the technological-determinism point of view, I also believe that, due to the social construction of communication technologies, there ought to be some characteristics of particular technologies that are better fitted to serve their designers. My argument is that if a particular technology was designed to serve corporate interests, most of its features will be driven to maximize profits. [see the entry on adaptive structuration for this argument]

In contrast, if a group of people sets out to design technology for open communication and democratic access to information, the technology in question will have features that enable ease of access to information and make it hard for that technology to be used for restrictive purposes. But again, it isn't the technology per se; it is the social structures that tilt technology use toward particular purposes.

Unfortunately, most of the communication technology in use today has been built and appropriated for profit-making activities. Example: cable could have been made interactive, but it wasn't. The Internet and many of its communication tools exhibit characteristics of open communication. However, even here corporate power has entered the arena, attempting to strangle the open communication characteristics by controlling access ...

Fiske, J. (1996). Media Matters: Race and Gender in U.S. Politics. Minneapolis: University of Minnesota Press.

The open source Internet as a possible antidote to corporate media hegemony

on the social dimensions of information technology

| Permalink

From Social Dimensions of Information Technology: Issues for the New Millennium by G. David Garson, ed. North Carolina State University, to be published by Idea Group Publishers in late fall 2000 (http://www.idea-group.com/):

"In a related essay on "Human Capital Issues and Information Technology, " Byron L. Davis and Edward L. Kick, using educational institutions as a case in point, discuss how several "mega-forces" impact institutional functioning. They note that sociologists have long cautioned against the sort of rapid technological changes that outstrip human ability to successfully adapt to them. "Cultural lag" is in some measure inevitable, they conclude, but when social change is drastic, the consequences for the human condition, as well as human capital, can be pernicious in the extreme."

"Finally, in an important article titled "International Network for Integrated Social Science," William Sims Bainbridge, a sociologist and Science Advisor to the Directorate for Social, Behavioral and Economic Sciences of the National Science Foundation, discusses how computer-related developments across the social sciences are converging on an entirely new kind of infrastructure that integrates across methodologies, disciplines, and nations. This article examines the potential outlined by a number of conference reports, special grant competitions, and recent research awards supported by the National Science Foundation. Together, these sources describe an Internet-based network of collaboratories combining survey, experimental, and geographic methodologies to serve research and education in all of the social sciences, providing an unprecedented collection of resources available to social scientists on an international basis."

technologies for Free Speech

| Permalink

From Hacking for Free Speech:

"The free exchange of information over the Internet has proven to be a threat to the social and political control that repressive governments covet. But rather than ban the Internet (and lose valuable business opportunities), most repressive governments seek to limit their citizens' access to it instead."

"To do so, they use specialized computer hardware and software to create firewalls. These firewalls prevent citizens from accessing Web pages - or transmitting emails or files - that contain information of which their government disapproves."

"Hacktivism's approaches raise a number of interesting questions. Can hacktivism really work? That is, can a technology successfully complement, supplant, or even defy the law to operate either as a source of enhanced freedom (or, for that matter, social control)? On balance, will technological innovation aid or hinder Net censorship?"

In response to the third quote above, on whether technology can "successfully complement, supplant, or even defy the law to operate either as a source of enhanced freedom (or, for that matter, social control)", the appropriate framework needs to be applied. From the technological-determinism point of view, it is apparent that technology does exhibit characteristics that would make it a source of enhanced freedom or a tool for social control. This in turn leads us to social constructionism to understand how these technologies are constructed in the first place, and why they have acquired the attributes and properties they have.

Certainly, the appropriate framework cannot be exclusively social constructionism or technological determinism. It has to be a mixture of both, as information technology does not exist in isolation: it has been created as a result of the social structures that initiated it (for a purpose), and it becomes embedded afterwards. However, once the information technology becomes part of the social ecosystem (an iterative process in itself), depending on its properties (whether they are restrictive or exhibit characteristics of open communication and free exchange of ideas), it will project its properties onto the structures within which it is embedded.

Thus, one might see open source technology as an instigator of open communication and the exchange of open content, precisely because it has been built with such attributes and properties.

It is not hard to see that a technology which does not provide the functionality for its end users to communicate freely among themselves cannot be used "as a source for enhanced freedom" (e.g. TV, as a one-way communication technology). In turn, the open source Internet manifests itself in many ways that let users communicate amongst themselves without control from a third party. Perhaps this positions the open source Internet as a possible antidote to corporate media hegemony.

democracy through open source

| Permalink

From Democracy Design Workshop at New York Law School Awarded $80,000 Grant By Rockefeller Brothers Fund

"The Democracy Design Workshop (www.nyls.edu/democracyhome.php) is directed by Beth Simone Noveck, an associate professor of law at New York Law School, where she also directs the Institute for Information Law and Policy. She is a founding fellow of the Information Society Project at Yale Law School. The Workshop aims to be a meetinghouse for thinkers and practitioners who, through research, dialogue and design, explore how to use technology to strengthen democracy online and off."??

"We are delighted by the Rockefeller Brothers Foundation support for our work," Noveck said. "By using cutting-edge, open-source technology for the promotion of strong democracy, we can create a tool for the exchange of best practices and ideas in collaboration and participation, helping practitioners learn from and engage with one another." Noveck added, "The Inventory is our flagship civic innovation design project. It is the knowledge base to support our civic innovation endeavors and represents precisely the kind of interdisciplinary, problem-solving work that should be part of contemporary legal education."

mapping geo locations to cyberspace

| Permalink

I came across an interesting website (GeoURL) that maps geo locations to URLs and, most interestingly, provides 'neighborhood' functionality, so you can see who is blogging near you or what other things around you are present in cyberspace. Nice ... :) Wanna see who is blogging near you or has a cyberspace presence?
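Under the hood, a 'who is near you' lookup reduces to computing great-circle distances between registered coordinates. A sketch using the standard haversine formula (the sample sites and the lookup function are invented for illustration, not GeoURL's actual implementation):

```python
# Great-circle 'neighborhood' lookup, as a GeoURL-style service might do it.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in km between two (lat, lon) points on the Earth's surface."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical registered sites: (url, lat, lon)
sites = [
    ("http://a.example.org", 40.71, -74.01),   # New York
    ("http://b.example.org", 34.05, -118.24),  # Los Angeles
    ("http://c.example.org", 40.73, -73.99),   # also New York
]

def neighbors(lat, lon, within_km):
    """All registered URLs within a given radius of a point."""
    return sorted(url for url, slat, slon in sites
                  if haversine_km(lat, lon, slat, slon) <= within_km)

print(neighbors(40.71, -74.01, 50))  # the two New York sites
```

A real service with many registered sites would use a spatial index instead of scanning every entry, but the distance test is the same.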

the next big thing: open source grid computing

| Permalink

From Teaching Computers to Work in Unison:

"The grid is widely regarded as the next stage for the Internet after the World Wide Web. The Web is the Internet's multimedia retrieval system, providing access to text, images, music and video. The promise of the grid is to add a problem-solving system."
"Our belief was that open source was the best way to maximize adoption," he said. "Globus is an infrastructure technology, and it is only going to be successful if everyone uses it. And if you're doing something that is primarily funded by the government, sharing the software seemed the most appropriate thing to do."

Apparently, the difference between grid computing and distributed computing is in the ability to provide for 'collective' problem solving.
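That 'collective' problem solving can be illustrated with a scatter-gather sketch: a coordinator splits one problem into pieces, farms them out, and combines the partial answers. Here local threads stand in for machines on a grid; the task (summing squares) is just a placeholder:

```python
# Scatter-gather: one problem split across workers, partial results
# combined into a single answer. Local threads stand in for grid nodes.
from concurrent.futures import ThreadPoolExecutor

def solve_chunk(chunk):
    """A worker's share of the problem: here, summing squares."""
    return sum(n * n for n in chunk)

def solve_on_grid(numbers, workers=4):
    """Scatter the input across workers, then gather and combine results."""
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(solve_chunk, chunks)
    return sum(partials)  # gather: combine partial answers

print(solve_on_grid(list(range(10))))  # 285 (= 0² + 1² + … + 9²)
```

The distinction the article draws is that a grid coordinates the workers toward one shared answer, rather than merely running many independent jobs.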

computers can't understand

| Permalink

In Making Computers Understand, Leslie Walker reports on an apparent innovation/invention suggesting that computers can understand and be aware of context. While the phraseology chosen might be journalistic license to 'spice up' the article, some claims by the company are nevertheless rather troublesome:

“Abir, 46, claims to have unlocked the mystery of "context" in human language with a series of algorithms that enable computers to decipher the meaning of sentences -- a puzzle that has stumped scientists for decades.”
"This man literally has figured out the way the brain learns things," Klein said. "On a theoretical level, his insight basically is this: Understanding a concept is nothing more than viewing a concept from two different perspectives."

The very title of the article, "Making Computers Understand", makes you immediately skeptical. Especially troublesome is the quote above stating that "This man literally has figured out the way the brain learns things". Isn't it premature to claim with such certainty that we have discovered how the brain works, when history has shown that many such claims have been proven wrong by later discoveries and innovations?

Further, how does one prove that two different perspectives are sufficient for understanding a concept? I hope this does not mean that they believe two perspectives are necessary simply because there are 'two sides to the same story'. Usually there are more than two sides to the same story, and understanding 'reality' and its context probably takes much more than two perspectives.

Besides, computers can’t decipher the meaning of a sentence as claimed in the article…

"Bloggers Gain Libel Protection"

| Permalink

From Bloggers Gain Libel Protection:

"The Ninth Circuit Court of Appeals ruled last Tuesday that Web loggers, website operators and e-mail list editors can't be held responsible for libel for information they republish, extending crucial First Amendment protections to do-it-yourself online publishers.

Online free speech advocates praised the decision as a victory. The ruling effectively differentiates conventional news media, which can be sued relatively easily for libel, from certain forms of online communication such as moderated e-mail lists. One implication is that DIY publishers like bloggers cannot be sued as easily."

AOL's poor choice of words (re: AOL Journals)

| Permalink | 1 Comment

This is a further response to Beth's Origins of 'weblog' and 'blog' and her comments on my blog entry.

I guess AOL is settling for a very poor choice of words by calling "AOL Journals" what everyone else is calling 'blogs' and 'weblogs'. While AOL might not be helping the 'blogging' discourse, their choice of words will not make the phenomenon any less of a phenomenon.

It appears, though, that AOL is trying to appropriate part of the "AOL Journals" ecosystem (what would AOL call the new ecosystem, if not the 'blogsphere'?). Why would someone contribute content that AOL might use for further profit? I would like to believe that AOL's move is not initiated for profit purposes, but then what is the corporate incentive?

One can also argue that AOL's choice of words is actually counterproductive, because it seems to remove from bloggers their most powerful incentive: the feeling that their individual blog is their own and not AOL's.

calculations: scripts and processes as KM assets

| Permalink

In It All Adds Up, the notion of calculations as knowledge assets is presented as a novel and unique process in KM:

"Specifically, MathSoft is promoting the idea of using its technology to facilitate what it calls calculation management—the practice of viewing engineering calculations as knowledge assets that should be managed and reused."

Aren't the folks at CIO magazine a bit late with their 'discovery'? Mathcad calculations put up on an intranet for use by a community of engineers are nothing more than scripts (or processes) for performing certain functions, producing some sort of output(s) given a set of inputs. The open source movement has been doing this for how long? :)
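The point that a managed 'calculation' is just a reusable function from inputs to outputs can be made concrete. A sketch, using a standard cantilever-beam deflection formula as a stand-in for the kinds of engineering calculations the article mentions (the formula and figures are illustrative, not from the article):

```python
# A 'managed calculation' is just a documented, reusable function:
# given inputs, it produces outputs, and it can be shared on an
# intranet like any other script.

def cantilever_tip_deflection(force_n, length_m, e_pa, i_m4):
    """Tip deflection of a cantilever under an end load: F * L^3 / (3 * E * I)."""
    return force_n * length_m ** 3 / (3 * e_pa * i_m4)

# Any engineer can reuse the shared calculation with their own inputs:
deflection = cantilever_tip_deflection(
    force_n=1000.0,  # 1 kN end load
    length_m=2.0,    # 2 m beam
    e_pa=200e9,      # steel, Young's modulus in Pa
    i_m4=8.0e-6,     # second moment of area in m^4
)
print(round(deflection * 1000, 3))  # tip deflection in mm
```

Versioning and documenting such functions so colleagues can find and trust them is essentially what 'calculation management' amounts to.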

Relatively speaking, in a corporate-culture context where knowledge (here in the form of scripts/calculations) is perhaps not easily shared by individuals for fear of losing some advantage, this could be considered a unique knowledge management practice.

The Democracy in Cyberspace Initiative

| Permalink

From The Democracy in Cyberspace Initiative

"The Democracy in Cyberspace Initiative of the Information Society Project (ISP) at Yale Law School wants to promote democracy by developing best practices technologies and models to strengthen democracy both on-line and off. In particular, we want to cataylze the development of technologies and processes that move beyond the "thin" 'patron-client' model of government where government is a procurer of goods and purveyor of services, to focus on participatory and deliberative forms of strong democratic life. We are interested in realizing technology's potential to improve civic life and help citizens take an active and informed role in their own governance."

the blogsphere topology

| Permalink

In The Network Is The Computer John Hiler presents an analogy between ants and their colonies on the one hand and blogs and the blogsphere on the other. An interesting analogy.

How does one analyze this analogy further and perhaps explicate the topology called the 'blogsphere'? What properties must blogs, and the ways they are interconnected, have in order to constitute a blogsphere?

Perhaps we should be talking about a multitude of blogspheres, categorized by topical, temporal, spatial, methodological, contextual, situational, or cognitive relevance.

In how blogs affect each other I've suggested using actor-network theory and its methodology as an appropriate framework for studying the way blogs (the actual actors) are interconnected into a network topology (the blogsphere).

blogs, minds, documents, representations

the digital divide: more than a technological issue

| Permalink

Information On-Ramp Crosses a Digital Divide

"For years, community activists and politicians around the country have talked about the need to help people who have been left behind in the digital revolution because of poverty, disabilities or fear of new technology. Without computer literacy, the argument goes, disadvantaged groups will become more excluded in the high-tech economy. Yet many efforts have meant little more than making it possible for people to surf the Web from a library terminal."
"It [WinstonNet] will allow any resident with a library card to have an e-mail account; transact business with the city, like payment of parking tickets; and store homework or other documents on a central server so they can be easily retrieved from any site on the network."

A well-intentioned project attempting to narrow the digital divide. However, as in many other similar projects, the most important aspect is not addressed or thought through: just how does the technology by itself fit within the relevant social structures and fix the underlying social problems that have produced the digital divide?

Don't get me wrong: technology can be a great tool, but it must be well planned to produce positive outcomes for the intended groups. Otherwise, it might just reinforce the existing social structures without remedying the digital divide.

on the origins of weblog and blog

| Permalink | 2 Comments

In IDblog: Origins of 'weblog' and 'blog', Beth Mazur presents a short but concise account of the origins of the words ‘blog’ and ‘weblog’.

Just when do we expect these two words, together with 'blogging', to enter English (and other) dictionaries? :)

W3: The Technology & Society Domain

| Permalink

From The Technology & Society Domain:

"Working at the intersection of Web technology and public policy, the Technology and Society Domain's goal is to augment existing Web infrastructure with building blocks that assist in addressing critical public policy issues affecting the Web.

Technical building blocks available across the Web are a necessary, though not by themselves sufficient to ensure that the Web is able to respond to fundamental public policy challenges such as privacy, security, and intellectual property questions. Policy-aware Web technology is essential in order to help users preserve control over complex policy choices in the global, trans-jurisdictional legal environment of the Web. At the same time, technology design alone cannot and should not be offered as substitutes for basic public policy decisions that must be made in the relevant political fora around the world."

on the role of the freedom of information

| Permalink | 1 Comment


"As stated above, democracy is not just a reformation of institutions (this is its final stage) but a reformation in the minds of the society on the whole, coming from its interior forces. If the socium is ripe enough to take over the responsibility to realise itself and its future, it means that the primary reform should happen in two spheres. First of all, in education: in breaking obsolete traditions in the minds, and educating citizens with creative, free mentality, capable of actively participating in discussions and comprehending social realities and tendencies impartially, without creating idealistic abstractions in the best sense of medieval utopias, having nothing in common with the realities of the present day life. Secondly, in the sphere of information, which should lay the foundation, the global field for working out social ideas, perception of present realities and their possible evolution. If the stress is not put on those two cornerstones of democracy then countries with either totalitarian or any other unnatural type will come forward attempting to hide by the democratic forms the negative aspects of life, or the developed democratic countries would mess around the external institutions and loose the real interest for reforms and, thus, would prepare good grounds for external, illusionary but very active social activity for the sake of the activity."

In Google, Blogging and the Australian Web Model it is argued that:

"... Different to the States, the Internet and more specifically the web [in Australia] is dominated by large companies. In America the web is seen to be a place where the one-man-band and large companies can co-exist and to a large extent it is the little person who drives the agenda for the web rather than a large company."

"In contrast with other countries, there is relatively little real information developed for the Internet by grassroots people."

It is indeed apparent that in the States large companies co-exist with the ‘one-man-band’. However, the coexistence is not on equal terms, and it does not appear that ‘it is the little person who drives the agenda for the web rather than a large company.’

This is not to say that the ‘little person’ does not have venues for driving the agenda for the Web. Indeed, the open source Internet does provide the capability and the potential for the little person to drive the agenda. However, just because the capability is there does not mean it is exercisable. For one, large companies in the US that are involved in one way or another with the Internet (access or content providers) are ultimately interested in the bottom line (i.e. their profits). Needless to say, if the little person’s agenda does not fit the agenda supported by the large companies, the ideas, opinions, and thoughts of the ‘one-man-band’ will be suppressed from the public discourse by means of restricted access and restricted content distribution.

Having said the above, I should emphasize that I do believe the open source Internet as we know it today possesses the properties and attributes to empower the ‘little person’ or the ‘one-man-band’ to impose certain agendas (to some extent) on the large companies. In the open source Internet as a possible antidote to corporate media hegemony I have argued exactly this: the open source Internet, as a result of the open source movement, manifests itself as a possible antidote to corporate media hegemony, not only in the US but throughout the world.

What makes the open source Internet a possible antidote to corporate media hegemony? Its open nature: open content and open communication. Unless the access points and other ISPs start policing anything and everything published and communicated via personal web pages, blogs, and e-mails, the possibility will always exist for the masses to communicate, organize, and set the agendas for the discourse, thereby pushing large media corporations to seriously address them. This, however, requires a critical mass. And unless the agenda of that critical mass is in line with the ‘profit’ agendas of the large corporations, it will be pushed to the sidelines, away from the eyes and minds of the public discourse.

In any case, it is quite apparent that the use of the open source Internet has provided a venue for the ‘little persons’ to make a difference and be heard. Blogging has provided another genre and a unique venue for the ‘little persons’ to communicate and set the agenda(s).

Certainly, so far the large media corporations have appropriated any such capabilities and properties of the Internet exclusively for ‘profit making’. How is this different from Australia?

Is blogging different enough to escape the ‘profit making’ machinery of the large media corporations? Only history will tell…

open content in education

| Permalink

Collaborative development of open content: A process model to unlock the potential for African universities by Derek Keats

"Given the cost of content, the under-resourcing of universities and the scattered nature of expertise in Africa, the collaborative development of open content seems like a useful way to get high-quality, locally-relevant content for using to enhance teaching-and-learning. However, there is currently no published operational model to guide institutions or individuals in creating collaborative open content projects. This paper examines lessons learned from open source software development and uses these lessons to build the foundations of a process model for the collaborative development of open content."

The Center of Open Source & Government

| Permalink

Roll Back the FCC's Rule Changes

| Permalink

ACTION ALERT: Roll Back the FCC's Rule Changes

"Over the protests of hundreds of thousands of Americans, a range of public interest advocacy groups and two dissenting Democratic commissioners, the FCC on June 2 voted to repeal or weaken some of the few remaining checks on the dominance of big media companies. Attention now moves to Congress, as a number of lawmakers attempt to roll back at least some of the changes, some of which now appear to be more drastic than previously reported.

For instance, most media outlets have reported that under the FCC's new rules, a single company can now own TV stations that reach 45 percent of U.S. households, up from 35 percent. Because of a little-reported loophole, however, a single company could actually reach far more people-- in theory, as much as 90 percent of U.S. viewers (New York Times, 5/13/03). "

From The Blogging Revolution:

"Think about it for a minute. Why not build an online presence with your daily musings and then sell your first book through print-on-demand technology direct from your Web site? Why should established writers go to newspapers and magazines to get an essay published, when they can simply write it themselves, convert it into a .pdf file, and charge a few bucks per download? Just as magazine and newspaper editors are slinking off into the sunset, so too might all the agents and editors and publishers in the book market.

This, at least, is the idea: a publishing revolution more profound than anything since the printing press. Blogger could be to words what Napster was to music - except this time, it'll really work. Check back in a couple of years to see whether this is yet another concept that online reality has had the temerity to destroy."

Indeed, established writers do not have to go to newspapers, magazines, and book publishers for wide distribution of their writings. However, the fact that they are established is the key point. How does one become an established writer through online presence alone?

An online presence does not have the credibility and authority of the printed press, at least not yet. Sooner or later, though, such credibility and authority will probably come directly via the Web, unlike until now. Linking and ranking is perhaps one way. Some sort of online-publisher certification might appear here and there. Nevertheless, if online work itself is to be the basis for authority and credibility, blogging is leading the way.

Additionally, wide distribution is usually one of the key reasons why writers prefer one publishing venue over another. The point is to be read. So, unless an online presence attracts a massive audience, how can a writer be widely distributed? A good comparison would be to the innovation of the printing press and the rise of the book as an agency for social change.

Perhaps it is neither revolution nor evolution... it is both at once, since previous work, independent of the medium of distribution, almost certainly affects an author's future works... First, however, instances of purely online credibility and authority have to happen... and if that has not already happened, it probably will soon. Second, a critical mass is needed both for gaining credibility and authority and for sustaining readership. This, too, will take a mixture of online and offline publishing for some time.

Would this make blogging to online publishing what the printing press was to the book?

The Book and Information: Social and historical context and forces

def: information competence

| Permalink

information competence
"the ability to find, evaluate, use, and communicate information in all of its various formats" - Work Group on Information Competence, Commission on Learning Resources and Instructional Technology (CLRIT), California State University (CSU) system. Information Competence in the CSU: A Report. Dec. 1995. http://www.csupomona.edu/~library/InfoComp/definition.html

A definition recommended by the Work Group is that information competence is the fusing or the integration of library literacy, computer literacy, media literacy, technological literacy, ethics, critical thinking, and communication skills.

open access to scientific information

| Permalink | 3 TrackBacks

Free Public Access to Science—Will It Happen? (July 7, 2003) — If Congressman Martin Sabo of Minnesota has his way, the results of federally funded research in science and medicine will be available freely to all. Rep. Sabo introduced a bill, Public Access to Science Act, HR 2613, on June 26, 2003. The proposed legislation states that copyright protection is not allowed for any work produced as a result of federally funded research. The legislation further states: “the Internet makes it possible for this information to be promptly available not only to every scientist and physician who could use it to further the public good, but to every person with access to the Internet at home, in school, or in a library.”

Derk Haank, former chairman of Elsevier Science, disagrees with the views of Eisen and Rep. Sabo. He said: “The material has to be available for the people who need it. And when I talk about people who need it, I am not talking about the general public, because we are talking here about scientific information, specialist information. People who want to use this and who need it are part of an institute. You don’t do it as a self-proclaimed intellectual in your garden shed.”(Kaser, Dick. “Ghost in A Bottle”, Information Today, February 2002.)

Derk Haank's preconceived notion that the products of scientific research are neither needed nor usable by the general public is definitely wrong. It is perhaps true that under the current arrangements for publishing scientific information the general public is perceived as not very interested, in part because the scientific information is not readily available. However, with the proper infrastructure in place, making scientific information readily available would remove one of the barriers between it and the general public.

Then, the most challenging task would be to inform the general public that such information is freely available and can be of great use.

def: information literacy

| Permalink

Information Literacy: The ability to know when there is a need for information, to be able to identify, locate, evaluate, and effectively use that information for the issue or problem at hand.

From Definitions of Information Literacy and Related Terms

'quality' content is always a winner

| Permalink

From Search Results Clogged by Blogs:

"Bloggers attribute prominent placement to the frequency with which they publish new material and the fact that other sites often link to their blogs. These are two factors most search engines take into account when determining rankings."

Needless to say, content quality is directly related to the number of other blogs that link to a given blog. The challenge arises from the ambiguity in deciding what good quality content is. Perhaps it is related to the understanding of relevance. When we decide that a particular piece of content is of good quality, we usually mean that:
• it informs us about a topic, event, or concept
• it presents something in a way that is easily readable / understandable
• it is relevant to the pertinent problem at hand
• it raises a question or a viewpoint from a unique and/or challenging perspective
• etc.
Even if the content has told us something we already knew, perhaps it has done so in a way we judge better than others, and therefore we would like others to see it as well.

Each of the points above unfortunately (or fortunately) uses a word/concept that is ambiguous by definition. What is the meaning of ‘inform’, ‘easily readable’, ‘relevant’, and ‘challenging perspective’? Different individuals will probably provide different answers, based on their subjective judgments.
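As a side note, the link-based half of the ranking mechanism quoted above can be sketched as a toy PageRank-style iteration. The graph, damping factor, and iteration count here are hypothetical, and real search engines combine many more signals (such as update frequency); this is only a sketch of the "other sites link to their blogs" factor:

```python
# Toy link-based ranking (a simplified PageRank iteration).
# links maps each page to the list of pages it links to.

def rank(links, damping=0.85, iterations=50):
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # each page passes a damped share of its score to its targets
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * score[page] / len(outgoing)
        score = new
    return score

# A frequently linked-to blog outranks a page nobody links to:
graph = {
    "blog": ["page_a"],
    "page_a": ["blog"],
    "page_b": ["blog"],
}
scores = rank(graph)
assert scores["blog"] > scores["page_b"]
```

The sketch makes the article's complaint concrete: the score measures link structure, not content quality, which is exactly why densely interlinked blogs can crowd out other results.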

Information Relevance

def: media literacy

| Permalink

Media Literacy: The ability to decode, analyze, evaluate, and produce communication in a variety of forms.

From Definitions of Information Literacy and Related Terms

book review: 'Our Own Devices': Smothered by Invention

| Permalink

David Pogue reviews Edward Tenner's book 'Our Own Devices', emphasizing the apparent fact that our own technological inventions are changing the way we live.

It would be an interesting book to read.

blogging is not for journalism only

| Permalink

Blogging Goes Legit, Sort Of entertains the idea that blogging is somehow exclusively related to journalism.

Blogging can be about journalism, and it is ... but it is more than that ...

In my previous papers (media & communication) I tried to show that the open source concept/phenomenon and its communicative elements are innovative ideas, giving rise to open communication technology, enabling the masses to communicate free of the elite’s control, and possibly acting as antidotes to hegemonic ideology. To do so, I applied the constitutive view of communication, suggesting that open source is an enabler of ‘free dissemination’ and open communication.

Recognizing Ranganathan’s five laws of Library Science and their underlying concepts as powerful inspirations for social change, I would like to analyze open source software, as defined by the Open Source Initiative (OSI), and its congruency with the five laws. If the underlying concepts upon which the five laws are built had such a profound impact on our society, then the proponents of the open source movement can learn a thing or two. The actual definition of open source software is a lengthy one; instead, a summarized definition from the OSI’s Frequently Asked Questions (FAQ) follows:

“Open source promotes software reliability and quality by supporting independent peer review and rapid evolution of source code. To be OSI certified the software must be distributed under a license that guarantees the right to read, redistribute, modify, and use the software freely” (The OSI).

A ‘book’ is the basic element of Ranganathan's laws: it contains objective knowledge. This calls for defining the comparable basic element of software development. I will therefore take the term ‘software’ to be the basic element: it contains objective knowledge. I use the term ‘software’ loosely, as it can mean a software product or software modules that can be used to build other software products. Respectively, the Five Laws of the ‘Software Library’ could be:

The First Law

Books are for use

(Ranganathan, p. 26)

Software is for use

The Second Law

Every reader his or her book
(or Books are for all)
(Ranganathan, p. 81).

Every user his or her software

(or software is for all)

The Third Law

Every book its reader.
(Ranganathan, p. 258)

Every software its user

The Fourth Law

Save the time of the reader.
(Ranganathan, p. 287)

Save the time of the user

The Fifth Law

Library is a growing organism

(Ranganathan, p. 326)

A software Library is a growing organism

Note: The American Heritage Dictionary defines Library, as it pertains to Computer Science, in the following way: a collection of standard programs, routines, or subroutines, often related to a specific application, that are available for general use.

blogs, minds, documents, representations

| Permalink

In Mind Share the Wired Magazine 11.06 defines Blog Space as Public Storage For Wisdom, Ignorance, and Everything in Between:

"What happens when you start seeing the Web as a matrix of minds, not documents?"

"Your mind becomes a part of the space as well. Your own personal site becomes an extension of your memory, as in Vannevar Bush's vision of the Memex, but your memories also become part of the Web's collective intelligence."

I admire the above 'insight', but the article fails to explain how the 'matrix of minds' happens, if at all. Indeed, blogs are a new phenomenon in the Internet space, but they are not minds; blogs are only representations of people's minds (i.e. the thinking process), and only to some extent. Blog entries are also documents, albeit of a different type, with properties and attributes that differ from those of stand-alone, isolated documents (reports, articles, static web pages, etc.).

What is apparent, though, is that blogs differentiate themselves from the other types of documents in the Internet space by their open nature, their fluidity, and their real-time links/relationships with other blogs.

Because of this openness, fluidity, and interconnection [see: how blogs affect each other], the representation of an author's knowledge deposited in the digital information object (i.e. the blog or blog entry) is more extensive and of a higher level, making blogs more valuable as resources.

on weblog ethics

| Permalink

On weblog ethics from the weblog handbook by Rebecca Blood.

"Weblog Ethics
Weblogs are the mavericks of the online world. Two of their greatest strengths are their ability to filter and disseminate information to a widely dispersed audience, and their position outside the mainstream of mass media. Beholden to no one, weblogs point to, comment on, and spread information according to their own, quirky criteria.

The weblog network's potential influence may be the real reason mainstream news organizations have begun investigating the phenomenon, and it probably underlies much of the talk about weblogs as journalism. Webloggers may not think in terms of control and influence, but commercial media do. Mass media seeks, above all, to gain a wide audience. Advertising revenues, the lifeblood of any professional publication or broadcast, depend on the size of that publication's audience. Content, from a business standpoint, is there only to deliver eyeballs to advertisers, whether the medium is paper or television."

the experience cube: from information to knowledge

| Permalink

I think the experience cube denotes the fact that the movement from information to knowledge (an information-as-thing viewpoint of information) cannot be attained solely through the manipulation of information objects and/or knowledge artifacts. It points to the 'thing' necessary beyond what is represented in knowledge artifacts. The experience cube partially resides in (or is part of) Popper's World II.

nodes, or actors, or networks

| Permalink | 7 Comments

This is a response to jeremy's comments on actor construction? and a response entry (June 30, 2003) in his blog regarding the relationship of actors and networks as used/presented by the actor-network theory and methodology.

Jeremy: "i replied to this on his blog too, but ultimately my position is to rid oneself of the heirarchy of ontology involved in differentiating actors, and just look at the networks. there really are no actors, because then there is no differences amongst actors, only nodes where networks conjoin.

keeping in mind though that this is just my interpretation of several texts, mainly latour, law, then adding some norbert wiener. most people really want to differentiate between actors, I'm unconvinced that it is as important as kant tells us."

If the nodes are where the networks conjoin, then it might be exactly this that many term an actor. Anyway, what is a network, then? The following definition is one of many the American Heritage Dictionary provides for a network: “An extended group of people with similar interests or concerns who interact and remain in informal contact for mutual assistance or support”. In this definition (and in other definitions related to computer systems/networks) two distinct entities are identifiable around the concept of interaction: the channels of communication and the elements that enact those channels.

So, a network by itself is a complex element (or entity) composed of links and the elements that enact these links. Some may call these elements actors, others may call them nodes.

As far as semantics is concerned, we could speak only of networks (at different levels, owing to their complexity and their relation to their surroundings) or only of actors (in which case we would have to differentiate between different actors and their levels). Included here would be the channels of interaction (the links), treated as complex actors or as complex networks.

Nevertheless, it appears that for such mode of explanation a distinction needs to be made between the entities and the process of communication that links those entities.

If nodes are taken only as passive entities where the links (or networks) conjoin, without the potential to act, it would seem that nodes are only constructs with acquired properties and attributes resulting from their relative position in the network or networks. This is perhaps so for non-human entities. However, it is more than evident that humans as nodes in a network are not passive, even though some of the properties and/or attributes of the human node might be acquired as a result of its position in the relevant network(s). In addition, non-human nodes also contain intrinsic (relatively speaking) properties and attributes that are beyond the constructability of the relative network(s). Through these relatively intrinsic properties (acquired from other, outside networks), non-human actors (or nodes) are able to affect the ‘constructions’ of network(s).
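The distinction drawn above, between a node's intrinsic properties and those it acquires from its position in a network, can be sketched as a toy data structure (the names and properties here are purely illustrative, not anything from the actor-network literature):

```python
# Toy sketch: a node carries intrinsic properties of its own, while
# relational properties (here: degree) derive only from the node's
# position in the network of links.

class Node:
    def __init__(self, name, intrinsic):
        self.name = name
        self.intrinsic = intrinsic  # properties the node brings with it

def acquired_properties(node, links):
    """Properties constructed by the network: derived solely from
    the node's relative position among the links."""
    degree = sum(1 for a, b in links if node.name in (a, b))
    return {"degree": degree}

alice = Node("alice", {"human": True})     # a human actor
server = Node("server", {"human": False})  # a non-human actor
links = [("alice", "server")]

# Both nodes acquire the same relational property from the link,
# while their intrinsic properties remain distinct.
assert acquired_properties(alice, links) == acquired_properties(server, links)
assert alice.intrinsic != server.intrinsic
```

The sketch only separates the two kinds of properties; whether one then calls the carriers 'actors' or 'nodes' is exactly the semantic question discussed above.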

defining the ingredients of actor-network and open-content open-communication

About me ...

| Permalink | 2 Comments | 1 TrackBack

In these few paragraphs I would like to summarize the interests that led to my decision to start the Ph.D. program here at SCILS - Rutgers University.

After finishing my masters in telecommunication engineering at Stevens Institute of Technology in May 1997, I embarked upon a career as an Information Systems Architect / Engineer / Analyst, primarily in the telecommunications industry. During my masters program at Stevens I worked as a teaching assistant and instructor, where I came into close contact with freshman students while teaching the basics of the Internet, HTML, electronics and microprocessor labs, C++, etc. In addition, I worked with Engineering Information as a content designer and consultant on their home pages. For further details on my employment history please see my resume.

Throughout my educational and working career, I have always been puzzled by the fact that many tasks and processes are performed very inefficiently, when it is almost obvious that there is a more efficient or better way of performing them. This becomes even more evident in projects that span multiple business units within an organization. The lack of communication and the miscommunication among the participants can be identified as major obstacles. This is partly because employees keep knowledge to themselves, believing that if “Knowledge is Power” they should not share it easily. The other element seemingly results from the fact that employees do not necessarily know what others around them know, hence reinventing the wheel all too often. Certainly, an organization can perform better if it tries to discover and learn what it actually knows (“If We Just Knew What We Know”) and apply that knowledge appropriately.

Why a Ph.D. ?
In the attempt to find an answer and study the reasons behind such lack of communication and miscommunication among team members (and across various business units), I came across information technology readings dealing with groupware and collaboration tools, online discussion forums, virtual discussion groups, virtual teams, knowledge management systems and processes, decision support systems, etc. These tools and processes were described as capable of playing an important role in discovering, sharing, and utilizing knowledge, experience, and skills, with the ability to effectively minimize the gap between the knowledge available for utilization and how much the participants know about its availability at a particular instance.

Having said the above, my particular interests at the start of my Ph.D. were directed towards:

  1. The utilization of information systems and the related information technology tools, and their impact on individuals, society and organizations.

  2. Knowledge Management (KM) as a process for discovering, creating and sharing knowledge and its related use as a tool to drive organizations towards learning organizations.

  3. The Internet as information and knowledge exchange medium and its impact on grassroots activities to further human rights and freedoms around the world by informing and influencing governments and other relevant international institutions.

The teaching assistant (TA) experience from Stevens Institute of Technology added to my desire to further my education such that I can work in the academia. I liked teaching, being with the students, answering their questions. The Ph.D. degree would enable this opportunity and provide the venue for serious theoretical study.

Why a Ph.D. @ SCILS?
After an extensive search for a Ph.D. program that would engage me in a discourse related to the above matters of interest, I came to realize that the Ph.D. program in Information Science at the School of Communication, Information and Library Studies at Rutgers provides versatile yet specific courses taught by renowned faculty in their respective fields of study.

I'm in, what now (my questions in Fall 2003)?
I guess my main concern is deciding on an area I would like to focus on going forward. Before coming to SCILS (I started part time in Fall 2001) and during the first semester, I was mostly thinking about knowledge management and collaboration tools for knowledge management. While I'm still interested in knowledge management, in the past few semesters I've been exposed to many other interesting research problems. The challenge I face now is deciding on the research problem/area that I would like to pursue further. How does one choose among a few competing interests?

One good way of finding one's strengths and revealing the interests worth pursuing further is to reread the papers and research projects written for previous classes, in order to assess any patterns of interest and the discourses presented in them. If we wrote meaningful papers, there ought to be some revealing patterns. :)

Research interests:
From the courses I have taken so far (as of Spring 2003), 601, 610, 663, and 612 have helped me discover, identify, and narrow my research interests. Certainly, the topics covered in the Human Information Behavior (HIB) class and the Seminar in Information Studies were challenging and academically most appealing. At this point, it appears that my research interests are closely related to the type of material covered in the HIB class. I'm especially interested in the interplay between information and information systems, and the social structures within which they are embedded.

To this end, articles and materials related to knowledge management, collaboration, information systems design, actor-network theory and methodology, and social constructionism were the most illuminating and informative.

I’m looking forward to narrowing my research interests further over the next two semesters (Fall 2003 and Spring 2004).

Center for Digital Discourse and Culture

"The Center for Digital Discourse and Culture (CDDC) is a college-level center at Virginia Polytechnic Institute and State University in the College of Arts and Sciences. Working with faculty in the Virginia Tech Cyberschool, the CDDC provides one of the world's first university based digital points-of-publication for new forms of scholarly communication, academic research, and cultural analysis. At the same time, it supports the continuation of traditional research practices, including scholarly peer review, academic freedom, network formation, and intellectual experimentation. Our aim is to be open to all forms of cultural, ideological, methodological, and scientific discourse, while encouraging diversity, interdisciplinarity, and academic excellence."

By Mentor Cana, PhD
more info at LinkedIn
email: mcana {[at]} kmentor {[dot]} com
