It's easy to point to the personal computer and say, "Here at last is the great, global equalizer." With a computer and Internet access, instant communication and boundless information are at your fingertips.
Even lucrative high tech jobs can be yours: just hit the books, then pound the pavement. In today's tight hiring market, it no longer matters if you've been to college, or whether you're 40 or 14. If you only apply yourself, the doors will open.
Or so those already on the other side might have you believe. In reality, for about three-quarters of the world's population, computing remains the province of the educated elite.
True: you don't need a sheepskin to master a computer. But you still need to know English.
The issue isn't documentation. Countless technical manuals are treated to quality translations worldwide each year. Nonetheless, the learning curve for computing remains significantly steeper for those without a fundamental understanding of English. The problem is that the machines themselves play favorites.
Consider, even with today's user-friendly graphical interfaces, how many modern idioms of computing presuppose an English-speaking user. Without knowledge of the underlying language, seemingly natural terms like "cut and paste" would sound as arcane as "megabyte" or "baud" do to many English speakers. Acronyms like "WYSIWYG" and "USB" would look like little more than hieroglyphics.
Even fundamental operating system functions can pose problems. Take sorting a list of filenames, for example. If your language uses a Latin-based alphabet, the operation is obvious enough.
Traditional Spanish, it's true, treats the sequence "ch" as a single letter, to be filed between "c" and "d." But the basic procedure remains the same as in English, and a misfiled "ch" is a minor error.
But what of the millions who write using Chinese characters? To them, the idea of an "alphanumeric sort" has no meaning.
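To make the sorting problem concrete, here is a small Java sketch of my own (not drawn from any vendor's documentation): a plain string sort simply compares Unicode character codes, while the standard Collator class applies language-specific rules. Whether the traditional Spanish treatment of "ch" survives depends on the collation tables your particular runtime ships, and for Chinese text any meaningful order requires a scheme of its own, such as pinyin or stroke count.

    import java.text.Collator;
    import java.util.Arrays;
    import java.util.Locale;

    // Illustration only: contrast a raw character-code sort with a locale-aware one.
    public class SortComparison {
        public static void main(String[] args) {
            String[] words = { "curso", "chico", "dedo", "casa" };

            String[] rawOrder = words.clone();
            Arrays.sort(rawOrder);  // compares Unicode values; knows nothing of Spanish
            System.out.println(Arrays.toString(rawOrder));

            String[] spanishOrder = words.clone();
            Arrays.sort(spanishOrder, Collator.getInstance(new Locale("es", "ES")));
            System.out.println(Arrays.toString(spanishOrder));
        }
    }

The point is not the particular output but where the knowledge lives: the language rules sit in the runtime's locale data, and a programmer who never asks for them gets English-centered behavior by default.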
Attempts to replace words with graphical cues can't completely negate linguistic bias, either. If you've read books in English, a right-pointing arrow icon in a Web page or online documentation can be taken to mean "go on to the next page." In Japan, however, the meaning isn't immediately clear. Japanese books read from right to left.
And the more advanced your education in computing becomes, the more heavily you must rely on an understanding of English. For instance, without some level of English fluency, computer programming is almost completely inaccessible.
While the natural and physical sciences have standardized much of their technical jargon around Latin, a dead language with no modern cultural leanings, computer scientists aren't so lucky. Even if you receive training in your native tongue, the problem of the machine code itself remains.
The earliest computer code was nothing more than rows of numbers. But as the complexity of software grew, code that more closely resembled human language became necessary.
With Silicon Valley calling the shots, the result was a lineage of English-like programming languages that continues to this day. While Java listings probably still read like some unintelligible mathematical script to the uninitiated, a closer look at Java's structure reveals its linguistic bias: keywords like "case," "while," "if," and "else," and built-in types like "String" and "Object," are all borrowed directly from English.
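To see how English-like those listings really are, consider this toy program, invented purely for illustration: take away the braces and semicolons and, for an English speaker, what remains is very nearly prose.

    // A made-up example; note how many of the words are plain English.
    public class Mailroom {
        public static void main(String[] args) {
            String[] inbox = { "urgent: server down", "lunch on friday?" };
            for (int i = 0; i < inbox.length; i++) {
                String message = inbox[i];
                if (message.startsWith("urgent")) {
                    System.out.println("Reply now: " + message);
                } else {
                    System.out.println("File away: " + message);
                }
            }
        }
    }

To a reader fluent in, say, Mandarin or Arabic but not English, none of those words carries any meaning at all; each must be memorized as an arbitrary symbol.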
With such a history of English orientation in computing, software vendors have a lot of catching up to do if they want to reach a more international audience. Most of the current focus lies in the area of "localization." A localized application is one that functions identically to its English counterpart, but with its interface translated into another language.
Apple Computer was the first to tackle localization seriously. The Mac OS introduced the idea of a "resource fork," a special file area where the menus, dialog boxes, alert messages, and other text data for an application could be stored separately from the program code itself.
This separation made it easier to store multiple localized copies of the data, and pick the right ones based on overall system settings. Apple also introduced various "Language Kits" for the Mac. These add-on packages included things like language-specific fonts and keyboard layouts, designed to localize the Mac OS itself.
Microsoft now ships localized versions of Windows as well. And more recently, the Java language was designed with multinational programming techniques in mind.
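As a flavor of what that means in practice, here is a sketch of my own using Java's ResourceBundle class, with hypothetical file names and keys. It follows the same pattern the resource fork pioneered: the strings a user sees live in per-language files, and the runtime picks the set that matches the user's locale.

    import java.util.Locale;
    import java.util.ResourceBundle;

    // Sketch only. Assumes two hypothetical property files on the classpath:
    //   Messages.properties     greeting=Welcome
    //   Messages_es.properties  greeting=Bienvenido
    public class Greeter {
        public static void main(String[] args) {
            ResourceBundle messages =
                ResourceBundle.getBundle("Messages", new Locale("es"));
            // The Spanish bundle is used if present; otherwise Java falls
            // back to the default Messages.properties.
            System.out.println(messages.getString("greeting"));
        }
    }

The program code never changes; only the text files do, which is exactly the separation that makes localization practical.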
Though they're steps in the right direction, none of these measures is a magic wand to make computing more accessible to non-English-speakers. Despite the availability of localized operating systems, few software vendors currently offer more than a handful of localized applications, Microsoft and Apple included.
And none have really tried to address the problem of language bias in computer code. Apple gave it a token effort. In keeping with the Mac OS's localization strategy, forward-thinking engineers designed the AppleScript automation language to accept plug-in syntax modules, called "Dialects."
Conceivably, Dialects could be developed that resemble any number of human languages. Still, seven years after AppleScript was first introduced, the only selection available on the Dialect menu remains "AppleScript English."
Part of the problem lies in the tendency of computer vendors to think of their user base in terms of markets, rather than as beneficiaries of a tool. Businesses don't run on altruism. The critical question for a computer manufacturer is not whether its product enriches its users' lives, but whether it has reached that magical formula whereby a large number of consumers are enticed to purchase that product.
So far, market research reports that the vast majority of the audience for computing products is English-speaking. In one recent survey, for instance, only 8 percent of Hispanic American Internet users said they preferred Spanish-language Internet content, the rest choosing primarily English sites. It follows, then, that Spanish-language content needn't be a priority for American computer firms.
But in some ways, such statistics are a self-fulfilling prophecy. While the English language preference may be true of those American Hispanics online today, what is being done to reach out to those who so far don't consider computers a part of their lives?
A second survey of over 2,000 Hispanic American households showed that as many as 39 percent of those surveyed either didn't understand computers or didn't see a need for a computer in their home. In keeping with this statistic, fewer than half of Hispanics in America have computers and Internet access in their homes today.
These figures make sense when you take into account that by and large, Hispanics and computers aren't speaking the same language. If you've already taken the step of learning English, the computer can be seen as a natural, even essential, tool. If you haven't, then an iMac might as well be an extraterrestrial.
Globally, the problem is much more acute. Worldwide, it's estimated that 80 percent of the data stored in computers is in English. In countries where English is the primary language, like the U.S. and Singapore, as many as 1 in 3 households owns a computer.
In Taiwan, the same is true of only 1 in 20 households. And in China, the figure is as low as 1 in 100.
Such statistics hardly paint computers as the catalyst for global unity they've been claimed to be. Instead, they point to a definite gap between the "haves" and "have-nots" when it comes to the benefits of the digital age. So long as computing favors an English-speaking user base, technology will remain a divisive force, rather than a unifying one.
So rather than more hyperbole, I'd like to see the industry put its money where its mouth is. It's time for new tools, new practices, new technologies, and new idioms, aimed at encouraging computing applications from all countries and language backgrounds — rather than shoehorning the field into an English-speaking, American way of life.
Only then can this technology truly be said to have encompassed a multicultural world, rather than merely overtaken it.