A Censor’s Tool: Information Science concerns with a Wikimedia image filter
- When labeling is an attempt to prejudice attitudes, it is a censor's tool. – American Library Association
There is currently a referendum on what sort of personal filtering system for images the Wikimedia Foundation should adopt. The following discussion attempts to highlight areas of concern with this proposal from an information science perspective, and proposes a set of principles which could mitigate the negative effects of implementing such a system.
Warning labels and prejudicing users
For a filter to work, "controversial" areas need to be identified. As the Harris Report acknowledges, people's complaints often congeal into a few very dominant areas; among these one is likely to find nudity, sex, profanity, violence, gays and lesbians, religious blasphemy, and a few others. However, any standard that tries to delineate these areas objectively will inevitably be subjective, arbitrary, and arguable. Moreover, the very act of labeling images based on majority standards - then suggesting through the filter interface that these areas are worthy of blocking - is an inherent limit on intellectual freedom.
The problem is particularly well-stated by the American Library Association:
- Labels on library materials may be viewpoint-neutral directional aids that save the time of users, or they may be attempts to prejudice or discourage users or restrict their access to materials. When labeling is an attempt to prejudice attitudes, it is a censor's tool. The American Library Association opposes labeling as a means of predisposing people's attitudes toward library materials.
- Prejudicial labels are designed to restrict access, based on a value judgment that the content, language or themes of the material, or the background or views of the creator(s) of the material, render it inappropriate or offensive for all or certain groups of users. The prejudicial label is used to warn, discourage or prohibit users or certain groups of users from accessing the material...
But doesn't a warning label, you may say, empower users further by enabling them to select the material they want to access? The simple answer is no, it does not. At best, it conveys someone else's preconception that a work merits alarm. It prejudices users against certain works based solely on perceived societal prejudices or other people's complaints. As such, works that reflect minority sensibilities (including cultural, religious, and sexual minorities) are much more likely to be deemed controversial than those depicting things the majority deems proper. In many cases, the warning label is a vehicle for institutionalizing and perpetuating bigotry. Many people find gay content, female sexuality, or religious criticism offensive. But a neutral information provider cannot allow the prejudices of some to impact the provision of information to the individual. That restraint is the central tenet of intellectual freedom, and the core of neutrality.
Categories are not filters
Both the Harris Report and the referendum itself rather uncritically suggest using the existing category system as the basis for filtering images. From an information science perspective, this is simply not possible, as it would fundamentally distort the category system into something new and less useful. Categories use descriptive classification to identify what an image is predominantly about, trying to capture its central and significant subject matter. An effective filter, on the other hand, flags anything which contains the offending content, however trivially or incidentally.
For example, we have a category for Nudes in Art. The implied scope of this category, based on how it has been used, is something like "artistic works primarily featuring the nude form." The scope is not "artistic works which contain any nudity whatsoever, however trivial." That would not be a useful scope for it to adopt. But if Nudes in Art becomes a warning label for people who want to filter out nudity, it would be entirely reasonable to expect people to adopt the latter scope rather than the former. The community was given a descriptive classification tool and they became good categorizers; one cannot give them a filter and not expect them to become good at filtering. (See the Addendum for another example.)
The two functions, descriptive classification and filtering, cannot live comfortably together in the same categories. And because descriptive classification is an inherently difficult task, the simpler process of identifying and tagging content to filter would likely dominate. It is not, for example, a very strong response to the statement "I have the WMF-endorsed right to filter nudity, this image contains some, so it should be added to the appropriate category to be filtered" to say, "Well, yes, but this image isn't really about nudity, it just contains some." So any categories used as part of the filtering system would cease to be effective as organizational and descriptive categories, and would instead become broadly applied warning labels. One could certainly seed a new warning label automatically from existing categories, but the two serve very different purposes and it would be vital to implement them separately.
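As a rough illustration of that separation, the sketch below uses a hypothetical Python data model (not anything in MediaWiki; all names are assumptions for illustration). It keeps warning labels in a field of their own, seeds them once from existing categories, and then lets the two schemes diverge so that broadening a label never distorts a descriptive category.

```python
# A minimal sketch, assuming a simplified data model: descriptive categories
# and warning labels are separate fields, with labels seeded once from
# categories and then maintained independently. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class Image:
    title: str
    categories: set[str] = field(default_factory=set)      # descriptive classification
    warning_labels: set[str] = field(default_factory=set)  # used only by the filter


def seed_labels_from_categories(image: Image, seed_map: dict[str, str]) -> None:
    """One-time seeding: copy a warning label wherever a category suggests it.
    Afterwards the two schemes evolve independently."""
    for category in image.categories:
        label = seed_map.get(category)
        if label is not None:
            image.warning_labels.add(label)


# Seed the hypothetical "nudity" label from an existing category...
art = Image("Example nude painting.jpg", categories={"Nudes in art"})
seed_labels_from_categories(art, {"Nudes in art": "nudity"})

# ...then broaden the label to an image with only incidental nudity,
# without ever touching its descriptive categories.
chart = Image("Example anatomy scheme.svg", categories={"Human anatomy schemes"})
chart.warning_labels.add("nudity")
assert chart.categories == {"Human anatomy schemes"}  # description left intact
```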
Third-party censorship
So whether we create a parallel warning label system or allow our existing category system to be distorted into one, we will end up with new warning labels identifying all controversial content. And if we label images to empower self-censors, we necessarily empower third-party censors at the same time. While it is true that third parties can already use existing categories, those categories are not, as noted above, designed with a censorial purpose in mind, and therefore do not make great filters. There are many images that contain 'controversial' content without this being reflected in their categories, simply because it is not central to the subject of the image. If we start using existing categories as warning labels, however, we put an ideal censor's tool into third parties' hands. This is a heavy price that must be acknowledged in implementing any filtering system.
Minimizing discrimination
The only way to implement an image filter that does not infringe intellectual freedom would be one that does not use warning labels to identify controversial content. This could be done with an all-or-nothing image filter that lets users either view or disable all images by default, and then show or hide each individual image on an image-by-image basis with a personal blacklist/whitelist. If warning labels are nonetheless deemed a necessary part of the filter, they should at least be implemented in a way that minimizes the infringement of intellectual freedom.
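To make the label-free option concrete, here is a minimal sketch of how such a filter could work; the class and field names are assumptions for illustration, not part of any actual proposal or MediaWiki code. The decision consults only the user's own default and per-image lists, never a warning label.

```python
# A minimal sketch, assuming a per-user preference object: an all-or-nothing
# default plus a personal per-image whitelist/blacklist. No content labels
# are consulted anywhere. Names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class UserImagePreferences:
    hide_by_default: bool = False                       # the all-or-nothing default
    whitelist: set[str] = field(default_factory=set)    # images the user always shows
    blacklist: set[str] = field(default_factory=set)    # images the user always hides

    def should_show(self, image_title: str) -> bool:
        # Personal per-image choices override the global default.
        if image_title in self.whitelist:
            return True
        if image_title in self.blacklist:
            return False
        return not self.hide_by_default


# Example: a user who hides images by default but has chosen to see one.
prefs = UserImagePreferences(hide_by_default=True, whitelist={"Example.jpg"})
assert prefs.should_show("Example.jpg") is True
assert prefs.should_show("Other.jpg") is False
```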
One proposed quality of the image filter is that it strive to be culturally neutral. This is a laudable goal, but we should not understand it to mean "reflecting the majority views of as many different cultures as possible." We should also look at minority views within every culture. Additionally, when striving for this neutrality, we should not be focused primarily on what controversial areas to filter, but on the prejudicial effects that filtering causes.
For example, a filter that applies to Topfreedom and not to Barechested would inherently cast negative judgement on the former article, and would be incompatible with a project which is dedicated to providing a neutral point of view. Similarly, a filter targeted at a specific minority group, such as Depictions of homosexuality, would be of considerable prejudicial effect, notwithstanding that homosexuality is considered controversial by many.
Seven principles for implementation
Flowing from the above discussion, I think the following principles could lead to workable warning labels for an image filter. They treat warning labels as a limit on intellectual freedom, which places the discourse in the right area of caution and recognizes the cost of every label we add. They also seek to minimize the worst effects of warning labels in terms of prejudicing the user, namely targeting or implicitly disadvantaging certain classes of people.
- We acknowledge that warning labels prejudice users against certain classes of content, and are therefore an infringement on intellectual freedom.[1]
- In a few extreme cases, where an objectively definable set of images can be shown to cause significant distress for many users, this infringement on intellectual freedom is justified in order to give users control over what images they see.
- The scope of a warning label must be clearly and explicitly defined based on image content.
- In order to be a reasonable infringement of intellectual freedom, warning labels must be minimally discriminatory.
  - They may not have the express purpose of filtering any group of people identifiable on the basis of race, national or ethnic origin, colour, religion, sex, age, mental or physical disability, sexual orientation, or other personal characteristic.
  - Where disproportionate filtering of an identifiable group is likely to result from the implementation of a label, the scope of the label must be crafted to minimize this.
- We acknowledge that any system of warning labels will be inherently non-neutral and arbitrary, reflecting majority values while being over-inclusive and under-inclusive for others, as individuals have widely different expectations as to which, if any, groups of images should be filtered, and what images would fall within each group.
- We acknowledge that introducing warning labels, despite being for the express purpose of allowing personal choice, empowers third-party censors to make use of them.
- Categories are not warning labels. Because the task of labeling images that contain any controversial content is fundamentally different from the classification process of describing what an image is about, the warning label scheme used for filtration will be kept separate from the category scheme used to organize and describe images for retrieval.
So what labels could potentially meet these criteria? Well, I think the following might:
- Nudity, including any depictions of buttocks and genitalia. If we include nipples, we include both male and female. We also would not distinguish between artistic, educational, or pornographic depictions of nudity, as such distinctions are not objectively definable.
- Fresh wounds, medical procedures, and dead bodies, but excluding people with disabilities.
- Sacred religious depictions, such as depictions of Mohammed or temple garments. But not including practices of one group of people which another group feels to be blasphemous, sacrilegious or otherwise offensive.
The major drawback of the above principles is that they will lead to labels which are perhaps not the best match we could devise for user expectations (e.g., many people would probably prefer to filter female nipples but not male nipples, only subjectively defined 'non-artistic' nudity, or entire minority topics like homosexuality). This is by design, as it flows inherently from valuing the principles of objectivity and minimal discrimination above user expectation.
--Trystan 03:29, 29 August 2011 (UTC)
Addendum
As another example of how using subject categories as filters would negatively impact their usefulness, consider this depiction of human anatomy. We have several versions of this image, each with the labels in a different language. Most are filed under Category:Anatomy, Category:Human anatomy schemes, Category:Full nudity, and Category:Nude photographs. Some versions, however, are filed only under the first two. It is not surprising that the image has been described differently by different editors, as inter-cataloguer consistency is notoriously difficult to achieve. That consistency is obviously something we want, and the way to achieve it is to develop clearer scope notes for each category describing what does - and does not - belong there.
What is the scope of Full nudity? Do these pictures fall within it? I would argue no; they are bare depictions of human anatomy, and lack the social dimension that nudity implies. Someone looking for depictions of nudity is not very likely, in my opinion, to be looking for anatomy schemes (e.g., nothing in w:Nudity discusses human anatomy charts, but rather nakedness in social settings). Adding the nudity-related categories does not add any descriptive information beyond what is already covered by the anatomy-related categories. Others would feel differently about how the categories should be applied, and ideally a consensus would develop one way or the other.
However, if adding the Full nudity category makes the difference between an image being filtered or not, the entire debate over which categories best describe the image becomes moot. Many editors will be strongly motivated to include the Full nudity category, because they have been given the right to filter images of naked people. In a way this would improve consistency, but only by reducing the category to something considerably less useful; it is quite easy to label every image depicting any genitalia or breasts whatsoever, but the resulting group of images is not very useful for description or retrieval.--Trystan 17:03, 1 October 2011 (UTC)