Normative standards for
review of IS papers
ISWorld posting by
Dr. Eng. Manuel Mora
Associate Professor
Dept. of Information Systems
Universidad Autonoma de Aguascalientes
www.uaa.mx
Dear ISWorld colleagues:
In recent weeks I posted to ISWorld a request for ideas and information about normative standards for the review of IS
papers, considering:
a) the differences between the field of Information Systems vs Computer
Science concerning the "object of study"
b) the variety of research approaches or research methods used
c) ethical issues regarding reviewers' self-assessment of their level of
expertise in the specific topic under review
Three very interesting and sound answers and comments were received. As is customary on the ISWorld list, I report them below. (I apologize for
not publishing them on a website.) Sincerely,
Dr. Manuel Mora T.
------------------ ANSWERS & COMMENTS ----------------------------
--------------------------------------------------------------------------------------------------------
From: <Mike.Metcalfe@unisa.edu.au>
Dr Mike Metcalfe, Associate Research Professor, University of South Australia
(City West), Adelaide, 5000: Tel: 618 8302 0268
http://www.business.unisa.edu.au/cobar/researchgroups/irg/irg.htm
>>>
I agree this is an important issue for conference organisers. It also requires careful thought about the purpose of the conference. How does the
conference differentiate itself from other conferences, except by topic, and from journals?
This sense of purpose needs to be communicated to the reviewers. More importantly, however, inexperienced reviewers need to be told how to
critique the paper. Focusing on methodology alone may not be sufficient. My approach is to ask if the paper has the following qualities:
- A justified knowledge claim
- Appreciation that it could be wrong (including unethical)
- Explanation of how it is generalisable
- Explanation of how it offers novel insight
- Explanation of how it might be empirically falsified
- Explanation of how it leads to improvements to the human condition
- Appreciation that it is one of many possible interpretations
- Internal consistency, i.e. its research techniques align with what the article is advising
I suppose the idea is to be explicit about how exactly reviewers are to be 'critical'. I have a forthcoming article in Informal Logic entitled "13
ways to critique an article" which draws on the multiple-perspective literature.
--------------------------------------------------------------------------------------------------------
From: "Wynn, Eleanor" <eleanor.wynn@intel.com>
Editor of INFORMATION TECHNOLOGY & PEOPLE journal
Manager, Knowledge Mapping, Intel Information Technology
>>>
Review standards in practice will depend upon the mission and scope of each journal as to topic range and accepted methodology.
Information Technology & People considers papers with a variety of methods, topics focused mostly on organizational issues and including
globalization and developing countries research, per the web site.
http://fernando.emeraldinsight.com/vl=1977630/cl=25/nw=1/rpsv/itp.htm
In our case, the journal is looking for innovative material and theory, so we may judge differently than a more conservative journal.
The Editorial Board is normally chosen to express aspects of the journal's mission, with some distribution of expertise around the
desired topic areas.
In our case, we want material that is new or newsworthy either in terms of the topic, theory, or method. Within its method it must have credible
evidence per practitioners of that method: e.g., if statistical, then the stats must be technically good; if descriptive, then it must provide enough
observational detail to make a credible case, with the story told well and the theory exemplified adequately. Analysis must be interesting, and
we would reject, or ask for major revision of, a paper that was methodologically good but not saying anything new. Also, a paper can be
pretty good in terms of material, but have no informing theory, or theory not well developed, or theory not exemplified by the data. In
that case, the paper again is not successful.
So, if a paper did not align with the mission, did not have good evidence from some kind of method, did not tell an interesting story, or had
sketchy, outdated, or no theory, then it would be rejected. On the other hand, if it had some strengths of theory or case material or data, but
these were weakly developed, not well written, or not exploiting the material, then it would be revise and resubmit. A paper that needs only minor
revisions would have to have an excellent theoretical introduction, fit into an ongoing discussion and reflect that literature up to date,
present compelling evidence, either descriptive or quantitative, and pull all that together into a well-written flow of argument. So even good
people can fail to make this grade on the first round. Rejection does not imply that the author is not a good scholar, but that he or she has not
crafted this particular paper to full potential.
As far as reviewer credentials, we try to know our reviewers. It is true that not all reviewers agree, and some even make bad mistakes. The
Associate Editor must then make a judgment. We recently had a reviewer recommend to an author that the author read some papers from ITP to
learn how to write material appropriate to the journal's standards. It happens that the author had
won one or more best paper awards from the journal! This is why we have 3 reviewers in most cases.
In any case, it's not perfect, but on the whole it works. When in doubt, resubmit or submit to another journal.
--------------------------------------------------------------------------------------------------------
From: alter@usfca.edu
Steven Alter, Ph.D.
Professor of Information Systems
University of San Francisco, School of Business and Management
San Francisco, CA 94117
>>>
You raise important issues concerning normative standards for reviewing IS
research. Papers I have submitted have received a wide range of positive and
negative reviews based on a wide range of reviewer interest and substantive
knowledge about the subject matter of particular papers. Although it is
disappointing to receive a negative review, a well-reasoned negative review
can be valuable if it addresses the topics and methods in the paper rather than
just the topics and methodologies that the reviewer is interested in.
My most common difficulty with reviews (especially reviews for "refereed" conference submissions) is not actually about the three topics you mention,
but rather, about whether the reviewers are truly willing and able to engage
the paper and to say something meaningful about the paper beyond just rating
it on a set of numerical scales (thereby providing no meaningful feedback about how to improve the paper or the ideas it contains). Part of this is
related to the second and third issues you mention, the variety of research
methods and the willingness of reviewers to reflect on their qualifications
to review a particular paper, but those two issues capture only part of the
issue of whether reviewers have the time and energy to say something meaningful.
I have a longer comment about your first topic, the differences between the
field of Information Systems vs Computer Science concerning the "object
of study."
Except for certain topics that clearly belong in CS because they are about
the theory, creation, or operation of hardware and software, there is no clear division between IS and CS. Yesterday I happened to be looking at a
number of articles and books related to information ecologies, social construction of technology, technological frames, virtual organizations, and
speech act theory. Are those topics in information systems? In computer science? In something else? Within the IS literature itself there is
often no real distinction between what might seem to be basic concepts such as IS,
IT, IT artifacts, systems, technologies, business processes, information ecologies, etc. Meanwhile the CSCW, HCI, and ethnographic papers in the
computer science literature cover topics and research variables that some members of the IS community believe to be (not only outside of computer
science but even) outside of IS because those topics focus too much on people and organizations rather than IT artifacts and IT-specific variables.
Here is one relatively simple, three-part classification scheme for topics
in the currently overlapping CS and IS realms:
1. If the core topic is the theory, creation, or operation of hardware and software (with little or no concern for how that hardware or software
will be used in organizations), the topic is within CS.
2. If the topic is the processing of information by people and/or computers, or the development of systems that process information but do not
perform material work, the topic is within IS. (E.g., supply chains are material systems and therefore extend beyond the scope of IS, whereas just
the processing of information that supports a supply chain might be considered inside IS as long as the actual handling and movement of goods
is not considered. Similarly, the parts of e-commerce that are just about information processing might be viewed as part of IS, whereas the parts that
are about product development, inventory management, and distribution would
be outside of IS.)
3. If the topic is the creation, maintenance, operation, and impact
of systems in organizations, regardless of the extent to which the technology happens to be IT, the topic is "systems in organizations."
The CSCW and HCI researchers wouldn't like the above definition of CS because it would push them out of CS. The IS researchers who say they study
information systems but actually study systems in organizations might not like the distinction between "information systems" and "systems in
organizations" because it would reveal that they have branched out of IS per
se. The researchers in both cases are doing work that is very valuable but
face an academic classification problem because they are looking outside of
what were the core topics of their respective fields 30 years ago.
My particular view of this whole area is a bit extreme. I believe that the
IS field has evolved over time and that its current subject matter is best
summarized as "systems in organizations." (Most significant systems in organizations rely on IT so extensively that they can't operate effectively
without IT). I believe it is unnecessarily self-limiting and in the long term self-destructive for the IS community to try to restrict itself to "IT
artifacts" or the processing of information. Unless we consider IT artifacts
to be anything that IT touches, the long term result of restricting ourselves to IT artifacts will be the introduction of counterproductive bias
in our view of the world and marginalization of our research.
In case you are interested, I have attached two related papers that are currently being considered for publication. Their titles are:
... "18 Reasons Why IT-Reliant Work Systems Should Replace the IT Artifact
as the Core of the IS Field"
... "Sidestepping the IT Artifact, Scrapping the IS Silo, and Laying Claim
to 'Systems in Organizations' "
--------------------------------------------------------------------------------------------------------
------------------ END OF ANSWERS & COMMENTS ----------------------------
Dr. Eng. Manuel Mora
Associate Professor
Dept. of Information Systems
Universidad Autonoma de Aguascalientes
www.uaa.mx
Site content copyright © 1996 to 2010 Ronan Fitzpatrick. This page was last updated 14 October 2010.