Question:
what is an unobtrusive measure in simple terms?
2007-09-10 00:27:46 UTC
Can I have examples too, please?
Six answers:
2007-09-10 00:49:26 UTC

Unobtrusive Measures

Unobtrusive measures are measures that don't require the researcher to intrude in the research context. Direct and participant observation require that the researcher be physically present. This can lead the respondents to alter their behavior in order to look good in the eyes of the researcher. A questionnaire is an interruption in the natural stream of behavior. Respondents can get tired of filling out a survey or resentful of the questions asked.



Unobtrusive measurement presumably reduces the biases that result from the intrusion of the researcher or measurement instrument. However, unobtrusive measures also reduce the degree of control the researcher has over the type of data collected. For some constructs there may simply not be any available unobtrusive measures.



Three types of unobtrusive measurement are discussed here.



Indirect Measures

An indirect measure is an unobtrusive measure that occurs naturally in a research context. The researcher is able to collect the data without introducing any formal measurement procedure.



The types of indirect measures that may be available are limited only by the researcher's imagination and inventiveness. For instance, let's say you would like to measure the popularity of various exhibits in a museum. It may be possible to set up some type of mechanical measurement system that is invisible to the museum patrons. In one study the system was simple: the museum installed new floor tiles in front of each exhibit it wanted to measure and, after a period of time, assessed the wear and tear on the tiles as an indirect measure of patron traffic and interest. We might be able to improve on this approach considerably using electronic measures. We could, for instance, construct an electrical device that senses movement in front of an exhibit. Or we could place hidden cameras and code patron interest from the videotaped evidence.
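

As a rough sketch of how such an electronic count might be tallied, here is a short Python snippet; the sensor log format, exhibit names and timestamps are all invented for illustration:

# Sketch: tallying motion-sensor triggers per exhibit as an indirect traffic measure.
# The log entries (exhibit name, timestamp) use an invented format.
from collections import Counter

sensor_log = [
    ("dinosaurs", "2024-05-01T10:02:11"),
    ("dinosaurs", "2024-05-01T10:05:43"),
    ("minerals",  "2024-05-01T10:06:02"),
    ("dinosaurs", "2024-05-01T10:09:30"),
]

visits_per_exhibit = Counter(exhibit for exhibit, _ in sensor_log)
print(visits_per_exhibit.most_common())   # exhibits ranked by number of sensor triggers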



One of my favorite indirect measures occurred in a study of radio station listening preferences. Rather than conducting an obtrusive survey or interview about favorite radio stations, the researchers went to local auto dealers and garages and checked all cars that were being serviced to see what station the radio was currently tuned to. In a similar manner, if you want to know magazine preferences, you might rummage through the trash of your sample or even stage a door-to-door magazine recycling effort.



These examples illustrate one of the most important points about indirect measures -- you have to be very careful about the ethics of this type of measurement. In an indirect measure you are, by definition, collecting information without the respondent's knowledge. In doing so, you may be violating their right to privacy and you are certainly not using informed consent. Of course, some types of information may be public and therefore not involve an invasion of privacy.



There may be times when an indirect measure is appropriate, readily available and ethical. Just as with all measurement, however, you should attempt to estimate the reliability and validity of the measures. For instance, collecting radio station preferences at two different time periods and correlating the results might be useful for assessing test-retest reliability. Or, you can include the indirect measure along with other direct measures of the same construct (perhaps in a pilot study) to help establish construct validity.
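

As a hedged illustration of that test-retest check, here is a minimal sketch in Python; the station names and counts are invented, and the two "waves" stand in for the two data-collection periods:

# Sketch: test-retest reliability for the radio-station indirect measure.
# Station names and counts are invented for illustration only.
from math import sqrt

# Cars found tuned to each station at two different time periods
wave_1 = {"WXYZ": 34, "KROQ": 21, "WABC": 15, "KIIS": 9, "WNYC": 5}
wave_2 = {"WXYZ": 30, "KROQ": 25, "WABC": 12, "KIIS": 11, "WNYC": 6}

stations = sorted(wave_1)
x = [wave_1[s] for s in stations]
y = [wave_2[s] for s in stations]

# Pearson correlation computed directly from its definition
mx, my = sum(x) / len(x), sum(y) / len(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
print(f"Test-retest correlation across stations: r = {r:.2f}")

A high correlation between the two periods would suggest the indirect measure is reasonably stable over time.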



Content Analysis

Content analysis is the analysis of text documents. The analysis can be quantitative, qualitative or both. Typically, the major purpose of content analysis is to identify patterns in text. Content analysis is an extremely broad area of research. It includes:



Thematic analysis of text

The identification of themes or major ideas in a document or set of documents. The documents can be any kind of text including field notes, newspaper articles, technical papers or organizational memos.



Indexing

There are a wide variety of automated methods for rapidly indexing text documents. For instance, Key Words in Context (KWIC) analysis is a computer analysis of text data. A computer program scans the text and indexes all key words. A key word is any term in the text that is not included in an exception dictionary. Typically you would set up an exception dictionary that includes all non-essential words like "is", "and", and "of". All key words are alphabetized and listed with the text that precedes and follows them, so the researcher can see each word in the context in which it occurred. In an analysis of interview text, for instance, one could easily identify all uses of the term "abuse" and the context in which they were used.
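

To make the KWIC idea concrete, here is a minimal sketch; the sample sentence, the context window, and the tiny exception dictionary are assumptions rather than features of any particular indexing program (Python):

# Minimal Key Words in Context (KWIC) sketch.
# The exception dictionary (stopwords) and sample text are invented.
STOPWORDS = {"is", "and", "of", "the", "a", "an", "in", "to", "was"}

def kwic(text, window=3):
    """Return (key word, left context, right context) entries, alphabetized."""
    words = text.lower().split()
    entries = []
    for i, word in enumerate(words):
        if word in STOPWORDS:
            continue  # skip non-essential words from the exception dictionary
        left = " ".join(words[max(0, i - window):i])
        right = " ".join(words[i + 1:i + 1 + window])
        entries.append((word, left, right))
    return sorted(entries)

sample = "The counselor said the abuse was reported and the report was filed"
for word, left, right in kwic(sample):
    print(f"{word:>10}: ... {left} [{word}] {right} ...")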



Quantitative descriptive analysis

Here the purpose is to describe features of the text quantitatively. For instance, you might want to find out which words or phrases were used most frequently in the text. Again, this type of analysis is most often done directly with computer programs.
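

A hedged sketch of that kind of frequency count, using an invented snippet of interview text and an arbitrary short stopword list (Python):

# Sketch: quantitative descriptive analysis as a simple word-frequency count.
from collections import Counter
import re

text = """Respondents described the program as helpful, and most respondents
said the program staff were helpful and responsive."""

words = re.findall(r"[a-z']+", text.lower())          # crude tokenization
counts = Counter(w for w in words if w not in {"the", "and", "as", "were", "most"})
print(counts.most_common(3))   # e.g. [('respondents', 2), ('program', 2), ('helpful', 2)]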



Content analysis has several problems you should keep in mind. First, you are limited to the types of information available in text form. If you are studying the way a news story is being handled by the news media, you probably would have a ready population of news stories from which you could sample. However, if you are interested in studying people's views on capital punishment, you are less likely to find an archive of text documents that would be appropriate. Second, you have to be especially careful with sampling in order to avoid bias. For instance, a study of current research on methods of treatment for cancer might use the published literature as the population. This would leave out both the writing on cancer that did not get published for one reason or another and the most recent work that has not yet been published. Finally, you have to be careful about interpreting the results of automated content analyses. A computer program cannot determine what someone meant by a term or phrase. It is relatively easy in a large analysis to misinterpret a result because you did not take into account the subtleties of meaning.



However, content analysis has the advantage of being unobtrusive and, when automated methods exist, can be a relatively rapid way to analyze large amounts of text.



Secondary Analysis of Data

Secondary analysis, like content analysis, makes use of already existing sources of data. However, secondary analysis typically refers to the re-analysis of quantitative data rather than text.



In our modern world there is an unbelievable mass of data that is routinely collected by governments, businesses, schools, and other organizations. Much of this information is stored in electronic databases that can be accessed and analyzed. In addition, many research projects store their raw data in electronic form in computer archives so that others can also analyze the data. Among the data available for secondary analysis are:



census bureau data

crime records

standardized testing data

economic data

consumer data

Secondary analysis often involves combining information from multiple databases to examine research questions. For example, you might join crime data with census information to assess patterns in criminal behavior by geographic location and group.
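

As a sketch of what that joining step might look like, using pandas with hypothetical file names, column names, and a made-up rate calculation (Python):

# Sketch: combining two archived datasets for secondary analysis.
# File names and column names are hypothetical placeholders.
import pandas as pd

crime = pd.read_csv("crime_by_county.csv")      # columns: county_fips, burglaries
census = pd.read_csv("census_by_county.csv")    # columns: county_fips, population, median_income

# Link the two sources on a shared geographic identifier
merged = crime.merge(census, on="county_fips", how="inner")

# Derive a rate so counties of different sizes can be compared
merged["burglary_rate_per_1k"] = 1000 * merged["burglaries"] / merged["population"]

# Examine how the rate varies with a census characteristic
print(merged[["burglary_rate_per_1k", "median_income"]].corr())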



Secondary analysis has several advantages. First, it is efficient. It makes use of data that were already collected by someone else. It is the research equivalent of recycling. Second, it often allows you to extend the scope of your study considerably. In many small research projects it is impossible to consider taking a national sample because of the costs involved. Many archived databases are already national in scope and, by using them, you can leverage a relatively small budget into a much broader study than if you collected the data yourself.



However, secondary analysis is not without difficulties. Frequently it is no trivial matter to access and link data from large complex databases. Often the researcher has to make assumptions about what data to combine and which variables are appropriately aggregated into indexes. Perhaps more importantly, when you use data collected by others you often don't know what problems occurred in the original data collection. Large, well-financed national studies are usually documented quite thoroughly, but even detailed documentation of procedures is often no substitute for direct experience collecting data.



One of the most important and least utilized purposes of secondary analysis is to replicate prior research findings. In any original data analysis there is the potential for errors. In addition, each data analyst tends to approach the analysis from their own perspective using analytic tools they are familiar with. In most research the data are analyzed only once by the original research team. It seems an awful waste. Data that might have taken months or years to collect is only examined once in a relatively brief way and from one analyst's perspective. In social research we generally do a terrible job of documenting and archiving the data from individual studies and making these available in electronic form for others to re-analyze. And, we tend to give little professional credit to studies that are re-analyses. Nevertheless, in the hard sciences the tradition of replicability of results is a critical one and we in the applied social sciences could benefit by directing more of our efforts to secondary analysis of existing data.
ramseur
2016-10-28 23:25:52 UTC
Definition Of Unobtrusive
Jessica
2016-03-14 18:22:37 UTC
In order to calculate how far away a star is, astronomers use a method called parallax. Because of the Earth's revolution around the sun, nearby stars seem to shift their position against the farther stars. This is called parallax shift. By measuring the size of that shift and knowing the diameter of the Earth's orbit, astronomers can calculate the distance to the star. The smaller the parallax shift, the farther away from Earth the star is. This method is only accurate for stars within a few hundred light-years of Earth; when stars are very far away, the parallax shift is too small to measure. For more distant stars, astronomers use Cepheid variable stars. These stars change in brightness over time, and the period of that variation allows astronomers to figure out their true brightness. Comparing the apparent brightness of the star to its true brightness allows the astronomer to calculate the distance to the star. This relationship was discovered by American astronomer Henrietta Leavitt in 1912 and used in the early part of the twentieth century to find distances to many globular clusters.
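

To put rough numbers on the parallax idea, here is a tiny sketch (Python; the 0.1-arcsecond parallax is just an example value). A star's distance in parsecs is approximately 1 divided by its parallax angle in arcseconds, and one parsec is about 3.26 light-years:

# Sketch: distance from parallax, using d(parsecs) = 1 / p(arcseconds)
parallax_arcsec = 0.1                            # example parallax shift
distance_parsecs = 1 / parallax_arcsec
distance_light_years = distance_parsecs * 3.26   # 1 parsec is about 3.26 light-years
print(f"{distance_parsecs:.1f} pc, about {distance_light_years:.0f} light-years")
# -> 10.0 pc, about 33 light-years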
2007-09-10 00:50:43 UTC
It's a measurement which doesn't noticeably affect the value of whatever it is you are measuring.



One example is measuring the speed of a train: clocking the time as the front of your carriage passes one marker, measuring the elapsed time until it passes another, and then calculating the average speed from the known distance between the markers is fairly unobtrusive. Hauling on the emergency cord halfway through and then asking the guard what speed they were doing isn't.
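

A quick sketch of that arithmetic, with an invented marker spacing and timing (Python):

# Sketch: average train speed from two trackside markers
distance_between_markers_m = 400.0   # assumed marker spacing of 400 metres
elapsed_seconds = 18.0               # time between the two markers passing your window
speed_m_per_s = distance_between_markers_m / elapsed_seconds
speed_km_per_h = speed_m_per_s * 3.6
print(f"about {speed_m_per_s:.1f} m/s, or {speed_km_per_h:.0f} km/h")   # ~22.2 m/s, ~80 km/h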
April
2016-04-03 02:29:30 UTC



That's about 50 times as far away as the next nearest galaxy to us, M31 in Andromeda. At that distance, the galaxy which the star is part of would just be a smudge, and a nova or supernova is the only star in it you could pick out as a star. Certain characteristics of its "spectrum" reveal how much matter it contains and what kind of reaction has caused it to go nova. That lets you calculate the absolute light energy which it should be emitting, and the difference between that and what you can actually see is caused only by its distance, which you can then calculate - allowing for dark matter, of course. After that, you can look at the red shift of its spectrum, caused by the speed it is receding at, and plot another point on Hubble's famous straight-line graph of recession speed versus galactic distance. With enough of Hubble's graph plotted in this way, you can determine any visible galaxy's distance by the red shift of the light emitted by all its ordinary stars, without waiting for one of them to go nova.
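

A hedged sketch of the two calculations described there (Python); the apparent magnitude, the typical Type Ia supernova absolute magnitude of about -19.3, and a Hubble constant of about 70 km/s per megaparsec are standard textbook values used purely for illustration:

# Sketch: standard-candle distance from the distance modulus, then Hubble's law
m = 19.0     # example apparent magnitude of the supernova as observed
M = -19.3    # typical peak absolute magnitude of a Type Ia supernova
H0 = 70.0    # Hubble constant, km/s per megaparsec (approximate)

# Distance modulus: m - M = 5 * log10(d_parsecs) - 5
d_parsecs = 10 ** ((m - M + 5) / 5)
d_mpc = d_parsecs / 1e6

# Hubble's law: recession speed is roughly proportional to distance
recession_km_s = H0 * d_mpc
print(f"distance about {d_mpc:.0f} Mpc, recession speed about {recession_km_s:.0f} km/s")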
Deck
2015-08-10 07:40:28 UTC
This Site Might Help You.





This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.