ARTICLE 99: Research Methods for Ph. D. and Master’s Degree Studies: The Layout of the Thesis or Dissertation, Part 3 of 9: Ethics in Research Part 1 of 3

Written by Dr. Hannes Nel

Do people still care about the truth?

Did people ever care about the truth?

Are opinions more important than facts?

And what will the implications be if the truth is no longer important, and opinions are more important than facts?

I discuss the principles of ethics in research in this article.

Ethics are typically associated with morality, that is, matters of right and wrong. You need to know, understand and accept the consensus amongst academic researchers about what is acceptable and not acceptable in the conduct of scientific inquiry. The following principles are fundamental to an ethical approach to research:

  1. Research should always respect and protect the dignity of participants in research. This requires sensitivity, empathy, and accountability towards the target group for your research. The greater the vulnerability of the participants in the research (community, author, expert, etc.), the greater the obligation of the researcher to protect the participant. To this end, you as the researcher should:
    1. Ensure that you know and understand the values, cultures and protocols of your target group. It might be necessary to be academically or culturally qualified to work with some communities.
    2. Consult experts on communities if you lack the qualifications, knowledge and cultural background to work with them.
    3. Share your findings honestly, clearly, comprehensively and accountably with only those who are entitled to have access to the findings.
    4. Report your findings, and the limitations thereof, openly and honestly so that peers and the public in general may scrutinise and evaluate them, keeping in mind that your findings may only be shared with certain people.
    5. Acknowledge and point out the possibility of alternative interpretations.
    6. Respect the right of fellow researchers to work with different paradigms and research methods, and accept it if they disagree with your findings and interpretations.
    7. Agree to disagree rather than defend your point of view fanatically in an effort to sway others.
    8. Honour the authority of professional codes in specific disciplines.
    9. Refrain from using your position for undeserved, corrupt or otherwise dishonest personal gain.
  2. Because ‘harm’ is defined contextually, ethical principles are more likely to be understood inductively rather than applied universally. That is, rather than a one-size-fits-all approach, ethical decision-making is best approached through the application of practical judgement related to the specific context.
  3. When making ethical decisions, you should balance the rights of participants with the social benefits of the research and your right to conduct the research. In different contexts the rights of subjects may outweigh the benefits of research.
  4. Adhering to ethical requirements is equally important at every stage of the research process.
  5. Ethical decision-making is a deliberate process, and you should consult as many people and resources as possible in the process, including fellow researchers, people participating in or familiar with the contexts or sites being studied, research review boards, ethics guidelines, published scholarship and, where applicable, legal precedent.

With the above principles in mind, the ethical issues that impact the most on research are:

  1. The notion of truth.
  2. Axiology.
  3. Codes of consent.
  4. No harm to the participants.
  5. Trust.
  6. Deception.
  7. Analysis and reporting.
  8. Plagiarism.
  9. Legality.
  10. Professionalism.
  11. Research ethics and society.
  12. Copyright and intellectual property rights.
  13. The originality of your research.
  14. Promulgation of results.

The notion of truth. Truth is largely governed by critical epistemology. Critical epistemology is an understanding of the relationship between power, cognitive reasoning and truth. This implies that the way we think about concepts, theory, philosophy and phenomena determines what we would regard as truth. You should uphold the epistemological principles that apply to all researchers, meaning that truth should be a product of logical reasoning and evidence. In terms of critical epistemology, however, we need to be careful – it is easy to twist your arguments to fit your preferences by describing them in terms of an unfounded epistemology. The need for and availability of power can erode logical truth. Sometimes writers and researchers work with a predetermined political agenda in mind, for example to gain support from a particular group or to promote a political objective, rather than to strive for scientific validity. You will only truly develop new knowledge or add to existing knowledge, that is, make a positive epistemological contribution to science, if you are objective and honest in your interpretation and analysis of information. This brings us to the epistemic imperative.

In the world of science our aim is to generate truthful (valid/plausible) descriptions and explanations of the world. This is called the epistemic intent of science. “Epistemic” is derived from episteme, the Greek word for “truthful knowledge”. We use “truthful” as a synonym for “valid” or “a close approximation of the truth”. We accept knowledge to be accurate and true when we have sufficient reason to believe that it is a logical and motivated representation or explanation of a phenomenon, event or process. There needs to be enough evidence to support such claims. It mostly takes time to accumulate evidence, and claims of truth must withstand repeated testing under various conditions in order to be accepted as valid or, at least, plausible.

“Instant verification” of a hypothesis or theory is largely impossible to achieve. Research takes place all the time, and scientific communities accept certain points of view, hypotheses or theories as valid and plausible, based on the best available evidence at a given point in time. However, new empirical evidence contradicting current “truth” can be revealed by new research at any time in the future. The obvious thing to do when this happens would be for scientists to revise their opinions and change their theories.

Commitment to “truth” is not the same as the search for certainty or infallible knowledge. Neither does it imply holding truth as absolute without any concern for time and space. The notions of “certainty” and “infallibility” would suggest that we can never be wrong. If we are to accept a particular point of view as “certain” or “infallible” we are in fact saying that no amount of new evidence can ever lead us to change our beliefs. This would obviously be a false stance, making a mockery of scientific enterprise. Life and the environment are dynamic concepts – not only do they change because of internal and external forces impacting on them, but we also discover flaws in our beliefs and perceptions. None of the paradigms that we have already discussed goes so far as to claim that truth is exact and perfectly final. Pre-modernism might be regarded as an exception by some. The commitment to true and valid knowledge is, therefore, not a search for infallible and absolute knowledge.

Even though we know that “truth” is a rather volatile concept, the “epistemic imperative” demands that researchers commit themselves to the pursuit of the most truthful claims about the world and the phenomena and events that have an impact on human beings. This has at least three implications:

  • The idea of an imperative implies that a type of “moral contract” has been entered into. It is neither optional nor negotiable. This “contract” is intrinsic to scientific inquiry. Every researcher and scientist should commit themselves to this contract. When you embark on a scientific project, or undertake any scientific enquiry, you tacitly agree to the epistemic imperative – to the search for truth. But the epistemic imperative is not merely an ideal or regulative principle. It has real consequences. This is evident in the way that the scientific community deals with any attempt to circumvent or violate the imperative.
  • The “epistemic imperative” is a commitment to an ideal. Its goal is to generate results and findings which are as valid or truthful as possible. The fact that it is first and foremost an ideal means that it might not always be attained in practice. All research, however, should represent steps closer to accuracy and truth. It seems to be unlikely, if not impossible, to achieve perfect accuracy and truth, amongst other things because of methodological problems, practical constraints (such as lack of resources) and a dynamic environment. We are often required to settle for results that are, at best, approximations to the truth.
  • The meaning that we attach to the concept “truth” presupposes a loose, somewhat metaphorical relationship between our scientific propositions and the world. Contrary to the classical notion that “truth” means that what we regard as reality and what reality actually is are the same, we accept that this relationship is not that simple. The notion of “fit”, “articulation” or “modelling” is a more appropriate term for two reasons: Firstly, it suggests that a point of view can be relatively true. Articulation is not an absolute notion but allows for degrees of accuracy. Secondly, the term “articulation” can refer to the relationship between our points of view and the world (the traditional notions of “representation” or “correspondence”), or to the relationships between our points of view. In the latter case, we would use the term “coherence”. This means that “articulation”, “fit” or “modelling” is used to refer to both empirical and conceptual correspondence. When our conceptual system exhibits a high degree of internal coherence, we could also speak of the concepts as “fitting”, “being articulated” or “being modelled” well.

Summary

Ethics deal with matters of right and wrong.

The principles of an ethical approach to research are:

  1. Respect and protect the dignity of participants in research.
  2. Base ethical decision-making on the application of practical judgement in a specific context.
  3. Balance the rights of participants with the social benefits of the research and your right to conduct the research.
  4. Maintain and apply sound ethics throughout the research process.
  5. Treat all participants and stakeholders in your research ethically.

Truth is largely governed by critical epistemology.

It should be the product of logical reasoning and evidence.

The need for and availability of power can erode logical truth.

Always keep the epistemic imperative in mind when conducting research.

The implications of the epistemic imperative are:

  1. A moral contract is intrinsic to scientific inquiry.
  2. All research should represent steps closer to accuracy and truth.
  3. Truth is not always absolute or timeless.

Close

On the questions that I posed in my introduction –

Not all people care about the truth.

But, as you know, this is nothing new.

Not all people seem to have the ability to foresee the consequences of dishonesty for individuals, families, communities, cities, countries, the world.

Ironically, a lack of visionary thinking has a nasty way of causing great damage to the myopic in the end.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 98: Research Methods for Ph. D. and Master’s Degree Studies: The Layout of the Thesis or Dissertation Part 2 of 9 Parts

Written by Dr. Hannes Nel

I discuss deconstruction and empirical generalisation in this article.

Is deconstruction just a euphemism for plagiarism?

After all, what we do when we deconstruct a concept, argument, knowledge or philosophy, is to take what somebody else said or wrote and change it to serve our own purpose.

You be the judge if deconstruction is theft or progression.

Deconstruction

Deconstruction is not an independent research method as such, but rather a way in which data that you collected for your research is ‘unpacked’ into more useful chunks that belong together and that can be articulated to the purpose of your research. To rearrange the data, you need to identify the right meanings for terminology and concepts.

Constructivism as a paradigm addresses deconstruction. Some academics are of the opinion that deconstruction belongs with post-structuralism. However, it is important to also discuss it separately as part of the process of research methodology seeing that it is necessary, regardless of your paradigmatic preference.

To clarify the difference between constructivism as a paradigm and deconstruction as a research method – constructivism deals with the way in which people perceive their research environment; deconstruction deals with the way in which you, as a researcher, will contextualise and articulate the research data that you collect to convey the ‘message’ of your investigation.

When deconstructing data that you collected, you will group them under headings and sub-headings that will enable you to offer the data in harmony with the purpose of your research, hopefully on a higher level of abstraction or at least in a more creative manner. When studying towards a doctoral degree you will need to ‘create’ new data, which will probably include some deconstruction.

When doing research for a master’s degree and even more so for a doctoral degree, you will need to group your data into a set of categories and transform the groupings into abstract types of philosophies and knowledge which you need to analyse further. Dedicated computer software enables you to code your data so that deconstruction is much easier to accomplish by just grouping pieces of information under specific codes and then analysing and recombining the information into new messages. In this manner you can reconstruct the data that you collected into a logical, accurate and authentic thesis or dissertation.
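To make the grouping idea concrete, here is a minimal Python sketch. The extracts, sources and codes are made up, and the sketch only illustrates the principle of grouping coded pieces of information under headings; it does not reproduce any particular software package.

```python
from collections import defaultdict

# Hypothetical extracts from interviews and literature notes, each already
# tagged with one or more codes during a first reading of the data.
extracts = [
    {"source": "Interview 1", "text": "Learners struggle to find study time.", "codes": ["time management"]},
    {"source": "Field notes", "text": "Part-time students study mostly at night.", "codes": ["time management", "part-time study"]},
    {"source": "Journal article", "text": "Workplace support improves completion rates.", "codes": ["institutional support"]},
]

# 'Deconstruct' the data: regroup every extract under the codes assigned to it,
# so that material that belongs together ends up in the same chunk.
grouped = defaultdict(list)
for extract in extracts:
    for code in extract["codes"]:
        grouped[code].append(f'{extract["text"]} ({extract["source"]})')

# Recombine each group into a short, structured note that can feed into a
# heading or sub-heading of the thesis or dissertation.
for code, snippets in grouped.items():
    print(f"Theme: {code}")
    for snippet in snippets:
        print(f"  - {snippet}")
```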

Deconstructing data is not about disclosing an already established, underlying or privileged truth, thereby committing plagiarism. Rather, it is about synthesising existing data in such a manner that the inherent truth of the data is extracted and offered as an alternative, higher level construction of reality. In the case of doctoral studies such deconstruction should lead to alternative meanings, aligned with the problem statement, problem question or hypothesis of the research.

It stands to reason that the products of a research deconstruction need to be tested by checking with readers, and by exploring, especially with your study leader, the extent to which the set of deconstructed components captured in your thesis or dissertation is in line with the general usage and meaning of the components, while being articulated to the purpose and requirements of your research and contextualised to the scope and range of your research target group.

As is often the case with master’s and doctoral studies, the deconstructed information may apply more widely than just the target group for the study. The deconstructed data may not be limited to component meanings associated with only your abstracted categories as defined in your thesis or dissertation. How you group your data is up to you and you may test new concepts and their technical or academic definitions. The dominant logic of the process of deconstruction is abduction, although induction plays a part in testing the scope and range of the constructed concepts and their meaning in terms of a variety of related everyday meanings.

For the sake of efficiency, you will start with meaningful components that you have already deconstructed. By linking subsets of components according to plausible themes, which should be the problem statement or hypothesis of your research broken down into abstracted categories, you can produce a compact set of concepts and associated academic meanings articulated to the purpose of your research. These ‘sets of concepts’ are the typologies through which you communicate your arguments in a thesis or dissertation.

Typologies not only provide descriptions but also enable a clear exchange of deeper understanding about the meanings of words and concepts with which you work in your thesis or dissertation. Hence, typologies answer ‘what’ questions but not ‘why’ questions. Stated differently, your typologies reflect the ontology of your research, which you will need as the foundation for the epistemology, which would be your discussion, analysis and explanations of your arguments.

The epistemology of your thesis or dissertation proposes and tests discriminating insights about associations between elements of the regulatory and the primary ‘why’ questions. Because a theory or argument should at least hold across the sample for your research, the testing should be applied to each unit of a selected sample to ensure validity with a reasonable probability of being accurate. You will not statistically calculate the probability that your sample is large enough to provide a good measure of accuracy when conducting qualitative research. However, you should take great pains in ensuring the accuracy of your findings, for example by making your sample as large as possible, consulting as many different sources of information as you can reasonably obtain, asking readers for comment, arranging focus groups, etc.

Empirical generalisation

Empirical generalisation should not be confused with empiricism, which is a paradigm, as you should know by now. Empirical generalisation refers to studies based on the collection and presentation of evidence to prove a hypothesis or claim in the form of a problem statement or question. The evidence needs to be shown to be accurate, valid and credible. As such it represents the most basic requirements for qualitative research.

Empirical research mostly refers to evidence that can be observed and measured, which implies quantitative research. It can be directed at the ontology of a phenomenon, requiring you to focus on “what”, as well as the epistemology of phenomena, requiring answers to questions like “how many?”; “why?”; “what are the results?”; “what is the effect?”; and “what caused it?”.

Summary

Deconstruction:

  1. Is not an independent research method.
  2. Is used to group and articulate data to the purpose of research.
  3. Fits in well with constructivism.
  4. Can be rendered efficient through coding.
  5. Synthesises existing data to identify the inherent truth in the data.
  6. Needs to be checked by other stakeholders in the research.

On doctoral level, you will:

  1. Create new data from existing data.
  2. Escalate data to a higher level of abstraction.
  3. Develop or identify alternative meanings for words and concepts aligned with the problem statement, research question or hypothesis for your research.
  4. Mostly use induction.

On master’s degree level, you will:

  1. Deconstruct data to present it in a more creative manner.
  2. Mostly use deduction.

On doctoral and master’s degree level:

  1. Data need to be grouped into a set of categories and transformed into abstract types of philosophies and knowledge.
  2. You should aim at generalisation of your findings.
  3. You must ensure that your findings are logical, accurate and authentic.
  4. Typologies can be used:
    1. To communicate arguments in your thesis or dissertation.
    2. To describe concepts relevant to your research.
    3. To enable a clear exchange and deeper understanding of the meanings of concepts and words.
    4. To serve as an ontology upon which the epistemology for your research can be developed.

Empirical generalisation means providing solutions to a research problem, answers to a research question or evidence to prove or disprove a hypothesis.

Evidence must be accurate, valid and credible.

Close

So, what do you think?

Is deconstruction just a euphemism for plagiarism?

Let’s put this question on ice for the time being.

The three articles following on this one deal with ethics.

Perhaps we will be in a better position to answer the question after we have taken a closer look at ethics and what it means.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 97: Research Methods for Ph. D. and Master’s Degree Studies: The Layout of the Thesis or Dissertation Part 1 of 9 Parts

Written by Dr. J.P. Nel

This article is an introduction to eight more articles on how to structure your thesis or dissertation.

I already pointed out in my initial articles that a thesis for master’s degree studies and a dissertation for doctoral studies are not the same.

Even so, there are enough similarities so that we can discuss them together.

Besides, it is a good idea to use the thesis that you write for your master’s degree as a learning opportunity for when you embark on doctoral studies.

And most universities will not object if you write and approach your thesis as you would a dissertation.

I will point out salient differences between a thesis and a dissertation.

Research without writing is of little purpose. There are, of course, other ways of communicating your research findings, most notably through oral presentation, but putting them on paper remains of paramount importance. The thesis or dissertation remains the major means by which you should communicate your findings.

It is something of a paradox, therefore, that many researchers are reluctant to commit their ideas to paper. Then again, not all people like writing and some might claim that it requires writing talent. For those who enjoy writing, this can be the most enjoyable part of the research process, because when compiling your research findings, you need to take what you wrote in the body of your document and create something new from it. Even though you have your research data to fall back on, you still need to think creatively. This takes some courage, hard work and lots of self-discipline.

It is always important to do immaculate and professional research. However, your biggest challenge is to develop an interesting and well-structured thesis or dissertation from the research data. Any research paper is based upon a four-step process. Firstly, you need to gather lots of general, though relevant, information. Secondly, you need to evaluate, analyse and condense the information into what is specifically relevant to a hypothesis, problem statement or problem question. Thirdly, you need to come to conclusions about the information that you analysed, and formulate findings based on your conclusions. Finally, your thesis or dissertation should again become more general as you try to apply your findings to the world in general or at least more widely than the target group for the research.

Different disciplines will use slightly different thesis or dissertation structures, so the structure described in the following nine articles is based on some basic principles. The steps given here are the building blocks of constructing a good thesis or dissertation.

A thesis or dissertation should clearly and thoroughly indicate what you have done to solve a problem that you identified. In addition, it should be factual, logical and readable. A good thesis or dissertation should be comprehensive and precise. Most importantly, though, it must be professionally researched.

Some of the contents of your research proposal will not have changed and should be included in your dissertation, as should some of the information that did change, but then in its improved format or content. Your problem statement, question or hypothesis, for example, might have changed. The literature and other sources of information that you consulted will have changed and should include many more sources than the original list.

You should ensure that the time set aside for writing sessions is sufficient, as constant restarting and trying to find out where you left off when you last worked on the thesis wastes time and interferes with your thinking processes. If you are fully employed you should write after hours at least one hour per day, five days a week. Even then you will need to catch up by working over weekends, long weekends and holidays.

It is when writing a thesis or dissertation that you will really come to appreciate your desktop or laptop computer. When writing a thesis or dissertation, you should:

  1. Manage your time well.
  2. Make electronic backups of your work as often as possible.
  3. Plan each chapter in detail and structure your thesis or dissertation before you start writing. The layout of your thesis or dissertation may change over your period of study. Even so, good preparation is still important.
  4. First write your draft, then edit it critically and eliminate unnecessary material. Do not expect to get it right the first time around. Review is part of postgraduate studies.
  5. Motivate the necessity of the study and explain the goal clearly.
  6. Give your study leader and anybody else who might read your thesis or dissertation a clear understanding of the research problem. The implications should be explained in such a way that everyone reading the thesis or dissertation has the same orientation towards the problem.
  7. Provide sufficient theoretical background to base the study on.
  8. Clearly describe the data collection methods and aids used.
  9. Provide sufficient and accurate data and indicate exactly how the data was used to solve the research problem.
  10. Conform to the university’s requirements for typing, printing and binding, and also meet the requirements set out formally in the learning institution’s postgraduate policy and procedure.

We have come full circle from discussing the research process, all the concepts that you should apply and the tools that are available, to unpacking the research in the form of a thesis or dissertation. The following nine articles, therefore, return to the beginning of the research process and deal with the entire process, the only difference being that now we focus on putting the thesis or dissertation on paper.

Summary

The thesis or dissertation is the major means by which to communicate research findings.

Writing a thesis or dissertation requires creative thinking, some courage, hard work and lots of self-discipline.

You must find out in advance what the university’s requirements, rules, regulations and procedures for master’s or doctoral studies are and abide by them.

And you must manage time well.

Conducting research and writing a thesis or dissertation mostly consist of four main steps:

  1. Gather information.
  2. Evaluate, analyse and condense the information.
  3. Come to conclusions and findings.
  4. Apply your findings in practice.

The requirements for a thesis or dissertation are:

  1. It must clearly and thoroughly indicate what you have done to solve a problem.
  2. It must be comprehensive and precise.
  3. You must research the topic of your research professionally.
  4. In the case of doctoral studies your dissertation must align with your initial study proposal.
  5. You should continually make electronic backups of your work.
  6. You must plan and structure your thesis or dissertation before you start writing.
  7. You should review your work regularly.
  8. You should do enough literature study.
  9. You must clearly motivate the importance and value of your research.
  10. You must explain the research problem.
  11. You must clearly describe how you will collect and analyse data.
  12. You must show how you use the data that you collect in your thesis or dissertation.

Close

The eight articles following on this one are critically important for your further studies.

You can use them to guide your research process.

You can also use them to do a self-evaluation of your work before you submit the final manuscript for your thesis or dissertation.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 96: Research Methods for Ph. D. and Master’s Degree Studies: Methods for Organising and Analysing Data: Part 2 of 2 Parts

Written by Dr. Hannes Nel

Research has shown that most people look for excuses to fail rather than for ways in which to achieve success.

Nobody throws in the towel without rationalising about why their decision is justified.

And that is the difference between a winner and a loser.

Success always requires perseverance.

The most important decision that you must make before embarking on master’s degree or doctoral studies is that you will succeed.

Do not even think of failure as an option.

I discuss memoing and reflection on the analysis process in this article.

Memoing. Memos are extremely versatile tools that can be used for many different purposes. Memoing refers to any writing that you do in relation to the research other than your field notes, transcription or coding. A memo can be a brief comment that you type or write in the margin of your notes on an interview, notes on observations that you made during field work, your own impressions or ideas inspired by field work or literature study, an essay on your analysis of data, provisional conclusions and even possible findings. The basic idea behind memoing is to get ideas, observations and impressions down on paper to serve as the foundation for reflection, analytical insight and remembering spur-of-the-moment ideas. Memos can also be coded in order to save them as part of the other data that you collected for further analysis.

Memos capture your thoughts on the main information that you recorded and can be most useful for creating new knowledge and findings. In dedicated computer software that supports them, memos are similar to codes, but usually contain longer passages of text. They, furthermore, differ from quotations in that quotations are extracts from primary documents, while memos represent your personal observations and impressions.

Although mostly recorded independently, memos may refer to other memos, quotations and codes. They can be grouped according to types (method, theoretical, descriptive, etc.), which is helpful in organising and sorting them. Memos may also be assigned to primary documents so that they can be analysed with associated other coded data.

Memos are one of the most important techniques you have for recording and developing your ideas. You should, therefore, think of memos as a way of recording or presenting an understanding you have already reached. Memos should include reflections and conclusions on your reading and ideas as well as your fieldwork. They can be analytical, conceptual, theoretical or philosophical in nature. Memos can be written on almost anything that might have a positive impact on your research findings, including methodological issues, ethics, personal reactions, sudden understanding of previously complex concepts, misconceptions, etc. Memos should, therefore, be written in narrative format, including logical reasoning about the elements of your research. 

Writing memos by means of dedicated computer software is an important task in every phase of the qualitative research process. The ideas captured in memos are often the “pieces of the puzzle” that are later put together when you make conclusions and compile findings. Memos might be rather short in the beginning and become more elaborate as you gain more clarity on your arguments and the nature of the data or observations that you are investigating.

Memos can stand alone, in which case they would explain data that deals with a particular and important issue relevant to the purpose of the research. Memos can also be linked to other memos, quotations or codes, in which case the linked objects should refer to associated data and arguments to form a new, reconstructed or deconstructed narrative. Such associated memos, quotations and codes can contain methodological notes; they can be used as a bulletin board to exchange information between team members; they can be used to write notes about the analytical process or to keep a journal of to-dos; and conclusions and findings can be deduced from them. Memos may also serve as a repository for symbols, text templates, and embedded objects (photos, figures, diagrams, graphs, etc.) that you may want to insert into primary documents or other memos.
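One way to picture how a memo can stand alone and still be linked to codes, quotations and primary documents is as a simple record with link fields. The fields and example values below are hypothetical; dedicated software manages these links for you.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Memo:
    title: str                      # short, concise name used to find the memo later
    memo_type: str                  # e.g. "methodological", "theoretical", "descriptive"
    text: str                       # the narrative content of the memo
    linked_codes: List[str] = field(default_factory=list)       # codes the memo refers to
    linked_quotations: List[str] = field(default_factory=list)  # extracts from primary documents
    linked_documents: List[str] = field(default_factory=list)   # primary documents it is assigned to

# A hypothetical methodological memo linked to a code and a primary document.
memo = Memo(
    title="Sampling worries",
    memo_type="methodological",
    text="Interviewees from the night class may be over-represented; consider a second round of interviews.",
    linked_codes=["sampling"],
    linked_documents=["Interview transcripts, June"],
)

# Grouping memos by type, as described above, is then a matter of filtering on memo_type.
methodological_memos = [m for m in [memo] if m.memo_type == "methodological"]
print([m.title for m in methodological_memos])
```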

The difference between memos and codes. A code can be just one word or a heading, forming a succinct, dense descriptor for a concept or argument emerging when you study data closely with the intent of identifying data elements relative to the purpose and topic of your research. Complex findings can be reduced to markers of important and relevant data.

A memo is normally longer than a code. A memo is a record of the process of cognitive thinking that you would go through when collecting data through observation, literature study, interviewing, etc. Words and short sections of a memo can be coded. Like codes, memos have short and concise names. These names, or titles, are used for displaying memos in browsers, and help to find specific memos.

The similarity and difference between memos and comments. The best way in which to compare memos and comments is probably to compare them with codes. Codes should be seen as “headings” for concepts. Memos and comments both refer to lengthy texts and both are generated by you as the researcher.

However, comments belong with just one entity or argument. You can, for example, comment on a particular primary data source, such as a book, a report, minutes of a focus group meeting, etc. Memos, on the other hand, can be associated with more than one object or source of information. Memos, furthermore, can contribute to your collection of data in more than one way, for example as theoretical data, philosophical data, descriptions of methods, general comments, etc. Memos can also be free-standing, while comments must always be linked to other data.

Reflection. The last step in data analysis is reflection. Reflection has to do with the ability to stand back from and think carefully about what you have done or are doing. The following questions will help you develop your ability to reflect on your analysis:

  1. What was your role in the research?
  2. Did you feel comfortable or uncomfortable? Why?
  3. What action did you take? How did you and others react?
  4. Was it appropriate? How could you have improved the situation for yourself and others?
  5. What could you change in the future?
  6. Do you feel as if you have learnt anything new about yourself or your research?
  7. Has it changed your way of thinking in any way?
  8. What knowledge, from theories, practices and other aspects of your own and others' research, can you apply to this situation?
  9. What broader issues – for example ethical, political or social – arise from this situation?
  10. Have you recorded your thoughts in your research diary?

Summary

Memos are versatile tools that you can use in the analysis of data. You can use memos to do the following:

  1. Integrate data in your thesis or dissertation.
  2. Consolidate your impressions and ideas into provisional conclusions and possible findings.
  3. Serve as the foundation for reflection, analytical insight and remembering spur-of-the-moment ideas.
  4. Store interrelated ideas as codes.
  5. Capture your thoughts on the main information that you recorded.
  6. Develop new knowledge and findings.

Memos:

  1. May refer to other memos, quotations and codes.
  2. Can be grouped according to type.
  3. May be assigned to primary documents.
  4. Are a way of recording or presenting an understanding that you have already reached.
  5. Should include reflections and conclusions on your reading, ideas and fieldwork.
  6. Can be analytical, conceptual, theoretical or philosophical.
  7. Can be written on almost anything that might add value to your research.
  8. Should be written in a narrative format.
  9. May serve as a repository for symbols, text templates and embedded objects.

Memos are similar to codes, but usually contain longer passages of text.

Memos differ from comments in that comments belong with just one entity or argument, while memos can be associated with more than one object or source of information.

Also, memos can be free standing while comments must always be linked to other data.

The last step in data analysis is reflection.

Close

With this article we cross the bridge from data analysis to the layout of the thesis or dissertation.

Once you know how to structure a thesis or dissertation, you should be able to write and submit it.

There is one more step before you submit your thesis or dissertation for assessment, and that is to review your work.

The people who successfully completed a thesis or dissertation in the past are pretty much the same as you.

They are intelligent, creative and willing to work hard.

But they are not super human beings.

And there is no reason why you cannot achieve what they did.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 95: Research Methods for Ph. D. and Master’s Degree Studies: Methods for Organising and Analysing Data Part 1 of 2 Parts

Written by Dr. Hannes Nel

Data needs to be organised before it can be analysed.

Depending on whether a qualitative or quantitative approach is followed, the data needs to be arranged in a logical sequence or quantified.

This can be done by quantifying, sequencing, coding or memoing the data.

I discuss quantifying, sequencing and coding data in this article.

I will discuss memoing data in my second video on methods for organising and analysing data.

Quantifying data. Most data analysis today is conducted with computers, ranging from large, mainframe computers to small, personal laptops. Many computer programs are dedicated to analysing social science data, and it would be worth your while obtaining and learning to use such software if you need to write a thesis or dissertation, even if you do not exclusively use quantitative research methodology, because you might need to interpret some statistics or you might use some quantitative methods to enhance, support or corroborate your qualitative findings. However, you will probably not need much more than office software if your research is largely qualitative.

Almost all research software requires some form of coding. This can differ substantially from one software program to the next, so you will need to find out exactly how it works even before you purchase the software. Your study leader will probably know which software will be the most suitable for your research and give you advice on this. You will only quantify data if statistical analysis is necessary, so do not do this unless you know that you will need it in your thesis or dissertation.

Many people are intimidated by empirical research because they feel uncomfortable with mathematics and statistics. And indeed, many research reports are filled with unspecified computations. The role of statistics in research is quite important, but unless you write an assignment or thesis on statistics or mathematics you will not be assessed on your statistical or mathematical proficiency. That is why most universities offer statistical services. There are several private and public universities also offering such services, so use them. There is also nothing wrong with purchasing dedicated software to do your statistical analysis with, although it might be necessary to do a course on the software before you will be able to utilise it properly.

Sequencing the data. Many researchers are of the opinion that organising the data in a specific sequence offers the clearest available picture of the logic of causal analysis in research. This is called the elaboration model. This method, which typically makes use of contingency tables, portrays the logical process of scientific analysis.
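If you want to see how little work a contingency table involves, here is a minimal sketch using the pandas library (assumed to be installed). The variables and responses are invented, and the sketch shows only the mechanics of cross-tabulation, not the elaboration model as a whole.

```python
import pandas as pd

# Hypothetical survey records: whether respondents completed their degree,
# split by whether they had employer support.
data = pd.DataFrame({
    "employer_support": ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
    "completed":        ["yes", "yes", "no", "yes", "yes", "no", "no", "no"],
})

# A contingency table cross-tabulates the two variables so that the pattern
# of association (if any) becomes visible at a glance.
table = pd.crosstab(data["employer_support"], data["completed"], margins=True)
print(table)
```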

When collecting material for interpretive analysis, you experience events, or the things people say, in a linear, chronological order. When you then immerse yourself in field notes or transcripts, the material is again viewed in a linear sequence. This sequence can be broken down by inducing themes and coding concepts so that events or remarks that were far away from each other in a document, or perhaps even different documents, are now brought close together. This gives you a fresh view on the data and allows you to carefully compare sections of text that appear to belong together. At this stage, you are likely to find that there are all sorts of ways in which extracts that you grouped together under a single theme differ, or that there are all kinds of sub-issues and themes that come to light.

Exploring themes more closely in this way is called elaboration. The purpose is to capture the finer nuances of meaning not captured by your original, possibly crude, coding system. This is also an opportunity to revise the coding system – either in small ways or drastically.  If you use software it might even be necessary to start your coding all over again. This can be extremely time-consuming, but at least every time you start over you end up with a much better structured research report.  

Coding. In most qualitative research, the original text is a set of field notes, data obtained through literature study, interviews, and focus groups. One of the first steps that you will need to take before studying and analysing data is to code the information. You can use cards for this, but dedicated computer software can save you time, effort and costs. Codes are typically short pieces of text referencing other pieces of text, graphical, audio, or video data. From a methodological standpoint, codes serve a variety of purposes. They capture meaning in the data. They also serve as tools for finding specific occurrences in the data that cannot be found by simple text-based search techniques. Codes also help you organise and structure the data that you collected.

Their main purpose is to classify many textual or other data units in such a manner that the data that belongs together can be grouped as such for easy analysis and structuring. One can, perhaps, think of coding as “indexing” your data. You can also see it as a way to mark keywords so that you can find, retrieve and group them more easily at a later stage. A code should be kept short and should not be long-winded.

Codes can also be used to classify data at different levels of abstraction, to group sets of related information units together for the purpose of comparison. This is what you would often use to consider and compare related arguments to make conclusions that can be the motivation for new knowledge. Dedicated computer software does not create new knowledge; it only helps you as the researcher to structure existing knowledge and experiences in such a manner that it will be easier for you to think creatively, that is to create new knowledge.
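The "indexing" idea can be illustrated with a few lines of Python. The segments and codes below are hypothetical; the point is that retrieving by code brings together material that belongs together even when a simple text search would miss it.

```python
# A tiny 'index' of coded segments. Note that neither of the first two segments
# contains the words "time management", so a keyword search would not find them.
coded_segments = [
    {"text": "I only get to my books after the children are asleep.", "codes": ["time management"]},
    {"text": "Weekends are the only quiet period I have.", "codes": ["time management", "study environment"]},
    {"text": "My employer gives me one study day per month.", "codes": ["institutional support"]},
]

def retrieve(code):
    """Return every segment that carries the given code."""
    return [segment["text"] for segment in coded_segments if code in segment["codes"]]

print(retrieve("time management"))
```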

Formal coding will be necessary if you make use of dedicated research software. Even if you do not use research software, you will probably need a method of coding to arrange your data according to the structure of your thesis or dissertation. Your original data will probably include additional data, such as the time, date and place where the data was collected.

Another purpose of coding data is to move to a higher conceptual level. The codes will inevitably represent the meanings that you infer from the original data, thereby moving closer towards the solution of your problem statement, or confirmation or rejection of your null hypothesis. By coding data, you will, of course, rearrange the data that you collected under different headings representing steps in the research process.

Five coding procedures are popularly used: open coding, in vivo coding, coding by list, quick coding and free coding.

With most qualitative research software, you can create codes first and then link them to sections in your data. Creating new codes is called open coding. The nature of the initial codes, which can be referred to as Level 1 codes or open codes, can vary and might change as you progress with your research. You should give a name for each new code that you open, and you can usually create one or more codes in a single step. These codes can stick closely to the original data, perhaps even reusing the exact words in the original data. Such codes can be deduced from research questions. In vivo coding is mostly used for this purpose. 

In vivo coding means creating a code for selected text as and when you come across text, or just a word in the text, that can and should serve as a code. This would normally be a word or short piece of text that would probably appear in other pieces of data that should be linked and grouped with the data in which you identified the code.

If you know where you are going with your study, you will probably create codes first (up front), then link them to sections of data. This would be coding by list. Coding by list allows you to select existing codes from a code list that you prepared in advance. You would typically select one or more codes associated with the current data selection.

You can also create codes as you work through your data, which would then be quick coding. In the case of quick coding, you will continue with the selected code that you are working with. This is an efficient method for the consecutive coding of segments using the most recently used code.

You can create codes that have not yet been used for coding or creating networks. Such codes are called free codes and they are a form of quick coding, although they can be prepared in advance. The reasons why you would create free codes can be:

  1. To prepare a stock of predefined codes in the framework of a given theory. This is especially useful in the context of teamwork when creating a base project.
  2. To code in a “top-down” (or deductive) way with all necessary concepts already at hand. This complements the “bottom-up” (or inductive) open coding stage, in which concepts emerge from the data.
  3. To create codes that come to mind during normal coding work and that cannot be applied to the current segment but will be useful later.
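The differences between these coding procedures can be sketched in a few lines. The code names and the interview fragment below are invented, and the sketch is only an illustration of the logic, not of any particular software package.

```python
# A minimal contrast between coding by list (codes prepared up front) and
# open / in vivo coding (codes created as the data is read).
code_list = ["motivation", "time management", "supervision"]   # prepared in advance

fragment = "My supervisor answers quickly, but finding quiet evenings is the hard part."

applied_codes = []

# Coding by list: select an existing code that fits the fragment.
if "supervisor" in fragment:
    applied_codes.append("supervision")

# Open coding: a new concept emerges from the data, so a new code is created.
new_code = "study environment"
if new_code not in code_list:
    code_list.append(new_code)
applied_codes.append(new_code)

# In vivo coding: reuse the participant's own words as a code.
applied_codes.append("quiet evenings")

print(applied_codes)   # ['supervision', 'study environment', 'quiet evenings']
```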

It will be easier to code data if you already have a good idea of what you are trying to achieve with your research. Sometimes the data will actually “steer” you towards codes that you did not even think of in the beginning. This is typical of a grounded theory approach, although you should always keep an open mind about your research, regardless of which approach you follow. Coding also helps to develop a schematic diagram of the structure of your thesis or dissertation. This can be based on your initial study proposal. A mind map can, for example, be used to structure your research process and to identify initial codes to start with.

A code may contain more than a single word but should be concise. There should be a comment area on your screen that you can use to write a definition for each code, if you need one. As you progress in doing the first level coding, you may start to understand how your data might relate to broader conceptual issues. Some of your field experiences may in fact be sufficiently similar so that you might be able to group different coded data together on a higher conceptual level. Your coding has then proceeded to a higher set of codes, referred to as Level 2 or category codes.

After a code has been created, it appears as a new entry in several locations (drop-down list, code manager). In this respect the following are important to remember:

  1. Groundedness: Groundedness refers to the number of quotations associated with the code. Large numbers indicate strong evidence already found for this code.
  2. Density: The number of codes connected to this code is indicated as the density. Large numbers can be interpreted as a high degree of theoretical density.
  3. Comment: The tilde character “~” can, as an example, be used to flag commented codes. It is not used for codes only but for all commented objects.
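Groundedness and density are simply counts, as the following minimal sketch illustrates. The codes, quotations and links are hypothetical.

```python
# Hypothetical code-to-quotation and code-to-code links, used to illustrate
# how groundedness and density are counted.
quotations_per_code = {
    "time management": ["Q1", "Q4", "Q7"],
    "supervision": ["Q2"],
}
links_between_codes = {
    "time management": ["part-time study", "motivation"],
    "supervision": [],
}

for code in quotations_per_code:
    groundedness = len(quotations_per_code[code])   # quotations associated with the code
    density = len(links_between_codes[code])        # other codes connected to the code
    print(f"{code}: groundedness={groundedness}, density={density}")
```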

It is not only text that can be coded. You can also code graphic documents, audio and video material. There are many other ways in which codes can be utilised, for example they can be sorted, modified, renamed, deleted, merged and of course reported.

Axial coding. Axial coding is the process of putting data back together after it has been restructured by means of open coding. Open coding allows you to select data that belong together (under a certain code or sub-code) taken from a variety of sources containing the original or primary data. Categories of data are, thus, systematically developed and linked with subcategories. You can then develop a new narrative through a process of reconstruction. The new narrative might apply to a different context and should be articulated to the purpose of your research.

The articulation of selected data can typically relate to a condition, strategy or consequences. Data relating to a condition or strategy should address conditions that lead to the achievement of the purpose of the study. The purpose of the study will always be to solve a problem statement or question or to prove or disprove a null hypothesis. Consequential data include all outcomes of action or interaction.

Selective coding. Selective coding refers to the process of selecting a core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development. Categories are, thus, integrated and refined. The core category would be the central phenomenon to which all the other categories are linked. To use an example from fiction: in a novel you will identify the plot first, then the storyline, which you should analyse to identify the elements of the storyline that relate to the plot. From this you should be able to deduce lessons learned or a moral for the story.

Summary

Data is mostly organised by making use of dedicated computer programmes.

Most such computer programmes require some form of coding.

Data can be sequenced by following an elaboration model.

Contingency tables are mostly used to portray the logic of scientific analysis.

Data is often analysed in a linear, chronological order.

Codes are typically short pieces of text referencing other pieces of text, graphical, audio or video data.

Codes:

  1. Capture meaning.
  2. Serve as tools for finding specific occurrences in the data.
  3. Help you to organise and structure the data.
  4. Classify textual or other data units into related groups and at different levels of abstraction.

Dedicated computer software does not create new knowledge.

Five coding procedures are popularly used.

They are open coding, in vivo coding, coding by list, quick coding and free coding.

Open coding means creating new codes.

In vivo coding means creating a code for selected text as and when you come across text, or just a word in the text, that can and should serve as a code.

Coding by list is used when you know where you are going with your study so that you can create the codes even before collecting data.

Quick coding means creating codes as you work through your data.

Free codes are codes that have not been used yet. They can be the result of coding by list or quick coding.

To the five coding procedures should be added axial coding and selective coding.

Axial coding is the process of putting data back together after it has been restructured by means of open coding.

Selective coding refers to the process of selecting a core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development.

You should always keep an open mind about your research and the codes that you create.

Close

If what I discussed here sounds confusing and alien, then it is probably because of what we discussed under schema analysis in my previous video.

It is unlikely that the level of language used here is beyond you.

If that were the case, you would not have watched this video.

No doubt you will understand everything if you watch this video again after having tried out one or two of the computer programmes that deal with especially qualitative research.

Enjoy your studies.

Thank you.

Continue Reading

ARTICLE 94: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Part 7 of 7 Parts

Written by Dr. Hannes Nel

What, do you think, is the biggest challenge for somebody who embarks on doctoral or master’s degree studies?

Well, the answer to this question will probably be different for different people, depending on their circumstances, perceptions, value systems and culture.

If we were to combine all the possible challenges, we will probably arrive at “to understand”.

In my opinion that is the biggest challenge facing any post-graduate student.

Not only do you need to understand endless concepts, phenomena, theories and principles, you also must explain them in your thesis or dissertation.

And on doctoral level you will be required to define and explain new concepts, phenomena, theories and principles.

Data analysis is necessary for such elucidation.

I discuss the following data analysis methods in this article:

  1. Schema analysis.
  2. Situational analysis.
  3. Textual analysis.
  4. Thematic analysis.

Schema analysis

Schema analysis requires that you simplify cognitive processes to understand complex concepts and narrative information more readily. In this manner a narrative that might otherwise be difficult to understand because of the level of language used, cultural differences or any other reason, is made easier to understand for those who might find the language challenging or the cultural context alien.

Schema analysis might require additional explanation, interpretation and reconstruction of the message. An individual who grew up in the city might not know how to milk a cow and a farmer might not know how to obtain food from a street vending machine. 

Today schema analysis is also used in computer programming, where a schema is the organisation or structure for a database. A schema is developed by modelling data.  The purpose remains the same as when you would have done schema analysis manually – it is a process of rendering data more user-friendly.
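As a minimal illustration, the following sketch creates a small database schema with Python's built-in sqlite3 module. The table and column names are hypothetical; the point is that the schema describes the structure into which the data will later be organised.

```python
import sqlite3

# A hypothetical, minimal schema for interview data: the structure (tables,
# columns, relationships) is the schema; the rows added later are the data.
connection = sqlite3.connect(":memory:")
connection.executescript("""
    CREATE TABLE participants (
        participant_id INTEGER PRIMARY KEY,
        community      TEXT
    );
    CREATE TABLE interviews (
        interview_id   INTEGER PRIMARY KEY,
        participant_id INTEGER REFERENCES participants(participant_id),
        held_on        TEXT,
        transcript     TEXT
    );
""")
print("Schema created.")
```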

Situational analysis

As opposed to comparative analysis, situational analysis focuses more on non-human elements. It implies the analysis of the broad context or environment in which an event takes place. It can include an analysis of the state and condition of people and the ecosystem, including the identification of trends; the identification of major issues related to people and ecosystems that require attention; and an analysis of key stakeholders.

Textual analysis

Textual analysis, also called ‘content analysis’, is a data collection technique as well as a data analysis technique. It helps us to understand information on symbolic phenomena. It is used to investigate symbolic content such as words that appear in, for example, newspaper articles, comments on a blog, political speeches, etc. It is a qualitative technique in which the researcher attempts to describe the denotative meaning of content in an objective way.  

There are two levels of meaning, namely denotative and connotative meaning. The denotative meaning of a word refers to the literal meaning that you will find in a dictionary. This meaning is free from any form of interpretation. The connotative meaning of a word refers to the connotation that we ascribe to a particular word, based on the feeling or idea that the word invokes in us, which is often based on our prior experiences.

For example, the denotative meaning of the word ‘host’ is ‘one who lodges or entertains a stranger or guest at his or her house’. However, a woman who was abused by a host in whose guest house she stayed in her youth might conjure up in her mind a host as being a dangerous and sly human being who takes advantage of vulnerable people. The connotative meaning of ‘host’ is, therefore, largely the opposite of what the word is supposed to mean. In textual analysis we only work with the denotative meaning of words to make valid and reliable assumptions of the data within context.

You can only work with what was reported when doing qualitative research and you should not make any assumptions about the originator’s intended meaning. The context in which the information was used, however, also needs to be taken into consideration.

Textual analysis can be subjective because its interpretation is done by fallible people. It can include the analysis of freshly collected data as well as transcribed data. You should transcribe all the raw data that you collected from the written and verbal responses of participants during conversations, interviews, focus groups, meetings, etc. Electronically recorded interviews will need to be transcribed word for word to facilitate textual analysis.

Thematic analysis

Thematic analysis, also known as concept analysis or conceptual analysis, is essentially a coding regime by which data is reduced through the identification of certain themes. Thematic analysis uses deductive coding by grouping concepts under one of a prepared list of themes.

In thematic analysis you first need to familiarise yourself with the data before you can even select themes. You should list the themes that you would like to cover in your research when you do your literature review. After having listed themes, the next step is to generate codes. Codes serve as an important foundation for the structuring and arrangement of data by means of qualitative computer software. Although one might not call it coding, capturing information on cards is also a form of coding, albeit a simple one with limited usability.

You can also search for themes at this stage if you did not already do so as a first step. This is done by collating the codes that you identified into potential themes. Themes are, in effect, “headings” under which related or linked codes are grouped, or clustered. Most qualitative research computer software allows you to review and edit your codes and themes when necessary, which will inevitably happen as you progress with your research.
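As a minimal, purely illustrative sketch of this collation step (the themes, codes and quotations below are invented), the grouping of codes under prepared themes can be represented with nothing more elaborate than a Python dictionary:

```python
# Hypothetical themes drawn from a literature review, each grouping related codes.
themes = {
    "joining the gang": ["peer pressure", "family ties", "protection"],
    "life in the gang": ["initiation", "rank", "daily routine"],
    "leaving the gang": ["imprisonment", "faith", "relocation"],
}

# Coded fragments of (hypothetical) interview data: (code, quotation).
coded_fragments = [
    ("peer pressure", "My friends were all members already."),
    ("protection", "Alone you are a target; with them you are safe."),
    ("initiation", "They watch what you are willing to do."),
]

# Deductive step: collate each coded fragment under its prepared theme.
by_theme = {theme: [] for theme in themes}
for code, quotation in coded_fragments:
    for theme, codes in themes.items():
        if code in codes:
            by_theme[theme].append((code, quotation))

for theme, fragments in by_theme.items():
    print(theme, "->", [code for code, _ in fragments])
```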

Summary

Schema analysis:

  1. Requires that you simplify cognitive processes.
  2. Might require additional explanation, interpretation and reconstruction of selected data.
  3. Is also used in computer programming.

Situational analysis:

  1. Focuses on non-human elements.
  2. Analyses the broad context or environment for the research.
  3. Can include an analysis of the state and condition of people and the ecosystem.

Textual analysis:

  1. Combines data collection and analysis.
  2. Helps to understand information on symbolic phenomena.
  3. Attempts to objectively describe the denotative meaning of content.
  4. Takes the context in which information was used into consideration.
  5. Can be subjective.
  6. Can include the analysis of freshly collected as well as transcribed data.

Thematic analysis:

  1. Is a coding regime.
  2. Reduces data in terms of certain themes.
  3. Requires the identification of themes before coding can be done.

Close

That concludes my articles on data analysis and all the other concepts and theories behind doctoral and master’s degree studies.

In the remaining 14 articles I will focus more on the structure and layout of a thesis or dissertation.

Enjoy your studies.

Thank you.


ARTICLE 93: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 6 of 7 Parts

Written by Dr. Hannes Nel

In academic research we need to think inductively and deductively.

Inductive thinking is used to develop a new theory.

Therefore, it is what you would mostly use when writing a dissertation for a doctoral degree.

And you should use inductive thematic analysis to analyse the data that you collect.

Deductive thinking is used to test existing theory.

Therefore, it is what you would mostly use when writing a thesis for a master’s degree.

And you should use retrospective analysis to analyse the data that you collect.

Narrative analysis uses both inductive and deductive thinking more or less equally.

That is why both a dissertation and a thesis can be written in a narrative format.

I will discuss the nature of inductive thematic analysis, narrative analysis and retrospective analysis in this article.

Inductive thematic analysis (ITA)

Inductive thematic analysis draws on inductive analytic methods. It involves reading through textual data and identifying and coding emergent themes within the data.

ITA requires the generation of free-flow data. The most common data collection techniques associated with ITA are in-depth interviews and focus groups. You can also analyse notes from participant observation activities with ITA, but interview and focus group data are better suited to it. ITA is often used in qualitative inquiry, and non-numerical computer software, specifically designed for qualitative research, is often used to code and group data.

Paradigmatic approaches that fit well with ITA include post-structuralism, rationalism, symbolic interactionism, and transformative research.

Narrative analysis

The word “narrative” is generally associated with terms such as “tale” or “story”. Such stories are mostly told in the first person, although somebody else might also tell the story about a different character, that is, in the second or third person. The first person will apply if an interview is held. Every person has his or her own story, and you can design your research project to collect and analyse the stories of participants, for example when you study the lived experiences of a member of a gang on the Cape Flats.

There are different kinds of narrative research studies, ranging from personal experiences to oral historical narratives. Narrative analysis therefore refers to a variety of procedures for interpreting the narratives obtained through interviews, questionnaires by email or post, and perhaps even focus groups. Narrative analysis includes formal and structural means of analysis. One can, for example, relate the information obtained from a gang member in terms of the circumstances and reasons why he or she became a gang member, growth into gang activities, the consequences of criminal activities for his or her personal life, career, etc. One can also do a functional analysis looking at gang activities and customs (crime, gang fights, recruiting new members, punishment for transgression of gang rules, etc.).

In the analysis of narrative, you will track sequences, chronology, stories or processes in the data, keeping in mind that most narratives have a backwards and forwards nature that needs to be unravelled in the process of analysing the data.

Like many other data collection approaches, narrative analysis, also sometimes called ‘narrative inquiry’, is based on the study and textual representation of discourse, or the analysis of words. The type of discourse or text used in narrative analysis is, as the name indicates, narratives.

The sequence of events can be generated and recorded during the data collection process, such as through in-depth interviews or focus groups; they can be incidentally captured during participant observation; or, they can be embedded in written forms, including diaries, letters, the internet, or literary works. Narratives are analysed in numerous ways and narrative analysis can be used in research within a substantial variety of social sciences and academic fields, such as sociology, management, labour relations, literature, psychology, etc.

Narrative analysis can be used for a wide range of purposes. Some of the more common usages include formative research for a subsequent study, comparative analysis between groups, understanding social or historical phenomena, or diagnosing psychological or medical conditions. The underlying principle of a narrative inquiry is that narratives are the source of data used, and their analysis opens a gateway to better understanding of a given research topic.

In most narratives meaning is conveyed at different levels, for example an informational content level that is suitable for content analysis, a textual level that is suitable for hermeneutic or discourse analysis, etc.

Narrative analysis has its own methodology. In narrative analysis you will analyse data in search of narrative strings (commonalities running through and across texts), narrative threads (major emerging themes) and temporal/spatial themes (past, present and future contexts).

Retrospective analysis

Retrospective analysis is sometimes also called ‘retrospective studies’ or ‘trend analysis’ or ‘trend studies’. Retrospective analysis usually looks back in time to determine what kind of changes have taken place. For example, if you were to trace the development of computers over the past three decades, you would see some remarkable changes and improvements.

Retrospective analysis focuses on changes in the environment rather than in people, although changes in the fashions, cultures, habits, values, jobs, etc. are also often analysed. Each stage in a chronological development is represented by a sample and each sample is compared with the others against certain criteria.

Retrospective analysis examines recorded data to establish patterns of change that have already occurred in the hope of predicting what will probably happen in the future. Predicting the future, however, is not simple and often not accurate. The reason for this is that, as the environment changes, so do the variables that determine or govern the change. It therefore stands to reason that the further ahead you try to predict, the more inaccurate your predictions will probably be.
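As a rough sketch of such trend extrapolation (the yearly figures below are invented purely for illustration, and the caveats above about accuracy apply in full), a simple linear trend can be fitted to recorded observations and extended one step ahead:

```python
import numpy as np

# Hypothetical yearly observations of some measured characteristic of the
# environment being studied; the numbers are invented for illustration only.
years = np.array([2016, 2017, 2018, 2019, 2020])
values = np.array([52.0, 55.0, 59.0, 64.0, 70.0])

# Fit a straight-line trend to the recorded data.
slope, intercept = np.polyfit(years, values, deg=1)

# Extrapolate one year ahead; the further ahead, the less reliable.
prediction_2021 = slope * 2021 + intercept
print(f"Trend: {slope:.2f} per year; predicted 2021 value: {prediction_2021:.1f}")
```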

Retrospective analysis does not include the same respondents over time, so the possibility exists for variation in data due to the different respondents rather than the change in trends.

Summary

Inductive thematic analysis, or ITA:

  1. Draws on inductive analytical methods.
  2. Involves reading textual data.
  3. Identifies and codes emergent themes within the data.
  4. Requires the generation of free-flow data.
  5. Favours in-depth interviews and focus groups.
  6. Can also use participant observation.
  7. Fits well with qualitative research and critical or interpretive paradigms.

Narrative analysis:

  1. Analyses stories related by people.
  2. Ranges from personal experiences to historical narratives.
  3. Can use a wide range of data collection methods.
  4. Includes formal, structural and functional analysis.
  5. Tracks sequences, chronology, stories or processes in data.
  6. Is based on the textual representation of discourse, or the analysis of words.
  7. Is used by a substantial variety of social sciences.
  8. Can be used for a wide range of purposes.
  9. Conveys meaning on different levels.
  10. Has its own methodology.

Retrospective analysis:

  1. Looks back in time to identify change.
  2. Focuses on change in the environment.
  3. Represents and compares change in samples.
  4. Sometimes tries to predict the future.
  5. Does not include the same respondents over time.

Close 

It is a good idea to mention and explain in your thesis or dissertation how you analysed the data that you collected.

Ph. D. students will already do so in their research proposal.

That is why you need to know which data analysis methods are available and what they mean.

It will also help to ensure that you use the data that you collect efficiently and effectively to achieve the purpose of your research.

Enjoy your studies.

Thank you.


ARTICLE 92: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 5 of 7 Parts: Ethnographic analysis

Written by Dr. Hannes Nel

I wonder if ethnographic research was ever as vitally important as now.

The COVID-19 pandemic has dramatically changed the way people live, interact, socialise and survive.

No doubt, research on how to combat the virus is still the priority.

However, while numerous researchers are frantically working on finding an effective and safe vaccine, life goes on.

And it will take a long time before everybody is vaccinated anyway.

And we need to determine what impact unemployment, financial difficulties, famine, crime and the loss of loved ones have on our psychological health.

And we need to find ways in which to cope with the new reality.

I discuss ethnographic analysis in this article.

Ethnographic analysis typically addresses the issue of ‘what is going on’ between the participants in some segment (or segments) of the data, in great analytical depth and detail. Ethnographic studies aim to provide contextual, interpretive accounts of their participants’ social worlds.

Ethnographic analysis is rarely systematic or comprehensive: rather, it is selective and limited in scope. Its main advantage is to permit a detailed, partially interpretive, account of mundane features of the social world. This account may be limited to processes within the focus group itself, or (more typically) it may take the focus group discussion as offering a ‘window’ on participants’ lives.

Ethnographic analysis aims to ground interpretation in the particularities of the situation under study, and in participants’ (rather than analysts’) perspectives. Data are generally presented as accounts of social phenomena or social practices, substantiated by illustrative quotations from the focus group discussion. Key issues in ethnographic analysis are:

• how to select the material to present,

• how to give due weight to the specific context within which the material was generated, while retaining some sense of the group discussion as a whole, and

• how best to prioritise participants’ orientation in presenting an interpretive account.

Researchers using ethnographic research, such as observing people in their natural settings, often ask what role the researcher should adopt when conducting research: an overt and announced role or a covert and secret role? The most common roles that you as the researcher may play are complete participant, participant as observer, observer as participant and complete observer.

The complete participant seeks to engage fully in the activities of the group or organisation being researched. Thus, this role requires you to enter the setting covertly so that the participants will not be aware of your presence or at least not aware that you are doing research on them. By doing research covertly you are supposed to be able to gather more accurate information than if participants were aware of what you are doing – they should act more naturally than otherwise. The benefit of the covert approach is that you should gain better understanding of the interactions and meanings that are held important to those regularly involved in the group setting. Covert research can, however, expose you to the risk that your efforts might prove unsuccessful, especially if the participants find out that you were doing research on them without them being informed and without their agreement. Such research can also lead to damage to the participants in many ways, for example by embarrassing them, damaging their career prospects, damaging their personal relationships, etc.

You will act ethically and more safely if you, as the researcher, observe a group or individual and participate in their activities. In this case you formally make your presence and intentions known to the group being studied and you ask for their permission. This may involve a general announcement that you will be conducting research, or a specific introduction as the researcher when meeting the various people who will form part of the target group for the research.

This approach requires you to develop sufficient rapport with the participants to gain their support and co-operation. You will need to explain to them why the research is important and how they will benefit from it. The possibility exists that you may become emotionally involved in the activities and challenges of the target group, which might have a negative effect on your ability to interpret information objectively.

The researcher as observer only is, as we already discussed, an etic approach. Here you will distance yourself from the idea of participation but still do your research openly and in agreement with the target group. Such transparent research often involves visiting just one site or a setting that is offered only once. It will probably be necessary to do relatively formal observation. The risk exists that you may fail to adequately appreciate certain informal norms, roles, or relationships and that the group might not trust you and your intentions, which is why the period of observation should not be too long.

The complete and unannounced observer tends to be a covert role. In this case, you typically remain in the setting for a short period of time but are a passive observer to the flow of activities and interactions.

Summary

Ethnographic analysis:

  1. Analyses events and phenomena in a social context.
  2. Is selective and limited in scope.
  3. Delivers a detailed interpretation of commonplace features of the social world.
  4. Focuses on specific aspects of the target group’s lives.

Key issues of ethnographic analysis are:

  1. How the data to be analysed is selected.
  2. The context on which the collection and analysis focuses.
  3. Interpretation and description of the findings by focusing on the target group’s orientation.

Observation is often used for the collection of data.

An emic or etic approach can be followed.

An etic approach is often also executed covertly.

Covert collection of data can promote accuracy because the target group for the research will probably behave naturally if they do not know that they are being observed.

A covert approach can be rendered inadvisable because of ethical considerations.

An overt approach requires gaining the trust of the target group for the research.

Close

You probably noticed that it is nearly impossible to discuss data collection and data analysis separately.

Besides, ethnography is a research method, and ethnographic data collection and analysis are part of the method.

Natural scientists will probably only use it to trace the ontology of scientific concepts or phenomena.

And then the data will be historical in nature.

Enjoy your studies.

Thank you.


ARTICLE 91: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 4 of 7 Parts: Elementary Analysis

Written by Dr. Hannes Nel

Most social research requires the analysis of several variables simultaneously (called “multivariate analysis”); the simultaneous association of age, education and gender, for example, would call for multivariate analysis. Specific techniques for conducting a multivariate analysis include factor analysis, multiple correlation, regression analysis and path analysis. All of these techniques are based on the preparation and interpretation of comparative tables and graphs, so you should practise doing this if you do not already know how.

These are largely quantitative techniques. Fortunately, the statistical calculations are done for you by the computer, so just be aware of the definitions.

Factor analysis. Factor analysis is a statistical procedure used to uncover relationships among many variables. This allows numerous inter-correlated variables to be condensed into fewer dimensions, called factors. It is possible, for example, that variations in three or four observed variables mainly reflect the variations in a single unobserved variable, or in a reduced number of unobserved variables. Clearly this type of analysis is mostly numerical in nature. Factors are analysed inductively to determine trends, relationships, correlations, causes of phenomena, etc. Factor analysis searches for variations in response to variables that are difficult to observe and that are suspected to have an influence on events or phenomena.
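As an indicative sketch only (the data below is randomly generated purely for illustration), scikit-learn’s FactorAnalysis can be used to condense several correlated observed variables into a single underlying factor:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative only: generate three observed variables that all depend on one
# hidden (latent) variable, plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
observed = np.hstack([
    2.0 * latent + rng.normal(scale=0.3, size=(200, 1)),
    -1.5 * latent + rng.normal(scale=0.3, size=(200, 1)),
    0.8 * latent + rng.normal(scale=0.3, size=(200, 1)),
])

# Ask for a single factor; the loadings show how strongly each observed
# variable reflects the unobserved factor.
fa = FactorAnalysis(n_components=1).fit(observed)
print(fa.components_)  # factor loadings, one value per observed variable
```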

Multiple correlation. Multiple correlation is a statistical technique that predicts values of one variable based on two or more other variables. For example, what will happen to the incidence of HIV/AIDS (the variable that we are doing research on) in a particular area if unemployment increases (variable 1), famine breaks out (variable 2) and the incidence of TB increases (variable 3)?

Multiple correlation is a linear relationship among more than two variables. It is measured by the coefficient of multiple determination, which is a measure of the fit of a linear regression. The coefficient falls somewhere between zero and one (assuming a constant term has been included in the regression); a higher value indicates a stronger relationship between the variables, with a value of one indicating a perfect relationship and a value of zero indicating no relationship at all between the independent variables collectively and the dependent variable.
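A minimal sketch of computing the coefficient of multiple determination (R²) follows; the three independent variables stand in for the hypothetical unemployment, famine and TB variables above, and the figures are simulated purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented illustrative data: three independent variables (e.g. unemployment,
# a famine index, TB incidence) and one dependent variable we want to predict.
rng = np.random.default_rng(1)
X = rng.random((100, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.05, size=100)

model = LinearRegression().fit(X, y)

# score() returns R^2: 0 means no linear relationship, 1 a perfect fit.
print(f"Coefficient of multiple determination R^2 = {model.score(X, y):.3f}")
```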

Path analysis. Path analysis can be a statistical method of finding cause/effect relationships, a method for finding the trail that leads users to websites, or an operations research technique. There is also “critical path analysis”, which is mostly used in project management and is a method by means of which the activities in a project are planned in a logical sequence of events to ensure that the project is completed in an efficient and effective manner. Here we are concerned with path analysis as a statistical method for finding cause/effect relationships.

Path analysis is a method of decomposing correlations into different components so that effects can be interpreted (e.g. how does parental education influence children’s income when they are adults?). Path analysis is closely related to multiple regression; you might say that regression is a special case of path analysis. It is a “causal model” because it allows us to test theoretical propositions about cause and effect without manipulating variables.
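Treating path analysis as a sequence of regressions, the following sketch (with simulated, purely illustrative data) separates a direct effect of parental education on adult income from an indirect effect that runs via the child’s own education; the variable names and coefficients are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated, illustrative data for the hypothetical causal chain:
# parental education -> child's education -> child's adult income,
# with a direct path from parental education to income as well.
rng = np.random.default_rng(2)
parent_edu = rng.normal(size=(300, 1))
child_edu = 0.7 * parent_edu + rng.normal(scale=0.5, size=(300, 1))
income = 0.4 * parent_edu + 0.5 * child_edu + rng.normal(scale=0.5, size=(300, 1))

# Path a: parental education -> child's education
a = LinearRegression().fit(parent_edu, child_edu).coef_[0, 0]

# Paths b and c': child's education and parental education -> income
X = np.hstack([child_edu, parent_edu])
model = LinearRegression().fit(X, income)
b, c_direct = model.coef_[0, 0], model.coef_[0, 1]

print(f"direct effect:   {c_direct:.2f}")
print(f"indirect effect: {a * b:.2f}  (via child's education)")
```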

Regression analysis. Regression analysis can be used to determine which factors influence events, phenomena, or relationships.

Regression analysis includes a variety of techniques for modelling and analysing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. If, for example, you wish to determine the effect of tax, legislation and education on levels of employment, levels of employment will be the dependent variable while tax, legislation and education will be the independent variables. More specifically, regression analysis helps one understand how to maintain control over a dependent variable. In the level of employment example, you might wish to know what should be done in terms of tax, legislation and education to improve employment or at least to maintain a healthy level of employment. In this example it is of interest to characterise the variation of the dependent variable around the regression function, which can be described by a probability distribution (how much the level of employment would change and in what direction if all, some or one of the independent variables change by a particular value).

Regression analysis typically estimates the conditional expectation of the dependent variable given the independent variables – that is, the average value of the dependent variable when the independent variables are held fixed. Seen from this perspective, the example of employment levels would mean investigating what would happen if tax, legislation and education remain unchanged.

Regression analysis is widely used for prediction and forecasting, although this should be done with circumspection. Regression analysis is also used to understand which of the independent variables are related to the dependent variable and to explore the forms of these relationships. Regression analysis presupposes causal relationships between the independent and dependent variables, although investigation can also show that such relations do not exist. An example of using regression analysis (in this case also called “multiple regression”) is to determine which factors from colour, paper type, number of advertisements and content (independent variables) have the biggest effect on the number of magazines sold (dependent variable).
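A minimal sketch of the magazine-sales example as a multiple regression in scikit-learn; the figures are invented and the variable names are merely illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data for the example above: each row is one magazine issue.
# Independent variables: colour pages, paper quality score, number of adverts.
X = np.array([
    [10, 3, 25],
    [14, 4, 30],
    [ 8, 2, 20],
    [16, 5, 35],
    [12, 3, 28],
    [ 9, 2, 22],
])
# Dependent variable: copies sold (invented figures).
y = np.array([12000, 15500, 10000, 17800, 13900, 10800])

model = LinearRegression().fit(X, y)

# Each coefficient estimates how sales change when that variable increases
# by one unit while the other variables are held fixed.
for name, coef in zip(["colour pages", "paper quality", "adverts"], model.coef_):
    print(f"{name}: {coef:,.0f} copies per unit")
```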

Summary

Multivariate analysis can be used for the analysis of several variables simultaneously.

Techniques that can be used for conducting multivariate analysis include factor analysis, multiple correlation, path analysis and regression analysis.

Factor analysis is used to uncover relationships among many variables.

Factors are analysed inductively to determine trends, relationships, correlations, causes of phenomena, etc.

Multiple correlation predicts values of one variable based on two or more other variables.

Multiple correlation is a linear relationship among more than two variables.

Path analysis seeks cause/effect relationships.

It can also be used to find data or to manage projects.

Regression analysis can be used to determine which factors influence events, phenomena or relationships.

It includes a variety of techniques for modelling and analysing several variables when the focus is on the relationship between a dependent variable and one or more independent variables.

Regression analysis helps us to understand how to maintain control over a dependent variable.

Close

Statistics are a wonderfully flexible way in which to analyse data.

Dedicated computer software can do the calculations for us and show us the numbers in tabular and graphic format.

All we need to do is analyse the numbers or graphs.

It is mostly quite easy to interpret visual material.

And you will impress your study leader, lecturer and other stakeholders in your research if you use such analysis techniques.

Most importantly, it will be so much easier and faster to come to conclusions and to derive valid and accurate findings from your conclusions.

Enjoy your studies.

Thank you.


ARTICLE 90: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Part 3 of 7 Parts

Written by Dr. Hannes Nel

I discuss conversation and discourse analysis as data collection methods in this article.

Conversation and discourse analysis

Both conversation and discourse analysis approaches stem from the ethnomethodological tradition, which is the study of the ways in which people produce recognisable social orders and processes. Both of these approaches tend to examine text as an “object of analysis”. Discourse analysis is a rather comprehensive process of evaluating the structures of conversations, negotiations and other forms of discourse as well as how people interact when communicating with one another. The sharing of meaning through discourse always takes place in a particular context so the social construction of such discourse can also be analysed.

Conversation and discourse analysis both study “naturally” occurring language, as opposed to text resulting from more “artificial” contexts, such as formal interviews. The purpose is to identify social and cultural meanings and phenomena from the discourse studied, which is why the process is suitable for almost any culture-related research.

The name “discourse” indicates that language is what is analysed, while language is also the medium used to do the research. It can be a complex process and is often better suited to those more interested in theorising about life than those who want to research actual life events.

Discourse analysis focuses on the meaning of the spoken and written word, and the reasons why it is the way it is. Discourse refers to expressing oneself using words and to the variety and flexibility of language in the way language is used in ordinary interaction.

When doing research, we often look for answers in places or sources that we can easily reach, when the real answers might lie somewhere else. Discourse analysis is one method which allows us to move beyond the obvious to the less obvious, but often much more relevant, sources of data.

Discourse analysis analyses what people say rather than merely depicting facts. Discourses are ever-present ways of knowing, valuing and experiencing the world. Different people have different discourses. Gangs on the Cape Flats, for example, use words and sentences that the ordinary man on the street will find difficult to understand. Discourses are used in everyday texts for building power and knowledge, for regulation and normalisation, and for the development of new knowledge and power relations.

As a language-based analytical process, discourse analysis is concerned with studying and analysing written texts and spoken words to reveal any possible relationships between language and social interaction. Language is analysed as a possible source of power, dominance, inequality and bias. Processes that may be the subject of research include how language is initiated, maintained, reproduced and transformed within specific social, economic, political and historical contexts. A wide variety of relationships and context can be investigated and analysed, including ways in which the dominant forces in society construct versions of reality that favour their interests, and to uncover the ideological assumptions that are hidden in the words of our written text or oral speech in order to resist, overcome or even capitalise on various forms of power. Criminals in a correctional facility will, for example, be included or excluded from gangs on account of certain ways of speech and codes that only they know.

Discourse analysis collects, transcribes and analyses ordinary talk and everyday explanations for social actions and interaction. It emphasises the use of language as a way to construct social reality. Yin[1] defines discourse analysis as follows:

“Discourse analysis focuses on explicit theory formation and analysis of the relationships between the structures of text, talk, language use, verbal interaction or communication, on the one hand, and societal, political, or cultural micro- and macro-structures and cognitive social representations, on the other hand.”

Discourse analysis examines a discourse by looking at patterns of the language used in a communication exchange as well as the social and cultural contexts in which these communications occur. It can include counting terms, words, and themes. The relationship between a given communication exchange and its social context requires an appreciation and understanding of culturally specific ways of speaking and writing and ways of organising thoughts.
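As a minimal illustration of the “counting terms” step mentioned above (the text fragment and the terms of interest are hypothetical), a simple word count can be done as follows:

```python
from collections import Counter
import re

# Hypothetical fragment of a political speech and hypothetical terms of interest.
speech = """We will create jobs. Jobs for the youth, jobs for everyone.
Our plan creates opportunity, and opportunity creates hope."""

terms_of_interest = {"jobs", "opportunity", "hope", "plan"}

words = re.findall(r"[a-z']+", speech.lower())      # simple tokenisation
counts = Counter(w for w in words if w in terms_of_interest)

for term, count in counts.most_common():
    print(f"{term}: {count}")
# e.g. jobs: 3, opportunity: 2, ...
```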

Oral communication always fits into a context which lends meaning to it. It always has a double structure, namely the propositional content (ontology) and the performatory content (epistemological meaning). Oral communication can, for example, be used with good effect to understand human behaviour, thought processes and points of view.

The result of discourse analysis is a form of psychological natural history of the phenomena in which you are interested. To be of value for research purposes oral communication must be legitimate, true, justified, sincere and understandable. It should also be coherent in organisation and content and enable people to construct meaning in social context. Participants in oral communication should do so voluntarily and enjoy equal opportunity to speak.

Discourse analysis is a form of critical theory. You, as the researcher, need to ensure that the discourse and the participants in the discussion meet the requirements for such interaction. It will also be your duty to eliminate or at least reduce any forces or interventions that may disrupt the communication. Such discourse can also be taken further by having other participants in the research process elaborate and further analyse the results of initial communications. For this purpose, you need to be highly sensitive to the nuance of language.

Any qualitative research allows you to make use of coding and structuring of data by means of dedicated research software, such as ATLAS.ti or other CAQDAS (computer-assisted qualitative data analysis software) packages. This will enable you to discover patterns and broad areas of salient argumentation, intentions, functions and consequences of the discourse. By seeking alternative explanations and examining the degree of variability in the discourse, it is possible to rule out rival interpretations and arrive at a fair and accurate comprehension of what took place and what it meant.

Discourse analysis can also be used to analyse and interpret written communication on condition that the written communication is a written version of communication relevant to the topic being researched. This requires a careful reading and interpretation of textual material.

Discourse analysis has been criticised for its lack of system, its emphasis on the linguistic construction of a social reality, and the way the analysis shifts attention away from what is being analysed and towards the analysis itself. Discourse is in actual fact a text in itself, with the result that it can also be analysed for meaning and inferences, which might lead to the original meaning of the oral communication being eroded at the expense of accuracy, authenticity, validity and relevance.

Conversation analysis is arguably the most immediate and most frequently used form of discourse analysis in the sense that it includes any face-to-face social interaction. Social interaction inevitably includes contact with other people, and contact with other people mostly includes communication. People construct meaning through speech and text, and the object of analysis typically goes beyond individual sentences. Data on conversations can be collected through direct communication, which needs to be recorded by taking notes or by making a video or audio recording.

Conversation analysis is the study of talk in interaction and generally attempts to describe the orderliness, structure and sequential patterns of interaction, whether this is institutional interaction or casual conversation. Conversation analysis is a way of analysing data and has its own methodological features. It studies the social organisation of two-way conversation through a detailed inspection of voice recordings and transcriptions made from such recordings, and it relies much more on the patterns, structures and language used in speech and the written word than other forms of data analysis.

Conversation analysis assumes that it is fundamentally through interaction that participants build social context. The notion of talk as action is central to its framework. Within a focus group we can see how people tell stories, joke, agree, debate, argue, challenge or attempt to persuade. We can see how they present particular ‘versions’ of themselves and others for particular interactional purposes, for example to impress, flatter, tease, ridicule, complain, criticise or condone.

Participants build the context of their talk in and through the talk while talking. The talk itself, in its interactional context, provides the primary data for analysis. Further, it is possible to harness analytical resources intrinsic to the data: by focusing on participants’ own understanding of the interaction as displayed directly in their talk, through the conversational practices they use. In this way, a conversation analytic approach prioritises the participants’ (rather than the analysts’) analysis of the interaction.

Naturally occurring data, i.e. data produced independently of the researcher, encompass a range of institutional contexts (for example classrooms, courtrooms, doctors’ surgeries, etc.), in which talk has been shown both to follow the conventions of ‘every-day’ conversation and systematically to depart from them.

Conversation analysis tends to be more granular than classical discourse analysis, looking at elements such as grammatical structures and concentrating on smaller units of text, such as phrases and sentences. An example of conversation analysis is where a researcher “eavesdrops” on the way in which different convicted criminals talk to other inmates to find a pattern in their cognitive thinking processes.

While conversation and discourse analysis are similar in several ways, there are some key differences. Discourse analysis is generally broader in what it studies, utilising pretty much any naturally occurring text, including written texts, lectures, documents, etc. An example of discourse analysis would be if a researcher were to go through transcripts or listen in on group discussions between convicted serial murderers to examine their patterns of reasoning.

The implications of discourse and conversation analysis for data collection and sampling are twofold. The first pertains to sample sizes and the amount of time and effort that goes into text analysis at such a fine level of detail, relative to thematic analysis. In a standard thematic analysis, the item of analysis may be a few sentences of text, and the analytic action would be to identify themes within that text segment. In contrast, linguistic-oriented approaches, such as conversation and discourse analysis, require intricate dissection of words, phrases, sentences and interaction among speakers. In some cases, tonal inflection is included in the analysis. Linguistic analysis, be it of transcripts of conversations, interviews or any other form of communication, often involves an abundance of material that must be examined in fine detail. This requires substantial time and effort, with the result that not too many samples can be processed in a reasonable time.

The data source inevitably determines the type and volume of analysis that can be done. Both discourse analysis and conversation analysis are interested in naturally occurring language. In-depth interviews and focus groups can be used to collect data, although they are not ideal if it is important to analyse social communication. Analysis of such data often requires reading and rereading material to identify key themes and other wanted information which would lead to meanings relevant to the purpose of the research. 

Existing documents, for example written statements made by convicted criminals, are excellent sources of data for discourse analysis as well as conversation analysis. In terms of field research, participant observation is ideal for capturing “naturally occurring” discourse. Minutes of meetings, written statements, transcripts of discussions, etc. can be used for this purpose. During participant observation, one can also record naturally occurring conversations between two or more people belonging to the target population for the study, for example two surviving victims of attacks by serial killers, two security guards who had experiences with attempted serial killings, etc. In many cases legal implications might make listening in to conversations difficult to do without running the risk of encountering legal problems.

Text can be any documentation, including personal reflections, books, official documents and many more. In action research this is enhanced with personal experiences, which can also be put on paper so that they often become historical data. In action research the research is given a more relevant cultural “flavour” by engaging participants from the community directly in the data collection and analysis. The emphasis is on open relationships with participants so that they have a direct say in how data is collected and interpreted. If participants decide that technical procedures such as sampling or skilled tasks such as interviewing should be part of the data collection and analysis process, they could draw on expert advice and training supplied by researchers.

Paradigmatic approaches that fit well with discourse and conversation analysis include constructivism, hermeneutics, interpretivism, critical theory, post-structuralism and ethnomethodology.

Summary

Discourse analysis:

  1. Evaluates the structures of conversations, negotiations and other forms of communication.
  2. Is dependent on context.
  3. Analyses and uses language.
  4. Focuses on the meaning of the spoken and written word.
  5. Allows the researcher to move from the obvious to the less obvious.
  6. Is concerned with studying and analysing written texts and spoken words to reveal the relationships between language and social interaction.
  7. Examines a discourse by looking at patterns of the language used.
  8. Delivers a form of psychological natural history of the phenomena being investigated.
  9. Is a form of critical theory.
  10. Is criticised for its lack of system, emphasis on the linguistic construction of social reality and the lack of focus on the research problem.

Conversation analysis:

  1. Is a form of discourse analysis.
  2. Includes face-to-face social interaction.
  3. Attempts to describe the orderliness, structure and sequential patterns of interaction.
  4. Has its own methodological features.
  5. Assumes that it is fundamentally through interaction that participants build social context.

Discourse and conversation analysis:

  1. Stem from the ethnomethodological tradition.
  2. Examine text as the object of analysis.
  3. Study naturally occurring language.
  4. Identify social and cultural meanings and phenomena.
  5. Require intricate dissection of words, phrases, sentences and interaction between people.

Close

The differences between discourse and conversation analysis are subtle.

Discourse analysis is broader than conversation analysis in the range of what it analyses.

Conversation analysis, on the other hand, tends to go into finer detail than discourse analysis.

Enjoy your studies.

Thank you.


[1] 2016: 69.
