ARTICLE 95: Research Methods for Ph. D. and Master’s Degree Studies: Methods for Organising and Analysing Data Part 1 of 2 Parts

Written by Dr. Hannes Nel

Data needs to be organised before it can be analysed.

Depending on whether a qualitative or quantitative approach is followed, the data needs to be arranged in a logical sequence or quantified.

This can be done by quantifying, sequencing, coding or memoing the data.

I discuss quantifying, sequencing and coding data in this article.

I will discuss memoing data in my second video on methods for organising and analysing data.

Quantifying data. Most data analysis today is conducted with computers, ranging from large mainframe computers to small personal laptops. Many computer programs are dedicated to analysing social science data, and it is worth obtaining and learning to use such software if you need to write a thesis or dissertation, even if you do not follow a quantitative research methodology exclusively, because you might need to interpret some statistics or use some quantitative methods to enhance, support or corroborate your qualitative findings. However, if your research is largely qualitative, you will probably not need much more than office software.

Almost all research software requires some form of coding. This can differ substantially from one software program to the next, so you will need to find out exactly how it works even before you purchase the software. Your study leader will probably know which software will be the most suitable for your research and give you advice on this. You will only quantify data if statistical analysis is necessary, so do not do this unless you know that you will need it in your thesis or dissertation.

Many people are intimidated by empirical research because they feel uncomfortable with mathematics and statistics. And indeed, many research reports are filled with unspecified computations. The role of statistics in research is quite important, but unless you write an assignment or thesis on statistics or mathematics, you will not be assessed on your statistical or mathematical proficiency. That is why most universities, public and private, offer statistical support services, so use them. There is also nothing wrong with purchasing dedicated software to do your statistical analysis, although you might need to do a course on the software before you can use it properly.
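
As a rough illustration of what quantifying data can look like in practice, the sketch below converts invented Likert-scale survey answers into numbers so that simple descriptive statistics can be calculated. It is written in Python with the pandas library; the column names, responses and scale are assumptions made purely for the example.

```python
import pandas as pd

# Hypothetical survey responses (invented for illustration only)
responses = pd.DataFrame({
    "participant": ["P1", "P2", "P3", "P4", "P5"],
    "satisfaction": ["Agree", "Strongly agree", "Neutral", "Disagree", "Agree"],
})

# Quantify: map the Likert categories onto an ordinal numeric scale
likert_scale = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}
responses["satisfaction_score"] = responses["satisfaction"].map(likert_scale)

# Basic descriptive statistics on the quantified variable
print(responses["satisfaction_score"].describe())
```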

Sequencing the data. Many researchers are of the opinion that organising the data in a specific sequence offers the clearest available picture of the logic of causal analysis in research. This is called the elaboration model. The method portrays the logical process of scientific analysis, especially through the use of contingency tables.
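
By way of a hedged illustration of how contingency tables support this kind of reasoning, the sketch below (Python with pandas, using invented data) builds a simple cross-tabulation and then the same table partialled by a control variable, which is the basic move of the elaboration model. The variable names are assumptions for the example only.

```python
import pandas as pd

# Invented data for illustration only
data = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age_group": ["young", "old", "young", "old", "young", "young", "old", "old"],
    "support": ["yes", "no", "yes", "no", "yes", "yes", "no", "no"],
})

# Zero-order table: gender by support
print(pd.crosstab(data["gender"], data["support"]))

# Elaboration: the same relationship, partialled by the control variable age_group
print(pd.crosstab([data["age_group"], data["gender"]], data["support"]))
```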

When collecting material for interpretive analysis, you experience events, or the things people say, in a linear, chronological order. When you then immerse yourself in field notes or transcripts, the material is again viewed in a linear sequence. This sequence can be broken down by inducing themes and coding concepts so that events or remarks that were far apart in a document, or perhaps even in different documents, are brought close together. This gives you a fresh view of the data and allows you to carefully compare sections of text that appear to belong together. At this stage, you are likely to find that there are all sorts of ways in which extracts that you grouped together under a single theme differ, or that all kinds of sub-issues and themes come to light.

Exploring themes more closely in this way is called elaboration. The purpose is to capture the finer nuances of meaning not captured by your original, possibly crude, coding system. This is also an opportunity to revise the coding system – either in small ways or drastically.  If you use software it might even be necessary to start your coding all over again. This can be extremely time-consuming, but at least every time you start over you end up with a much better structured research report.  

Coding. In most qualitative research, the original text is a set of field notes, data obtained through literature study, interviews, and focus groups. One of the first steps that you will need to take before studying and analysing data is to code the information. You can use cards for this, but dedicated computer software can save you time, effort and costs. Codes are typically short pieces of text referencing other pieces of text, graphical, audio, or video data. From a methodological standpoint, codes serve a variety of purposes. They capture meaning in the data. They also serve as tools for finding specific occurrences in the data that cannot be found by simple text-based search techniques. Codes also help you organise and structure the data that you collected.

Their main purpose is to classify many textual or other data units in such a manner that data that belongs together can be grouped for easy analysis and structuring. One can, perhaps, think of coding as “indexing” your data. You can also see it as a way of marking keywords so that you can find, retrieve and group them more easily at a later stage. A code should be kept short rather than long-winded.
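
To make the “indexing” idea concrete, here is a minimal sketch, in Python, of how codes might be stored as references to segments of text. The documents, codes and positions are invented, and real packages such as ATLAS.ti store far richer information; this is only meant to show the principle.

```python
from collections import defaultdict

# Primary data: invented interview fragments keyed by document name
documents = {
    "interview_01": "I joined the gang because my friends were already members.",
    "interview_02": "Unemployment left me with no other way to earn a living.",
}

# A code is simply a short label pointing at (document, start, end) positions
codebook = defaultdict(list)
codebook["peer pressure"].append(("interview_01", 0, 58))
codebook["unemployment"].append(("interview_02", 0, 57))

# Retrieval: list every quotation linked to a given code
for doc, start, end in codebook["unemployment"]:
    print(doc, "->", documents[doc][start:end])
```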

Codes can also be used to classify data at different levels of abstraction, to group sets of related information units together for the purpose of comparison. This is what you would often use to consider and compare related arguments to make conclusions that can be the motivation for new knowledge. Dedicated computer software does not create new knowledge; it only helps you as the researcher to structure existing knowledge and experiences in such a manner that it will be easier for you to think creatively, that is to create new knowledge.

Formal coding will be necessary if you make use of dedicated research software. Even if you do not use research software, you will probably need a method of coding to arrange your data according to the structure of your thesis or dissertation. Your original data will probably include additional information, such as the time, date and place where the data was collected.

It is also a purpose of coding data to move to a higher conceptual level. The codes will inevitably represent the meanings that you infer from the original data, thereby moving closer towards the solution of your problem statement, or confirmation or rejection of your null hypothesis. By coding data, you will, of course, rearrange the data that you collected under different headings representing steps in the research process.

Five coding procedures are popularly used: open coding, in vivo coding, coding by list, quick coding and free coding.

With most qualitative research software, you can create codes first and then link them to sections in your data. Creating new codes is called open coding. The nature of the initial codes, which can be referred to as Level 1 codes or open codes, can vary and might change as you progress with your research. You should give a name to each new code that you open, and you can usually create one or more codes in a single step. These codes can stick closely to the original data, perhaps even reusing the exact words in the original data. Such codes can be deduced from research questions. In vivo coding is mostly used for this purpose.

In vivo coding means creating a code for selected text as and when you come across text, or just a word in the text, that can and should serve as a code. This would normally be a word or short piece of text that would probably appear in other pieces of data that should be linked and grouped with the data in which you identified the code.

If you know where you are going with your study, you will probably create codes first (up front), then link them to sections of data. This would be coding by list. Coding by list allows you to select existing codes from a code list that you prepared in advance. You would typically select one or more codes associated with the current data selection.

You can also create codes as you work through your data, which would then be quick coding. In the case of quick coding, you will continue with the selected code that you are working with. This is an efficient method for the consecutive coding of segments using the most recently used code.

You can create codes that have not yet been used for coding or creating networks. Such codes are called free codes and they are a form of quick coding, although they can be prepared in advance. The reasons why you would create free codes can be:

  1. To prepare a stock of predefined codes in the framework of a given theory. This is especially useful in the context of teamwork when creating a base project.
  2. To code in a “top-down” (or deductive) way with all necessary concepts already at hand. This complements the “bottom-up” (or inductive) open coding stage, in which concepts emerge from the data.
  3. To create codes that come to mind during normal coding work and that cannot be applied to the current segment but will be useful later.
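
To make the five procedures more tangible, here is a small sketch in Python of a toy “code manager” that supports each of them. It reflects my own assumptions about how such operations could be modelled and is not how any particular package implements them.

```python
class CodeManager:
    """A toy code manager illustrating the five coding procedures."""

    def __init__(self):
        self.codes = {}          # code name -> list of coded segments
        self.last_code = None    # remembered for quick coding

    def open_coding(self, name, segment):
        # Open coding: create a new code and link it to a segment
        self.codes.setdefault(name, []).append(segment)
        self.last_code = name

    def in_vivo(self, segment, word):
        # In vivo coding: the code is a word taken from the text itself
        assert word in segment
        self.open_coding(word, segment)

    def code_by_list(self, names, segment):
        # Coding by list: attach one or more pre-existing codes to a segment
        for name in names:
            self.codes.setdefault(name, []).append(segment)

    def quick_code(self, segment):
        # Quick coding: reuse the most recently used code
        self.codes[self.last_code].append(segment)

    def free_code(self, name):
        # Free coding: create a code without linking it to any segment yet
        self.codes.setdefault(name, [])


cm = CodeManager()
cm.open_coding("poverty", "We had nothing to eat at home.")
cm.quick_code("My father lost his job that year.")
cm.in_vivo("They called it 'the life'.", "the life")
cm.free_code("resilience")   # prepared up front for later use
print(cm.codes)
```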

It will be easier to code data if you already have a good idea of what you are trying to achieve with your research. Sometimes the data will actually “steer” you towards codes that you did not even think of in the beginning. This is typical of a grounded theory approach, although you should always keep an open mind about your research, regardless of which approach you follow. Coding also helps you to develop a schematic diagram of the structure of your thesis or dissertation. This can be based on your initial study proposal. A mindmap can, for example, be used to structure your research process and to identify initial codes to start with.

A code may contain more than a single word but should be concise. There should be a comment area on your screen that you can use to write a definition for each code, if you need one. As you progress in doing the first level coding, you may start to understand how your data might relate to broader conceptual issues. Some of your field experiences may in fact be sufficiently similar so that you might be able to group different coded data together on a higher conceptual level. Your coding has then proceeded to a higher set of codes, referred to as Level 2 or category codes.

After a code has been created, it appears as a new entry in several locations (drop-down list, code manager). In this respect the following are important to remember (the sketch after the list shows how the first two can be counted):

  1. Groundedness: Groundedness refers to the number of quotations associated with the code. Large numbers indicate strong evidence already found for this code.
  2. Density: The number of codes connected to this code is indicated as the density. Large numbers can be interpreted as a high degree of theoretical density.
  3. Comment: The tilde character “~” can, as an example, be used to flag commented codes. It is not used for codes only but for all commented objects.
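
As a rough illustration (with invented data, not the output of any specific package), groundedness and density are simply counts over the coding structure of a project:

```python
# A toy representation of a coded project: each code lists its quotations
# and the other codes it is linked to. Invented data for illustration only.
project = {
    "poverty":      {"quotations": ["q1", "q4", "q7"], "linked_codes": ["unemployment"]},
    "unemployment": {"quotations": ["q2", "q3"],       "linked_codes": ["poverty", "crime"]},
    "crime":        {"quotations": ["q5"],             "linked_codes": ["unemployment"]},
}

for name, code in project.items():
    groundedness = len(code["quotations"])    # how many quotations carry the code
    density = len(code["linked_codes"])       # how many other codes it is connected to
    print(f"{name}: groundedness={groundedness}, density={density}")
```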

It is not only text that can be coded. You can also code graphic documents, audio and video material. There are many other ways in which codes can be utilised, for example they can be sorted, modified, renamed, deleted, merged and of course reported.

Axial coding. Axial coding is the process of putting data back together after it has been restructured by means of open coding. Open coding allows you to select data that belong together (under a certain code or sub-code) from a variety of sources containing the original or primary data. Categories of data are thus systematically developed and linked with subcategories. You can then develop a new narrative through a process of reconstruction. The new narrative might apply to a different context and should be linked to the purpose of your research.

The articulation of selected data can typically relate to a condition, strategy or consequences. Data relating to a condition or strategy should address conditions that lead to the achievement of the purpose of the study. The purpose of the study will always be to solve a problem statement or question or to prove or disprove a null hypothesis. Consequential data include all outcomes of action or interaction.

Selective coding. Selective coding refers to the process of selecting a core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development. Categories are, thus, integrated and refined. The core category would be the central phenomenon to which all the other categories are linked. To use a romantic example, in a novel you will identify the plot first, then the storyline, which you should analyse to identify the elements of the storyline that relate to the plot. From this you should be able to deduce lessons learned or a moral for the story.

Summary

Data is mostly organised by making use of dedicated computer programmes.

Most such computer programmes require some form of coding.

Data can be sequenced by following an elaboration model.

Contingency tables are often used to portray the logical process of scientific analysis.

Data is often analysed in a linear, chronological order.

Codes are typically short pieces of text referencing other pieces of text, graphical, audio or video data.

Codes:

  1. Capture meaning.
  2. Serve as tools for finding specific occurrences in the data.
  3. Help you to organise and structure the data.
  4. Classify textual or other data units into related groups and at different levels of abstraction.

Dedicated computer software does not create new knowledge.

Five coding procedures are popularly used.

They are open coding, in vivo coding, coding by list, quick coding and free coding.

Open coding means creating new codes.

In vivo coding means creating a code for selected text as and when you come across text, or just a word in the text, that can and should serve as a code.

Coding by list is used when you know where you are going with your study so that you can create the codes even before collecting data.

Quick coding means creating codes as you work through your data.

Free codes are codes that have not been used yet. They can be the result of coding by list or quick coding.

To the five coding procedures should be added axial coding and selective coding.

Axial coding is the process of putting data back together after it has been restructured by means of open coding.

Selective coding refers to the process of selecting a core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development.

You should always keep an open mind about your research and the codes that you create.

Close

If what I discussed here sounds confusing and alien, then it is probably because of what we discussed under schema analysis in my previous video.

It is unlikely that the level of language used here is beyond you.

If that were the case, you would not have watched this video.

No doubt you will understand everything if you watch this video again after having tried out one or two of the computer programmes that deal especially with qualitative research.

Enjoy your studies.

Thank you.


ARTICLE 94: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Part 7 of 7 Parts

Written by Dr. Hannes Nel

What, do you think, is the biggest challenge for somebody who embarks on doctoral or master’s degree studies?

Well, the answer to this question will probably be different for different people, depending on their circumstances, perceptions, value systems and culture.

If we were to combine all the possible challenges, we would probably arrive at “to understand”.

In my opinion that is the biggest challenge facing any post-graduate student.

Not only do you need to understand endless concepts, phenomena, theories and principles, you also must explain them in your thesis or dissertation.

And on doctoral level you will be required to define and explain new concepts, phenomena, theories and principles.

Data analysis is necessary for such elucidation.

I discuss the following data analysis methods in this article:

  1. Schema analysis.
  2. Situational analysis.
  3. Textual analysis.
  4. Thematic analysis.

Schema analysis

Schema analysis requires that you simplify cognitive processes to understand complex concepts and narrative information more readily. In this manner a narrative that might otherwise be difficult to understand because of the level of language used, cultural differences or any other reason, is made easier to understand for those who might find the language challenging or the cultural context alien.

Schema analysis might require additional explanation, interpretation and reconstruction of the message. An individual who grew up in the city might not know how to milk a cow and a farmer might not know how to obtain food from a street vending machine. 

Today schema analysis is also used in computer programming, where a schema is the organisation or structure for a database. A schema is developed by modelling data.  The purpose remains the same as when you would have done schema analysis manually – it is a process of rendering data more user-friendly.
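
For the computing sense of the word, a schema is simply the declared structure of a database. The sketch below, in Python with the standard-library sqlite3 module, defines a small hypothetical schema for storing coded research data; the table and column names are my own assumptions for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database

# The schema: the structure the stored data must conform to
conn.executescript("""
CREATE TABLE document (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL
);
CREATE TABLE quotation (
    id          INTEGER PRIMARY KEY,
    document_id INTEGER NOT NULL REFERENCES document(id),
    text        TEXT NOT NULL,
    code        TEXT
);
""")

conn.execute("INSERT INTO document (id, title) VALUES (1, 'interview_01')")
conn.execute(
    "INSERT INTO quotation (document_id, text, code) "
    "VALUES (1, 'We had nothing to eat.', 'poverty')"
)
print(conn.execute(
    "SELECT title, text, code FROM quotation "
    "JOIN document ON document.id = document_id"
).fetchall())
```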

Situational analysis

As opposed to comparative analysis, situational analysis focuses more on non-human elements. It implies the analysis of the broad context or environment in which an event takes place. It can include an analysis of the state and condition of people and the ecosystem, including the identification of trends; the identification of major issues related to people and ecosystems that require attention; and an analysis of key stakeholders.

Textual analysis

Textual analysis, also called ‘content analysis’, is a data collection technique as well as a data analysis technique. It helps us to understand information on symbolic phenomena. It is used to investigate symbolic content such as words that appear in, for example, newspaper articles, comments on a blog, political speeches, etc. It is a qualitative technique in which the researcher attempts to describe the denotative meaning of content in an objective way.  

There are two levels of meaning, namely denotative and connotative meaning. The denotative meaning of a word refers to the literal meaning that you will find in a dictionary. This meaning is free from any form of interpretation. The connotative meaning of a word refers to the connotation that we ascribe to a particular word, based on the feeling or idea that the word invokes in us, which is often based on our prior experiences.

For example, the denotative meaning of the word ‘host’ is ‘one who lodges or entertains a stranger or guest at his or her house’. However, a woman who was abused by a host in whose guest house she stayed in her youth might conjure up in her mind a host as being a dangerous and sly human being who takes advantage of vulnerable people. The connotative meaning of ‘host’ is, therefore, largely the opposite of what the word is supposed to mean. In textual analysis we only work with the denotative meaning of words to make valid and reliable assumptions of the data within context.

You can only work with what was reported when doing qualitative research and you should not make any assumptions about the originator’s intended meaning. The context in which the information was used, however, also needs to be taken into consideration.

Textual analysis can be subjective because its interpretation is done by fallible people. It can include the analysis of freshly collected data as well as transcribed data. You should transcribe all the raw data that you collected from the written and verbal responses of participants during conversations, interviews, focus groups, meetings, etc. Electronically recorded interviews will need to be retyped word for word to facilitate textual analysis.

Thematic analysis

Also known as concept analysis or conceptual analysis, it is actually a coding regime, according to which data is reduced by means of identifying certain themes. Thematic analysis uses deductive coding by grouping concepts under one of a prepared list of themes.

In thematic analysis you first need to familiarise yourself with the data before you can even select themes. You should list the themes that you would like to cover in your research when you do your literature review. After having listed themes, the next step would be to generate codes. Codes serve as an important foundation for the structuring and arrangement of data by means of qualitative computer software. Even though one might not call it coding, capturing information on cards is also a form of coding, albeit a simple one with limited usability.

You can also search for themes now if you did not do so as a first step already. This is done by collating the codes that you identified into potential themes. Themes are actually “headings” under which related or linked codes are grouped, or clustered. Most qualitative research computer software allows you to review and edit your codes and themes when necessary, which will inevitably happen as you progress with your research.
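
As a minimal sketch of this collating step (in Python, with invented codes and themes rather than anything taken from a real project), themes can be treated as headings under which related codes are clustered:

```python
# Codes identified during coding (invented for illustration)
coded_segments = {
    "hunger": 12,            # code -> number of coded segments
    "job loss": 9,
    "gang recruitment": 7,
    "police raids": 4,
}

# Themes act as headings that cluster related codes
themes = {
    "economic hardship": ["hunger", "job loss"],
    "exposure to crime": ["gang recruitment", "police raids"],
}

for theme, codes in themes.items():
    total = sum(coded_segments[c] for c in codes)
    print(f"{theme}: {codes} ({total} coded segments in total)")
```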

Summary

Schema analysis:

  1. Requires that you simplify cognitive processes.
  2. Might require additional explanation, interpretation and reconstruction of selected data.
  3. Is also used in computer programming.

Situational analysis:

  1. Focuses on non-human elements.
  2. Analyses the broad context or environment for the research.
  3. Can include an analysis of the state and condition of people and the ecosystem.

Textual analysis

  1. Combines data collection and analysis.
  2. Helps to understand information on symbolic phenomena.
  3. Attempts to objectively describe the denotative meaning of content.
  4. Takes the context in which information was used into consideration.
  5. Can be subjective.
  6. Can include the analysis of freshly collected as well as transcribed data.

Thematic analysis

  1. Is a coding regime.
  2. Reduces data in terms of certain themes.
  3. Requires the identification of themes before coding can be done.

Close

That concludes my articles on data analysis and all the other concepts and theories behind doctoral and master’s degree studies.

In the remaining 14 articles I will focus more on the structure and layout of a thesis or dissertation.

Enjoy your studies.

Thank you.


ARTICLE 93: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 6 of 7 Parts

Written by Dr. Hannes Nel

In academic research we need to think inductively and deductively.

Inductive thinking is used to develop a new theory.

Therefore, it is what you would mostly use when writing a dissertation for a doctoral degree.

And you should use inductive thematic analysis to analyse the data that you collect.

Deductive thinking is used to test existing theory.

Therefore, it is what you would mostly use when writing a thesis for a master’s degree.

And you should use retrospective analysis to analyse the data that you collect.

Narrative analysis uses both inductive and deductive thinking more or less equally.

That is why both a dissertation and a thesis can be written in a narrative format.

I will discuss the nature of inductive thematic analysis, narrative analysis and retrospective analysis in this article.

Inductive thematic analysis (ITA)

Inductive thematic analysis draws on inductive analytic methods. It involves reading through textual data and identifying and coding emergent themes within the data.

ITA requires the generation of free-flow data. The most common data collection techniques associated with ITA are in-depth interviews and focus groups. You can also analyse notes from participant observation activities with ITA, but interview and focus group data are better. ITA is often used in qualitative inquiry, and non-numerical computer software, specifically designed for qualitative research, is often used to code and group data.

Paradigmatic approaches that fit well with ITA include post-structuralism, rationalism, symbolic interactionism, and transformative research.

Narrative analysis

The word “narrative” is generally associated with terms such as “tale” or “story”. Such stories are mostly told in the first person, although somebody else might also tell the story about a different character, that is, in the second or third person. The first person will apply if an interview is held. Every person has his or her own story, and you can design your research project to collect and analyse the stories of participants, for example when you study the lived experiences of somebody who is a member of a gang on the Cape Flats.

There are different kinds of narrative research studies ranging from personal experiences to oral historical narratives. Therefore, narrative analysis refers to a variety of procedures for interpreting the narratives obtained through interviews, questionnaires by email or post, perhaps even focus groups. Narrative analysis includes formal and structural means of analysis. One can, for example, relate the information obtained from a gang member in terms of circumstances and reasons why he or she became a gang member, growth into gang activities, the consequences of criminal activities for his or her personal life, career, etc. One can also do a functional analysis looking at gang activities and customs (crime, gang fights, recruiting new members, punishment for transgression of gang rules, etc.)

In the analysis of narrative, you will track sequences, chronology, stories or processes in the data, keeping in mind that most narratives have a backwards and forwards nature that needs to be unravelled in the process of analysing the data.

Like many other data collection approaches, narrative analysis, also sometimes called ‘narrative inquiry’, is based on the study and textual representation of discourse, or the analysis of words. The type of discourse or text used in narrative analysis is, as the name indicates, narratives.

The sequence of events can be generated and recorded during the data collection process, such as through in-depth interviews or focus groups; they can be incidentally captured during participant observation; or, they can be embedded in written forms, including diaries, letters, the internet, or literary works. Narratives are analysed in numerous ways and narrative analysis can be used in research within a substantial variety of social sciences and academic fields, such as sociology, management, labour relations, literature, psychology, etc.

Narrative analysis can be used for a wide range of purposes. Some of the more common usages include formative research for a subsequent study, comparative analysis between groups, understanding social or historical phenomena, or diagnosing psychological or medical conditions. The underlying principle of a narrative inquiry is that narratives are the source of data used, and their analysis opens a gateway to better understanding of a given research topic.

In most narratives meaning is conveyed at different levels, for example informational content level that is suitable for content analysis; textual level that is suitable for hermeneutic or discourse analysis, etc.

Narrative analysis has its own methodology. In narrative analysis you will analyse data in search of narrative strings (present commonalities running through and across texts), narrative threads (major emerging themes) and temporal/spatial themes (past, present and future contexts).

Retrospective analysis

Retrospective analysis is sometimes also called ‘retrospective studies’ or ‘trend analysis’ or ‘trend studies’. Retrospective analysis usually looks back in time to determine what kind of changes have taken place. For example, if you were to trace the development of computers over the past three decades, you would see some remarkable changes and improvements.

Retrospective analysis focuses on changes in the environment rather than in people, although changes in the fashions, cultures, habits, values, jobs, etc. are also often analysed. Each stage in a chronological development is represented by a sample and each sample is compared with the others against certain criteria.

Retrospective analysis examines recorded data to establish patterns of change that have already occurred, in the hope of predicting what will probably happen in the future. Predicting the future, however, is not simple and often not accurate. The reason for this is that, as the environment changes, so do the variables that determine or govern the change. It therefore stands to reason that the further ahead you try to predict the future, the more inaccurate your predictions will probably be.
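
A small numerical illustration of this point: the sketch below, in Python with NumPy, fits a straight-line trend to a short, invented series of yearly observations and extrapolates it. The further ahead the prediction, the more the (possibly changed) underlying variables can make it wrong.

```python
import numpy as np

# Invented yearly observations, e.g. some indicator measured over a decade
years = np.arange(2010, 2020)
values = np.array([10.0, 11.2, 12.1, 13.5, 14.2, 15.8, 16.4, 17.9, 18.5, 20.1])

# Fit a linear trend: value = slope * year + intercept
slope, intercept = np.polyfit(years, values, deg=1)

# Extrapolate: the further ahead, the shakier the prediction
for year in (2021, 2025, 2035):
    print(year, round(slope * year + intercept, 1))
```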

Retrospective analysis does not include the same respondents over time, so the possibility exists for variation in data due to the different respondents rather than the change in trends.

Summary

Inductive thematic analysis, or ITA:

  1. Draws on inductive analytical methods.
  2. Involves reading textual data.
  3. Identifies and codes emergent themes within the data.
  4. Requires the generation of free-flow data.
  5. Favours in-depth interviews and focus groups.
  6. Can also use participant observation.
  7. Fits well with qualitative research and critical or interpretive paradigms.

Narrative analysis:

  1. Analyses stories related by people.
  2. Ranges from personal experiences to historical narratives.
  3. Can use a wide range of data collection methods.
  4. Includes formal, structural and functional analysis.
  5. Tracks sequences, chronology, stories or processes in data.
  6. Is based on the textual representation of discourse, or the analysis of words.
  7. Is used by a substantial variety of social sciences.
  8. Can be used for a wide range of purposes.
  9. Conveys meaning on different levels.
  10. Has its own methodology.

Retrospective analysis:

  1. Looks back in time to identify change.
  2. Focuses on change in the environment.
  3. Represents and compares change in samples.
  4. Sometimes tries to predict the future.
  5. Does not include the same respondents over time.

Close 

It is a good idea to mention and explain in your thesis or dissertation how you analysed the data that you collected.

Ph. D. students will already do so in their research proposal.

That is why you need to know which data analysis methods are available and what they mean.

It will also help to ensure that you use the data that you collect efficiently and effectively to achieve the purpose of your research.

Enjoy your studies.

Thank you.


ARTICLE 92: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 4 of 7 Parts: Ethnographic analysis

Written by Dr. Hannes Nel

I wonder if ethnographic research was ever as vitally important as now.

The COVID-19 pandemic has dramatically changed the way people live, interact, socialise and survive.

No doubt, research on how to combat the virus is still the priority.

However, while numerous researchers are frantically working on finding an effective and safe vaccine, life goes on.

And it will take long before everybody is vaccinated anyway.

And we need to determine what impact unemployment, financial difficulties, famine, crime and the loss of loved ones have on our psychological health.

And we need to find ways in which to cope with the new reality.

I discuss ethnographic analysis in this article.

Ethnographic analysis typically addresses the issue of ‘what is going on’ between the participants in some segment (or segments) of the data, in great analytical depth and detail. Ethnographic studies aim to provide contextual, interpretive accounts of their participants’ social worlds.

Ethnographic analysis is rarely systematic or comprehensive: rather, it is selective and limited in scope. Its main advantage is to permit a detailed, partially interpretive, account of mundane features of the social world. This account may be limited to processes within the focus group itself, or (more typically) it may take the focus group discussion as offering a ‘window’ on participants’ lives.

Ethnographic analysis aims to ground interpretation in the particularities of the situation under study, and in participants’ (rather than analysts’) perspectives. Data are generally presented as accounts of social phenomena or social practices, substantiated by illustrative quotations from the focus group discussion. Key issues in ethnographic analysis are:

  1. How to select the material to present.
  2. How to give due weight to the specific context within which the material was generated, while retaining some sense of the group discussion as a whole.
  3. How best to prioritise participants’ orientation in presenting an interpretive account.

Researchers using ethnographic methods, such as observing people in their natural settings, often ask what role the researcher should adopt when conducting research: an overt and announced role, or a covert and secret role? The most common roles that you as the researcher may play are complete participant, participant as observer, observer as participant and complete observer.

The complete participant seeks to engage fully in the activities of the group or organisation being researched. Thus, this role requires you to enter the setting covertly so that the participants will not be aware of your presence or at least not aware that you are doing research on them. By doing research covertly you are supposed to be able to gather more accurate information than if participants were aware of what you are doing – they should act more naturally than otherwise. The benefit of the covert approach is that you should gain better understanding of the interactions and meanings that are held important to those regularly involved in the group setting. Covert research can, however, expose you to the risk that your efforts might prove unsuccessful, especially if the participants find out that you were doing research on them without them being informed and without their agreement. Such research can also lead to damage to the participants in many ways, for example by embarrassing them, damaging their career prospects, damaging their personal relationships, etc.

You will act ethically and more safely if you, as the researcher, observe a group or individual and participate in their activities. In this case you formally make your presence and intentions known to the group being studied and you ask for their permission. This may involve a general announcement that you will be conducting research, or a specific introduction as the researcher when meeting the various people who will form part of the target group for the research.

This approach requires you to develop sufficient rapport with the participants to gain their support and co-operation. You will need to explain to them why the research is important and how they will benefit from it. The possibility exists that you may become emotionally involved in the activities and challenges of the target group, which might have a negative effect on your ability to interpret information objectively.

The researcher as observer only is, as we already discussed, an etic approach. Here you will distance yourself from the idea of participation but still do your research openly and in agreement with the target group. Such transparent research often involves visiting just one site or a setting that is offered only once. It will probably be necessary to do relatively formal observation. The risk exists that you may fail to adequately appreciate certain informal norms, roles, or relationships and that the group might not trust you and your intentions, which is why the period of observation should not be too long.

The complete and unannounced observer tends to be a covert role. In this case, you typically remain in the setting for a short period of time but are a passive observer to the flow of activities and interactions.

Summary

Ethnographic analysis:

  1. Analyses events and phenomena in a social context.
  2. Is selective and limited in scope.
  3. Delivers a detailed interpretation of commonplace features of the social world.
  4. Focuses on specific aspects of the target group’s lives.

Key issues of ethnographic analysis are:

  1. How data to analyse is selected.
  2. The context on which the collection and analysis focuses.
  3. Interpretation and description of the findings by focusing on the target group’s orientation.

Observation is often used for the collection of data.

An emic or etic approach can be followed.

An etic approach is often also executed covertly.

Covert collection of data can promote accuracy because the target group for the research will probably behave naturally if they do not know that they are being observed.

A covert approach can be rendered inadvisable because of ethical considerations.

An overt approach requires gaining the trust of the target group for the research.

Close

You probably noticed that it is near impossible to discuss data collection and data analysis separately.

Besides, ethnography is a research method, and ethnographic data collection and analysis are part of the method.

Natural scientists will probably only use it to trace the ontology of scientific concepts or phenomena.

And then the data will be historical in nature.

Enjoy your studies.

Thank you.


ARTICLE 91: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis: Part 4 of 7 Parts: Elementary Analysis

Written by Dr. Hannes Nel

Most social research requires the analysis of several variables simultaneously (called “multivariate analysis”); the analysis of the simultaneous association of age, education and gender would be an example. Specific techniques for conducting a multivariate analysis include factor analysis, multiple correlation, regression analysis and path analysis. All these techniques are based on the preparation and interpretation of comparative tables and graphs, so you should practise doing this if you do not already know how.

These are largely quantitative techniques. Fortunately, the statistical calculations are done for you by the computer, so just be aware of the definitions.

Factor analysis. Factor analysis is a statistical procedure used to uncover relationships among many variables. This allows numerous inter-correlated variables to be condensed into fewer dimensions, called factors. It is possible, for example, that variations in three or four observed variables mainly reflect the variations in a single unobserved variable, or in a reduced number of unobserved variables. Clearly this type of analysis is mostly numerical in nature. Factors are analysed inductively to determine trends, relationships, correlations, causes of phenomena, etc. Factor analysis searches for variations in response to variables that are difficult to observe and that are suspected to have an influence on events or phenomena.
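
As a hedged sketch of the idea, the Python example below (using scikit-learn and simulated data in place of real observations) condenses four correlated observed variables into a single latent factor. The loadings are illustrative only.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents whose four observed variables are all driven
# by one unobserved (latent) factor plus noise -- illustration only.
latent = rng.normal(size=(200, 1))
observed = latent @ np.array([[0.9, 0.8, 0.7, 0.6]]) + rng.normal(scale=0.3, size=(200, 4))

fa = FactorAnalysis(n_components=1)
fa.fit(observed)

# Loadings show how strongly each observed variable reflects the factor
print(fa.components_)
```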

Multiple correlation. Multiple correlation is a statistical technique that predicts values of one variable based on two or more other variables. For example, what will happen to the incidence of HIV/Aids (the variable that we are doing research on) in a particular area if unemployment increases (variable 1), famine breaks out (variable 2) and the incidence of TB increases (variable 3)?

Multiple correlation is a linear relationship among more than two variables. It is measured by the coefficient of multiple determination, which is a measure of the fit of a linear regression. This coefficient falls somewhere between zero and one (assuming a constant term has been included in the regression); a higher value indicates a stronger relationship between the variables, with a value of one indicating a perfect relationship and a value of zero indicating no relationship at all between the independent variables collectively and the dependent variable.
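
Here is a small sketch of the coefficient of multiple determination (R²) in Python with scikit-learn, using fabricated data; the three predictors merely stand in for variables such as unemployment, famine and TB incidence, and the values mean nothing in themselves.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Fabricated predictors (e.g. unemployment, famine index, TB incidence)
X = rng.normal(size=(100, 3))
# Fabricated outcome that partly depends on the predictors
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=1.0, size=100)

model = LinearRegression().fit(X, y)

# R^2: between 0 (no relationship) and 1 (perfect fit)
print("R^2 =", round(model.score(X, y), 3))
```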

Path analysis. Path analysis can be a statistical method of finding cause/effect relationships, a method for finding the trail that leads users to websites or an operations research technique. We also have “critical path analysis” which is mostly used in project management and is a method by means of which activities in a project are planned to be executed in a logical sequence of events to ensure that the project is completed in an efficient and effective manner. We are concerned about path analysis as an operations research technique here.

Path analysis is a method of decomposing correlations into different pieces of interpretation of effects (e.g. how does parental education influence children’s income when they are adults?). Path analysis is closely related to multiple regression; you might say that regression is a special case of path analysis. It is a “causal model” because it allows us to test theoretical propositions about cause and effect without manipulating variables.
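
Path analysis is often computed as a set of regressions whose standardised coefficients serve as path coefficients. The sketch below is a deliberately simplified, two-step Python illustration of the parental education example; the data are fabricated and the model is minimal.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500

# Fabricated causal chain: parent education -> child education -> child income
parent_edu = rng.normal(size=n)
child_edu = 0.6 * parent_edu + rng.normal(scale=0.8, size=n)
income = 0.5 * child_edu + 0.2 * parent_edu + rng.normal(scale=0.9, size=n)

def standardise(x):
    return (x - x.mean()) / x.std()

# Path 1: parent education -> child education
p1 = LinearRegression().fit(standardise(parent_edu).reshape(-1, 1), standardise(child_edu))
# Path 2: child education and parent education -> income (direct effects)
X = np.column_stack([standardise(child_edu), standardise(parent_edu)])
p2 = LinearRegression().fit(X, standardise(income))

print("parent_edu -> child_edu:", round(p1.coef_[0], 2))
print("child_edu -> income (direct):", round(p2.coef_[0], 2))
print("parent_edu -> income (direct):", round(p2.coef_[1], 2))
print("parent_edu -> income (indirect):", round(p1.coef_[0] * p2.coef_[0], 2))
```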

Regression analysis. Regression analysis can be used to determine which factors influence events, phenomena, or relationships.

Regression analysis includes a variety of techniques for modelling and analysing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. If, for example, you wish to determine the effect of tax, legislation and education on levels of employment, levels of employment will be the dependent variable while tax, legislation and education will be the independent variables. More specifically, regression analysis helps one understand how to maintain control over a dependent variable. In the level of employment example, you might wish to know what should be done in terms of tax, legislation and education to improve employment or at least to maintain a healthy level of employment. In this example it is of interest to characterise the variation of the dependent variable around the regression function, which can be described by a probability distribution (how much the level of employment would change and in what direction if all, some or one of the independent variables change by a particular value).

Regression analysis typically estimates the conditional expectation of the dependent variable given the independent variables – that is, the average value of the dependent variable when the independent variables are held fixed. Seen from this perspective, the example of employment levels would mean investigating what would happen if tax, legislation and education remain unchanged.

Regression analysis is widely used for prediction and forecasting, although this should be done with circumspection. Regression analysis is also used to understand which among the independent variables are related to the dependent variable, and to explore the forms of these relationships. Regression analysis presupposes causal relationships between the independent and dependent variables, although investigation can also show that such relationships do not exist. An example of using regression analysis, also called “multiple regression”, is to determine which factors among colour, paper type, number of advertisements and content (independent variables) have the biggest effect on the number of magazines sold (dependent variable).
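
A hedged sketch of the magazine example in Python with scikit-learn and fabricated data: the number of copies sold is the dependent variable, and colour pages, paper type, number of advertisements and a content rating are the independent variables. None of the numbers are real.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 60

# Fabricated independent variables for 60 magazine issues
colour_pages = rng.integers(0, 40, size=n)        # number of colour pages
glossy_paper = rng.integers(0, 2, size=n)         # 0 = plain, 1 = glossy
advertisements = rng.integers(5, 50, size=n)
content_rating = rng.uniform(1, 10, size=n)

X = np.column_stack([colour_pages, glossy_paper, advertisements, content_rating])

# Fabricated dependent variable: copies sold
sold = (500 + 12 * colour_pages + 150 * glossy_paper - 3 * advertisements
        + 40 * content_rating + rng.normal(scale=80, size=n))

model = LinearRegression().fit(X, sold)
for name, coef in zip(["colour", "paper", "ads", "content"], model.coef_):
    print(f"{name}: {coef:+.1f} copies per unit change")
```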

Summary

Multivariate analysis can be used for the analysis of several variables simultaneously.

Techniques that can be used for conducting multivariate analysis include factor analysis, multiple correlation, path analysis and regression analysis.

Factor analysis is used to uncover relationships among many variables.

Factors are analysed inductively to determine trends, relationships, correlations, causes of phenomena, etc.

Multiple correlation predicts values of one variable based on two or more other variables.

Multiple correlation is a linear relationship among more than two variables.

Path analysis seeks cause/effect relationships.

It can also be used to find data or to manage projects.

Regression analysis can be used to determine which factors influence events, phenomena or relationships.

It includes a variety of techniques for modelling and analysing several variables when the focus is on the relationship between a dependent variable and one or more independent variables.

Regression analysis helps us to understand how to maintain control over a dependent variable.

Close

Statistics are a wonderfully flexible way in which to analyse data.

Dedicated computer software can do the calculations for us and show us the numbers in tabular and graphic format.

All we need to do, is to analyse the numbers or graphs.

It is mostly quite easy to interpret visual material.

And you will impress your study leader, lecturer and other stakeholders in your research if you use such analysis techniques.

Most importantly, it will be so much easier and faster to come to conclusions and to derive valid and accurate findings from your conclusions.

Enjoy your studies.

Thank you.


ARTICLE 90: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Part 3 of 7 Parts

Written by Dr. Hannes Nel

I discuss conversation and discourse analysis as data analysis methods in this article.

Conversation and discourse analysis

Both conversation and discourse analysis approaches stem from the ethnomethodological tradition, which is the study of the ways in which people produce recognisable social orders and processes. Both of these approaches tend to examine text as an “object of analysis”. Discourse analysis is a rather comprehensive process of evaluating the structures of conversations, negotiations and other forms of discourse as well as how people interact when communicating with one another. The sharing of meaning through discourse always takes place in a particular context so the social construction of such discourse can also be analysed.

Conversation and discourse analysis both study “naturally” occurring language, as opposed to text resulting from more “artificial” contexts, such as formal interviews. The purpose is to identify social and cultural meanings and phenomena from the discourse studied, which is why the process is suitable for almost any culture-related research.

The name “discourse” shows that it is language that is analysed while language is also used to do research. It can be a complex process and is often better suited to those more interested in theorising about life than those who want to research actual life events.

Discourse analysis focuses on the meaning of the spoken and written word, and the reasons why it is the way it is. Discourse refers to expressing oneself using words and to the variety and flexibility of language in the way language is used in ordinary interaction.

When doing research, we often look for answers in places or sources that we can easily reach when the real answers might lie somewhere else. Discourse analysis is one method which allows us to move beyond the obvious to the less obvious, although much more relevant sources of data.

Discourse analysis analyses what people say apart from just picturing facts. Discourses are ever-present ways of knowing, valuing and experiencing the world. Different people have different discourses. Gangs on the Cape Flats, for example, use words and sentences that the ordinary man on the street will find difficult to understand. Discourses are used in everyday texts for building power and knowledge, for regulation and normalisation, for the development of new knowledge and power relations.

As a language-based analytical process, discourse analysis is concerned with studying and analysing written texts and spoken words to reveal any possible relationships between language and social interaction. Language is analysed as a possible source of power, dominance, inequality and bias. Processes that may be the subject of research include how language is initiated, maintained, reproduced and transformed within specific social, economic, political and historical contexts. A wide variety of relationships and context can be investigated and analysed, including ways in which the dominant forces in society construct versions of reality that favour their interests, and to uncover the ideological assumptions that are hidden in the words of our written text or oral speech in order to resist, overcome or even capitalise on various forms of power. Criminals in a correctional facility will, for example, be included or excluded from gangs on account of certain ways of speech and codes that only they know.

Discourse analysis collects, transcribes and analyses ordinary talk and everyday explanations for social actions and interaction. It emphasizes the use of language as a way to construct social reality. Yin[1] defines discourse analysis as follows:

“Discourse analysis focuses on explicit theory formation and analysis of the relationships between the structures of text, talk, language use, verbal interaction or communication, on the one hand, and societal, political, or cultural micro- and macro-structures and cognitive social representations, on the other hand.”

Discourse analysis examines a discourse by looking at patterns of the language used in a communication exchange as well as the social and cultural contexts in which these communications occur. It can include counting terms, words, and themes. The relationship between a given communication exchange and its social context requires an appreciation and understanding of culturally specific ways of speaking and writing and ways of organising thoughts.
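
As a small illustration of the counting side of this (Python standard library only, with an invented fragment of talk), word frequencies are trivial to compute; the interpretive work of relating them to their social and cultural context is the part no software can do:

```python
from collections import Counter
import re

# Invented fragment of transcribed talk, for illustration only
transcript = """
We never called it a crime. We called it work. Everybody on the street
called it work, because work was the only word that made it normal.
"""

words = re.findall(r"[a-z']+", transcript.lower())
print(Counter(words).most_common(5))
```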

Oral communication always fits into a context which lends meaning to it. It always has a double structure, namely the propositional context (ontology) and the performatory content (epistemological meaning). Oral communication can, for example, be used with good effect to understand human behaviour, thought processes and points of view. 

The result of discourse analysis is a form of psychological natural history of the phenomena in which you are interested. To be of value for research purposes oral communication must be legitimate, true, justified, sincere and understandable. It should also be coherent in organisation and content and enable people to construct meaning in social context. Participants in oral communication should do so voluntarily and enjoy equal opportunity to speak.

Discourse analysis is a form of critical theory. You, as the researcher, need to ensure that the discourse and the participants in the discussion meet the requirements for such interaction. It will also be your duty to eliminate or at least reduce any forces or interventions that may disrupt the communication. Such discourse can also be taken further by having other participants in the research process elaborate and further analyse the results of initial communications. For this purpose, you need to be highly sensitive to the nuance of language.

Any qualitative research allows you to make use of coding and structuring of data by means of dedicated research software, such as ATLAS.ti or other CAQDAS (computer-assisted qualitative data analysis software) packages. This will enable you to discover patterns and broad areas of salient argumentation, intentions, functions, and consequences of the discourse. By seeking alternative explanations and the degree of variability in the discourse, it is possible to rule out rival interpretations and arrive at a fair and accurate comprehension of what took place and what it meant.

Discourse analysis can also be used to analyse and interpret written communication on condition that the written communication is a written version of communication relevant to the topic being researched. This requires a careful reading and interpretation of textual material.

Discourse analysis has been criticised for its lack of system, its emphasis on the linguistic construction of a social reality, and the impact of the analysis in shifting attention away from what is being analysed and towards the analysis itself. Discourse is, in actual fact, a text in itself, with the result that it can also be analysed for meaning and inferences, which might erode the original meaning of the oral communication and so compromise accuracy, authenticity, validity and relevance.

Conversation analysis is arguably the most immediate and most frequently used form of discourse analysis in the sense that it includes any face-to-face social interaction. Social interaction inevitably includes contact with other people, and contact with other people mostly includes communication. People construct meaning through speech and text, and the object of analysis typically goes beyond individual sentences. Data on conversations can be collected through direct communication, which needs to be recorded by taking notes or by making a video or audio recording.

Conversation analysis is the study of talk in interaction and generally attempts to describe the orderliness, structure and sequential patterns of interaction, whether this is institutional talk or casual conversation. Conversation analysis is a way of analysing data and has its own methodological features. It studies the social organisation of two-way conversation through a detailed inspection of voice recordings and transcriptions made from such recordings, and relies much more on the patterns, structures and language used in speech and the written word than other forms of data analysis.

Conversation analysis assumes that it is fundamentally through interaction that participants build social context. The notion of talk as action is central to its framework. Within a focus group we can see how people tell stories, joke, agree, debate, argue, challenge or attempt to persuade. We can see how they present particular ‘versions’ of themselves and others for particular interactional purposes, for example to impress, flatter, tease, ridicule, complain, criticise or condone.

Participants build the context of their talk in and through the talk while talking. The talk itself, in its interactional context, provides the primary data for analysis. Further, it is possible to harness analytical resources intrinsic to the data: by focusing on participants’ own understanding of the interaction as displayed directly in their talk, through the conversational practices they use. In this way, a conversation analytic approach prioritises the participants’ (rather than the analysts’) analysis of the interaction.

Naturally occurring data, i.e. data produced independently of the researcher, encompass a range of institutional contexts (for example classrooms, courtrooms, doctors’ surgeries, etc.), in which talk has been shown both to follow the conventions of ‘everyday’ conversation and to depart from them systematically.

Conversation analysis tends to be more granular than classical discourse analysis, looking at elements such as grammatical structures and concentrating on smaller units of text, such as phrases and sentences. An example of conversation analysis is where a researcher “eavesdrops” on the way in which different convicted criminals talk to other inmates to find a pattern in their cognitive thinking processes.

While conversation and discourse analysis are similar in several ways, there are some key differences. Discourse analysis is generally broader in what it studies, utilising pretty much any naturally occurring text, including written texts, lectures, documents, etc. An example of discourse analysis would be if a researcher were to go through transcripts or listen in on group discussions between convicted serial murderers to examine their patterns of reasoning.

The implications of discourse and conversation analysis for data collection and sampling are twofold. The first pertains to sample sizes and the amount of time and effort that goes into text analysis at such a fine level of detail, relative to thematic analysis. In a standard thematic analysis, the item of analysis may be a few sentences of text, and the analytic action would be to identify themes within that text segment. In contrast, linguistic-oriented approaches, such as conversation and discourse analysis, require intricate dissection of words, phrases, sentences and interaction among speakers. In some cases, tonal inflection is included in the analysis. Linguistic analysis, be it transcripts of conversations, interviews or any other form of communication, often consists of an abundance of material to analyse, which requires detailed analysis. This requires substantial time and effort, with the result that not too many samples can be processed in a reasonable time.

The second pertains to the data source, which inevitably determines the type and volume of analysis that can be done. Both discourse analysis and conversation analysis are interested in naturally occurring language. In-depth interviews and focus groups can be used to collect data, although they are not ideal if it is important to analyse naturally occurring social communication. Analysis of such data often requires reading and rereading material to identify key themes and other information that points to meanings relevant to the purpose of the research.

Existing documents, for example written statements made by convicted criminals, are excellent sources of data for discourse analysis as well as conversation analysis. In terms of field research, participant observation is ideal for capturing “naturally occurring” discourse. Minutes of meetings, written statements, transcripts of discussions, etc. can be used for this purpose. During participant observation, one can also record naturally occurring conversations between two or more people belonging to the target population for the study, for example two surviving victims of attacks by serial killers, two security guards who have had experience with attempted serial killings, etc. In many cases, however, listening in on conversations without consent carries the risk of legal problems.

Text can be any documentation, including personal reflections, books, official documents and many more. In action research this is enhanced with personal experiences, which can also be put on paper so that, over time, they become historical data. Action research is given a more relevant cultural “flavour” by engaging participants from the community directly in the data collection and analysis. The emphasis is on open relationships with participants so that they have a direct say in how data is collected and interpreted. If participants decide that technical procedures such as sampling or skilled tasks such as interviewing should be part of the data collection and analysis process, they can draw on expert advice and training supplied by researchers.

Paradigmatic approaches that fit well with discourse and conversation analysis include constructivism, hermeneutics, interpretivism, critical theory, post-structuralism and ethnomethodology.

Summary

Discourse analysis:

  1. Evaluates the structures of conversations, negotiations and other forms of communication.
  2. Is dependent on context.
  3. Analyses and uses language.
  4. Focuses on the meaning of the spoken and written word.
  5. Allows the researcher to move from the obvious to the less obvious.
  6. Is concerned with studying and analysing written texts and spoken words to reveal the relationships between language and social interaction.
  7. Examines a discourse by looking at patterns of the language used.
  8. Delivers a form of psychological natural history of the phenomena being investigated.
  9. Is a form of critical theory.
  10. Is criticised for its lack of system, its emphasis on the linguistic construction of social reality and its lack of focus on the research problem.

Conversation analysis:

  1. Is a form of discourse analysis.
  2. Includes face-to-face social interaction.
  3. Attempts to describe the orderliness, structure and sequential patterns of interaction.
  4. Has its own methodological features.
  5. Assumes that it is fundamentally through interaction that participants build social context.

Discourse and conversation analysis:

  1. Stem from the ethnomethodological tradition.
  2. Examine text as the object of analysis.
  3. Study naturally occurring language.
  4. Identify social and cultural meanings and phenomena.
  5. Require intricate dissection of words, phrases, sentences and interaction between people.

Close

The differences between discourse and conversation analysis are subtle.

Discourse analysis is broader than conversation analysis in the range of material it analyses.

Conversation analysis, on the other hand, tends to go into finer detail than discourse analysis.

Enjoy your studies.

Thank you.




ARTICLE 89: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis, Part 2 of 7

Written by Dr. Hannes Nel

Hello, I am Hannes Nel and I discuss comparative and content analysis in this article.

Although quite simple, comparative and content analysis are most valuable for your research towards a master’s degree or a Ph. D.

It does not matter what the topic of your research is – you will compare concepts, events or phenomena and you will study the content of existing data sources.

What you need to know is how to analyse and use such data.

Comparative analysis

Comparative analysis is a means of analysing the causal contribution of different conditions to an outcome of interest. It is especially suitable for analysing situations of causal complexity, that is, situations in which an outcome may result from several different combinations of causal conditions. The diversity, variety and extent of an analysis can be increased, and the significance potential of empirical data can be improved through comparative analysis. The human element plays an important role in comparative research because it is often human activities and manifestations that are compared.

Although theoretical abstractions from reality can be, and in some instances are, the only way in which to do a valid comparison, the units of analysis can also be whole societies or systems within societies. Comparative research does not simply mean comparing different societies or the same society over time – it might involve searching systematically for similarities and differences between the cases under consideration.

Comparative researchers usually base their research on secondary sources, such as policy papers, historical documents or official statistics, but some degree of interviewing and observation could also be involved. A measure of verification is achieved by consulting more than one source on a particular issue.

Qualitative research approaches are most suitable for the conduct of comparative analysis, with the result that many paradigmatic approaches can be used. Examples include behaviourism, critical race theory, critical theory, ethnomethodology, feminism, hermeneutics and many more.

Content analysis

Content analysis is a systematic approach to qualitative data analysis, making it suitable to serve as the foundation of qualitative research software. It is an objective and systematic way in which to identify and summarise message content. The term ‘content analysis’ refers to the analysis of such things as books, brochures, written or typed documents, transcripts, news reports, visual media as well as the analysis of narratives such as diaries or journals. Although mostly associated with qualitative research approaches, statistical and other numerical data can also be analysed, making content analysis suitable for quantitative research as well. Sampling and coding are ubiquitous elements of content analysis.

The most obvious example of content analysis is the literature study that any researcher needs to do when preparing a research proposal as well as when conducting the actual research for a doctoral or master’s degree.

Especially (but not only) inexperienced students often think that volume is equal to quality, with the result that they include almost any content in their theses or dissertations without even asking themselves if it is relevant to the research that they are doing. The information that you include must be relevant and must add value to your thesis or dissertation.

We analyse the characteristics of language as communication with regard to its content. This means examining words or phrases within a wide range of texts, including books, book chapters, essays, interviews and speeches, as well as informal conversation and headlines. By examining the presence or repetition of certain words and phrases in these texts, you are able to make inferences about the philosophical assumptions of a writer, a written piece, the audience for which a piece is written, and even the culture and time in which the text is embedded. Owing to this wide array of applications, content analysis is used in literature and rhetoric, marketing, psychology, cognitive science and many other fields.

The purpose of content analysis is to identify patterns, themes, biases and meanings. Classical content analysis will look at patterns in terms used, ideas expressed, associations among ideas, justifications, and explanations. It is a process of looking at data from different angles with a view to identifying key arguments, principles or facts in the text that will help us to understand and interpret the raw data. It is an inductive and iterative process where we look for similarities and differences in text that would corroborate or disprove theory or a hypothesis. A typical content analysis would be to evaluate the contents of a newly written academic book to see if it is on a suitable level and aligned with the learning outcomes of a curriculum.

Content analysis can also be used to analyse ethnographic data. Ethnographic data can be used to prove or disprove a hypothesis. However, in this case validity might be suspect, primarily because a hypothesis should be proven or rejected on account of valid evidence. Quantitative analysis is often regarded as more “scientific”, and therefore more accurate, than qualitative analysis. This, however, is a perception that only holds true if the quantitative data can be shown to be objective, accurate and authentic. Qualitative data that is sufficiently corroborated is often more valid and accurate than quantitative data based on inaccurate or manipulated statistics.

Content analysis typically comprises three stages: stating the research problem; collecting and retrieving the text and employing sampling methods; and interpretation and analysis. Stating the problem will typically be done early in the thesis or dissertation. Collecting and retrieving text and employing sampling methods typically constitute the actual research process, which may include interviewing, literature study, etc.

It is a good idea to code your work as you write. Find one or more key words for every section and keep a record of them. In this manner you will be able to find arguments that belong together more easily, and you will be able to avoid duplicating the same content at different places in your thesis or dissertation. Most dedicated computer software enables you not only to keep content with the same code together, but also to access and even print it. This is especially valuable for structuring the contents of your thesis or dissertation in a logical narrative format and for coming to conclusions without contradicting yourself.
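Purely as an illustration (the section names and key words below are invented, not taken from the article), such a code index can be kept with very simple tooling. The sketch maps each section of a draft to its key words and then inverts the index so that all sections sharing a code can be checked for duplication.

```python
from collections import defaultdict

# Hypothetical code index: each section of the draft mapped to its key words.
section_codes = {
    "2.1 Research approach": ["qualitative", "paradigm"],
    "3.2 Literature on coding": ["coding", "software"],
    "4.3 Analysing interviews": ["coding", "interviews"],
}

# Invert the index so that every code lists the sections in which it occurs.
sections_by_code = defaultdict(list)
for section, codes in section_codes.items():
    for code in codes:
        sections_by_code[code].append(section)

# Sections sharing the code "coding" probably contain related arguments
# and should be checked for duplication.
print(sections_by_code["coding"])
# ['3.2 Literature on coding', '4.3 Analysing interviews']
```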

Content analysis sometimes incorporates a quantitative element. It is based on examining data for recurrent instances, i.e. patterns, of some kind. These instances are then systematically identified across the data set and grouped together. You should first decide on the unit of analysis: this could be the whole group, the group dynamics, the individual participants, or the participants’ utterances. The unit of analysis provides the basis for developing a coding system, and the codes are then applied systematically across a transcript. Once the data have been coded, a further issue is whether to quantify them by counting instances. Counting is an effective way in which to provide a summary or overview of the data set as a whole.
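A minimal sketch of what such counting might look like in practice follows; the codes and transcript segments are hypothetical and serve only to illustrate the idea of quantifying coded instances across a data set.

```python
from collections import Counter

# Hypothetical coded transcript: each segment (the unit of analysis)
# has already been assigned one or more codes.
coded_segments = [
    {"speaker": "P1", "codes": ["agreement", "personal_story"]},
    {"speaker": "P2", "codes": ["challenge"]},
    {"speaker": "P1", "codes": ["agreement"]},
    {"speaker": "P3", "codes": ["agreement", "challenge"]},
]

# Count how often each code occurs across the whole data set.
code_counts = Counter(code for segment in coded_segments for code in segment["codes"])

print(code_counts.most_common())
# [('agreement', 3), ('challenge', 2), ('personal_story', 1)]
```

Counts like these provide the kind of summary or overview mentioned above; they do not replace the qualitative interpretation of the coded material.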

Interviewing is mostly used prior to doing content analysis, although literature study can also be used. Analysing data obtained through interviewing includes analysing data obtained from a focus group. This variation of content analysis usually begins by examining the text for similarly used words, themes or answers to questions. The analysed data need to be arranged to fit the purpose of the research. This can, for example, be achieved by indexing data under certain topics or subjects or by using dedicated research software. In addition to individual ideas, the flow of ideas throughout the group should also be examined. It is, for example, important to determine which ideas enjoy the most support and agreement.

Paradigmatic approaches that fit well with content analysis include feminism, hermeneutics, interpretivism, modernism, post-colonialism and rationalism.

Summary

Comparative analysis:

  1. Analyses the conditions that lead to an outcome.
  2. Involves searching systematically for similarities and differences.
  3. Mostly uses secondary data sources.
  4. Is mostly used with qualitative research.

Theoretical abstractions can be used for comparative analysis.

Comparative analysis is used:

  1. To increase the diversity, variety and extent of an analysis.
  2. To analyse human activities.
  3. To analyse whole societies and systems within societies.

Content analysis:

  1. Can serve as the foundation for qualitative research.
  2. Can be used with qualitative and quantitative research.
  3. Extensively uses literature as data.
  4. Can also be used to analyse ethnographic data.

The purpose of content analysis is to identify patterns, themes, biases and meanings.

It typically comprises three stages: stating the research problem, collecting data, and analysing data.

Coding can be used with good effect in content analysis.

Close

You probably already noticed that the differences between different data analysis methods are just a matter of emphasis.

They share many elements.

For example, both comparative analysis and content analysis use literature as sources of data.

Both fit in better with qualitative research than with quantitative research.

This means that you can use more than one data analysis method to achieve the purpose of your research.

Enjoy your studies.

Thank you.


ARTICLE 88: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Methods Part 1 of 7 Parts

Written by Dr. Hannes Nel

Isn’t life strange?

There are so many ways in which we can learn.

And the interrelatedness of events, phenomena and behaviour can be researched in so many ways.  

And we can discover truths and learn lessons by linking data, paradigms, research methods, data collection and analysis methods.

And by changing the combination of research concepts, we can discover new lessons, knowledge and truths.

Research often deals with the analysis of data to discriminate between right and wrong, true and false.

Furthermore, people and life form a system with a multitude of links and correlations.

Consequently, we can learn by conducting research on even just an individual. 

I discuss the following two data analysis methods in this article:

  1. Analytical induction.
  2. Biographical analysis.

Analytical induction

Induction, in contrast to deduction, involves inferring general conclusions from particular instances. It is a way of gaining understanding of concepts and procedures by identifying and testing causal links between them. Analytical induction is, therefore, a procedure for analysing data which requires systematic analysis.

It aims to ensure that the analyst’s theoretical conclusions cover the entire range of the available data.

Analytical induction is a data analysis method that is often regarded as a research method. It uses inductive, as opposed to deductive, reasoning. Qualitative data can be analysed without making use of statistical methods. The process to be explained and the factors that explain the phenomenon are progressively redefined in an iterative process to maintain a perfect relationship between them.

The procedure of analytical induction means that you, as the researcher, form an initial hypothesis, a series of hypotheses, or a problem statement or question, then search the data at your disposal for disconfirming evidence, and formulate or modify your conclusions based on the available evidence. This is especially important if you work with a hypothesis, seeing that evidence can prove or refute a hypothesis.

Data are studied and analysed to generate or identify categories of phenomena; relationships between these categories are sought, and working typologies and summaries are written based on the data that you examined. These are then refined through subsequent cases and further analysis. You should not only look for evidence that corroborates your premise but also for evidence that refutes it or calls for modification. Your original explanation or theory may be modified, accepted, enlarged or restricted, based on the conclusions to which the data lead you. Analytical induction will typically follow this procedure (a minimal code sketch of the loop follows the list):

  1. A rough definition of the phenomenon to be explained is formulated.
  2. A hypothetical explanation of the phenomenon is formulated.
  3. A real-life case is studied in the light of the hypothesis, with the object of determining whether the hypothesis fits the facts in the case.
  4. If the hypothesis does not fit the facts, either the hypothesis is reformulated or the phenomenon to be explained is redefined, so that the case is excluded.
  5. Practical certainty may be attained after a small number of cases have been examined, but the discovery of negative evidence disproves the explanation and requires a reformulation.
  6. The procedure of examining cases, redefining the phenomenon, and reformulating the hypothesis is continued until a universal relationship is established, each negative case calling for a redefinition or a reformulation.
  7. This procedure stands in contrast to theories generated by logical deduction from a priori assumptions.
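The iterative logic of steps 3 to 6 can be pictured as a simple loop. The sketch below is only an illustration of that control flow; the `fits` and `revise` functions, and the toy numeric example, are invented placeholders for the researcher’s own judgement, since analytical induction is a reasoning process rather than a computation.

```python
def analytical_induction(cases, hypothesis, phenomenon, fits, revise):
    """Illustrative control flow only. 'fits' tests whether a case is
    explained by the current hypothesis; 'revise' returns an updated
    (hypothesis, phenomenon) pair whenever a negative case is found."""
    for case in cases:
        while not fits(hypothesis, phenomenon, case):
            # A negative case forces either a reformulation of the hypothesis
            # or a redefinition of the phenomenon so the case falls outside it.
            hypothesis, phenomenon = revise(hypothesis, phenomenon, case)
    # No remaining negative cases: the explanation is provisionally
    # universal for the cases examined so far.
    return hypothesis, phenomenon


# Toy usage: the "phenomenon" is numbers below a threshold and the
# hypothesis is the threshold itself; each negative case raises it.
result = analytical_induction(
    cases=[3, 7, 5],
    hypothesis=4,
    phenomenon="numbers below the threshold",
    fits=lambda h, p, case: case <= h,
    revise=lambda h, p, case: (case, p),
)
print(result)  # (7, 'numbers below the threshold')
```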

Paradigmatic approaches that can be used with analytical induction include all paradigms where real-life case studies are conducted, for example transformative research, romanticism, relativism, rationalism, post-structuralism, neoliberalism and many more.

Biographical analysis

Biographical analysis focuses on an individual. It would mostly focus on a certain period in a person’s life when she or he did something or was somebody of note. Biographical analysis can include research on individual biographies, autobiographies, life histories and the history of somebody as told by those who knew it. Data for a biographical analysis will mostly be archived documents, or at least documents that belong in an archive. Interviews can also be used if the person is still alive, or people who knew the individual well can be interviewed.

Although biographical analysis mostly deals with prominent individuals, it can also deal with humble people, people with tragic life experiences, people from whose life experiences lessons can be learned, etc. Regardless of whether the individual is or was a prominent person or not, you as the researcher will need to collect extensive information on the individual, develop a clear understanding of the historical and contextual background, and have the ability to write in a good narrative format.

You can approach a biographical analysis as a classical biography or as an interpretive biography. A classical biography is one in which you, as the researcher, would be concerned about the validity and criticism of primary sources so that you can develop a factual base for explanations. An interpretive biography is a study in which your presence and your point of view are acknowledged in the narrative. Interpretive biographies recognise that, in a sense, the writer ‘creates’ the person in the narrative.

Summary

Analytical induction:

  1. Is a procedure for analysing data.
  2. Requires systematic analysis.
  3. Identifies and tests causal links between phenomena.
  4. Ensures complete coverage of data through theoretical conclusions.
  5. Is regarded as a research method by some.
  6. Progressively refines the explanation of phenomena.
  7. Searches for disconfirming evidence through hypothesis testing.
  8. Searches for relationships between phenomena.
  9. Modifies wrong conclusions.
  10. Identifies categories of phenomena.
  11. Enables the researcher to write and summarise working typologies.

Biographical analysis:

  1. Focuses on the individual.
  2. Can include research on individual biographies, autobiographies and life histories.
  3. Mostly falls back on archival documents.
  4. Can deal with anybody’s experiences from which others can gain value and learn lessons.
  5. Can be a classical or interpretive biography.

Close

In this video, we saw how we can gain knowledge by testing the validity, authenticity and accuracy of data.

We also saw that we can learn from the experiences of others.

There are many other ways in which we can discover knowledge by analysing existing data.

We will discuss them in the six articles following on this one.

Enjoy your studies.

Thank you.


ARTICLE 87: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Through Coding

Written by Dr. Hannes Nel

Introduction

Hello, I am Hannes Nel and I introduce the data analysis process and ways in which to analyse data in this article. 

You need to know what the different data analysis methods mean if you are to conduct professional academic research. There is a range of approaches to data analysis, and they share a common focus. Initially, most of them focus on a close reading and description of the collected data. Over time, they seek to explore, discover and generate connections and patterns underlying the data.

You would probably need to code the data that you collect before you will be able to link it to the problem statement, problem question or hypothesis for your research. Making use of dedicated computer software would be the most efficient way to do this. However, even if you arrange and structure your data by means of more basic computer software, such as Microsoft Excel, or, even more old-fashioned, cards on which you write information, you will still be coding the data.

The fundamentals of data analysis

The way you collect, code and analyse data would largely depend on the purpose of your research. Quantitative and qualitative data analysis are different in many ways. However, the fundamentals of data analysis can mostly be applied to both. In the case of quantitative research, the principles of natural science and the tenets of mathematics can often be added to the fundamentals. Therefore, the fundamentals that I discuss here refer mostly to qualitative research and the narrative parts of quantitative research reports. For our purposes a research report can be a thesis or dissertation.

You should “instinctively” recognise possible codes and groupings by just focusing on the research problem statement or hypothesis. Even so, the following hints, or fundamentals on collecting and analysing data remain more or less the same, regardless of which data analysis method and dedicated computer software you may use:

  1. Always start by engaging in close, detailed reading of a sample of your data. Close, detailed reading means looking for key, essential, striking, odd, interesting, repetitive things people or texts say or do. Try to identify a pattern, make notes, jot down remarks, etc.
  2. Always read and systematically code your collection of data. Code key, essential, striking, odd, linked or related and interesting things that are relevant to your research topic. You should use the same code for events, concepts or phenomena that are repeated many times or are similar in terms of one or more characteristics. These codes can be drawn from ideas emerging from your close, detailed reading of your collection of data, as well as from your prior reading of empirical and theoretical works. Review your prior coding practices with each new application of a code and see if what you want to code fits what has gone before. Use the code if it is still relevant or create a new code if the old one is no longer of value for your purposes. You may want to modify your understanding of a code if it can still be of value, even if the original reason why you adopted it changed or has diminished in significance.
  3. Always reflect on why you have done what you have done. Prepare a document that lists your codes. It might be useful to give some key examples, explain what you are trying to get at, what sort of things should go together under specific codes. Dedicated computer software offers you a multitude of additional functions with which you can sort, arrange, and manipulate objects, concepts, events or phenomena, for example memoranda, quotations, super codes, families, images, etc.

Memoranda can be separate “objects” in their own right that can be linked to any other object.

Quotations are passages of text which have been selected to become free quotations.

Super codes can be queries that typically consist of several combined codes.

And families are clusters of primary documents (PDs), images that belong together, etc.

  4. Always review and refine your codes and coding practices (a short code sketch illustrating this step follows the list). For each code, accumulate all the data to which you gave the code. Ask yourself whether the data and ideas collected under this code are coherent. Also ask yourself what the key properties and dimensions of all the data collected under the code are. Try to combine your initial codes, look for links between them, look for repetitions and exceptions, and try to reduce them to key ones. This will often mean shifting from verbatim, descriptive codes to more conceptual, abstract and analytical codes. Keep evaluating, adjusting, altering and modifying your codes and coding practices. Go back over what you have already done and recode it with your new arguments or ideas.
  5. Always focus on what you feel are the key codes and the relationship between them. Key codes should have a direct bearing on the purpose of your research. Make some judgements about what you feel are the central codes and focus on them. Try to look for links, patterns, associations, arrangements, relationships, sequences, etc.
  6. Always make notes of the thinking behind why you have done what you have done. Make notes on ideas that emerge before or while you are engaged in coding or reading work related to your research project. Make some diagrams, tables, maps or models that enable you to conceptualise, witness, generate and show connections and relationships between codes.
  7. Always return to the field with the knowledge you have already gained in mind and let this knowledge modify, guide or shape the data you want to collect next. This should enable you to analyse the data that you have collected and sorted, to do some deconstruction and to create new knowledge. Creating new knowledge requires deep thinking and thorough background knowledge of the topic of your research.
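To make the review-and-refine step concrete, here is a small illustrative sketch; the codes and extracts are invented for the example and are not drawn from any real project. It accumulates all extracts filed under each code so that their coherence can be inspected, and then merges two verbatim codes into a more abstract, analytical one.

```python
from collections import defaultdict

# Hypothetical coded extracts from field notes or interview transcripts.
extracts = [
    ("felt ignored by management", "ignored"),
    ("nobody asked for my opinion", "not consulted"),
    ("my suggestions were never answered", "ignored"),
]

# Accumulate all extracts under each code so their coherence can be reviewed.
by_code = defaultdict(list)
for text, code in extracts:
    by_code[code].append(text)

# Shift from verbatim, descriptive codes to a more abstract, analytical one.
merge_into = {"ignored": "lack of voice", "not consulted": "lack of voice"}
recoded = defaultdict(list)
for code, texts in by_code.items():
    recoded[merge_into.get(code, code)].extend(texts)

print(dict(recoded))
# {'lack of voice': ['felt ignored by management',
#                    'my suggestions were never answered',
#                    'nobody asked for my opinion']}
```

Dedicated analysis software performs the same kind of grouping and recoding, only with far richer functions for memoranda, quotations and families of documents.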

How data analysis should be approached

When undertaking data analysis, you need to be prepared to be led down novel and unexpected paths, to be open to new interpretations and to be fascinated. Potential ideas can emerge from any quarter – from your reading, your knowledge of the field, engagements with your data, conversations with colleagues or people whom you interview. You need to be open-minded enough to change your preconceived ideas and to let the information change your mind. You also need to listen to and value your intuition. Most importantly, you need to develop the ability to come to logical conclusions from the information at your disposal.

Do not try to twist the conclusions that you draw from the data you gather to suit your opinion or preferences. Your computer allows you to return to what you previously wrote and to change it. This will often be necessary if you are to develop scientifically founded new knowledge. Your conclusions and ideas might change repeatedly as you collect new information.

Do not be frustrated if, as you progress with your research, you find that the codes on which you decided initially no longer work. Again, you can easily change your codes on computer or cards. You must do this in the interests of conducting scientific research. You will typically allocate primary codes to the issues that you regard as important and sub-codes to less important data or further elaborations on your main arguments. You can change this and change your coding structure if necessary.

The process of coding requires skill, confidence and a measure of diligence. Pre-coding is advisable, but you still need to accept that the codes that you decided upon in advance will probably change as you work through the data that you collect.

At some point you need to start engaging in a more systematic style of coding. You can work on paper when starting with the coding, although there is no reason why you can’t start to work on computer from the word go, seeing that you can change your codes on computer at any time with relative ease. Besides, you can make backups of your coding on computer. This can be valuable if, at some stage, you discover that your initial or earlier codes work better than the new ones after all. You can then return to a previous backup without having to redo all the work that you already did.

You need to understand how the computer software that you are using works and what it can provide you with. Different software packages have different purposes and ways in which codes can be used. It serves no purpose to claim to have used particular software if you do not really understand how it works, how you should use it and what it can offer you. Previous students will not always be able to teach you the software, because most software packages are updated and rewritten all the time. Rather do a formal course on the latest version of the software that you wish to use.

Summary

Most data analysis methods share a common focus.

Data analysis is simplified by coding the data and making use of dedicated computer software.

You can also use coding with simple data analysis methods, for example Microsoft Excel or a card system.

The fundamentals of data analysis apply to qualitative and quantitative research.

You should code data by focusing on the purpose of your research and the research problem statement, question or hypothesis.

The following are the fundamentals of data analysis through coding: Always:

  1. Start by engaging in close, detailed reading of a sample of your data.
  2. Read and systematically code your collection of data.
  3. Reflect on why you have done what you have done.
  4. Review and refine your codes and coding practices.
  5. Focus on what you feel are the key codes and the relationship between them.
  6. Make notes of the thinking behind why you have done what you have done.
  7. And always return to the field with the knowledge you have already gained in mind and let this knowledge modify, guide or shape the data you want to collect next.

In addition to the fundamentals, you should also adhere to the following requirements for the analysis and coding of data:

  1. Be flexible and keep an open mind.
  2. Learn how to come to objective and logical conclusions from the data that you analyse.
  3. Change your codes at any stage during your research if it becomes necessary.
  4. Develop your data analysis coding skills, confidence and diligence.
  5. Acquire a good understanding of the computer software that you will use for data analysis.
  6. Work systematically.

Close

You will use the fundamentals of data analysis and coding with most data analysis methods.

Almost all recent dedicated data analysis software uses coding.

I will discuss the following analysis methods in my next seven or eight videos:

  1. Analytical induction.
  2. Biographical analysis.
  3. Comparative analysis.
  4. Content analysis.
  5. Conversation and discourse analysis.
  6. Elementary analysis.
  7. Ethnographic analysis.
  8. Inductive thematic analysis (ITA).
  9. Narrative analysis.
  10. Retrospective analysis.
  11. Schema analysis.
  12. Situational analysis.
  13. Textual analysis.
  14. Thematic analysis.

ARTICLE 11: The Table of Contents of your Thesis or Dissertation

Written by Dr. Hannes Nel

Introduction

I discuss the layout of a table of contents for a thesis or dissertation in this article. In the beginning, the table of contents will be more a structure for a table of contents than a final one.

You will probably have decided which chapters to include in your report, but you will have only one or two lower-level headings. Also, you might need to add a small number of chapters as you progress with your research.

The table of contents should follow directly after the authentication of your work.

Once you have written your thesis or dissertation, you will probably delete the provisional structure for a table of contents and replace it with the chapters, headings and sub-headings of your final thesis or dissertation. Keep in mind that your table of contents must not differ from the chapters, headings and sub-headings in your thesis or dissertation.

At the end of your table of contents, you should also have the references that you consulted, a list of figures and a list of tables.

Universities are mostly flexible about the structure of a table of contents for a thesis at the master’s degree level. For the dissertation for a Ph. D., however, there are certain chapters and topics that you must cover.

Also, keep in mind that the thesis for a master’s degree is a good opportunity to practice for when you will write the dissertation for a Ph. D. It will not be wrong to follow the structure of a dissertation when writing the report on the master’s degree level.

Here is a list of the most basic headings that most universities will expect you to discuss in your dissertation:

  1. Title page.
  2. Confirmation of authenticity.
  3. Acknowledgments.
  4. Abstract.
  5. Chapter 1: Contextualising the Study.
  6. Chapter 2: Research Methodology.
  7. Chapter 3: Theoretical Background.
  8. Chapter 4: Data Collection and Analysis.
  9. Chapter 5: Synthesis and Evaluation of the Study.
  10. References.
  11. List of Figures.
  12. List of Tables.

The title page. I already discussed the title page, sometimes also called the cover page, in a previous article (article 5). Just take note that this is where it will fit into your thesis or dissertation.

Confirmation of authenticity. You will be required by the university to confirm that the contents of your thesis or dissertation are your own. Most universities, if not all, use a standard format for such confirmation.

Here is an example:

“I, (your full names and surname) declare that (the title of your thesis or dissertation) is my own work and that all the sources that I have used or quoted have been indicated and acknowledged by means of complete references.

(Your signature)

…………………………………”

Acknowledgments. Acknowledgments are a matter of choice.

However, it is only good manners to thank people who helped you with your research.

The acknowledgment has real value for your research, though.

  1. It shows the readers of your report that you conducted your research in a systematic, ethical and disciplined manner.
  2. It shows that you understand that research should not be done by one person only.

Abstract. The abstract is a summary of your thesis or dissertation. It is usually mandatory for a dissertation, although not all universities will require you to write an abstract for a thesis. The abstract must be short – you will be required to summarise your thesis or dissertation in three or four pages.

Some readers, for example, your sponsors, might read only the abstract. Therefore, you will need to ensure that you cover all the questions that they might have.

Chapter 1: Contextualising the Study. Researchers making use of technicist research methods often claim that their findings and the principles and concepts that they develop are timeless and that they apply independently of context.

Even they, however, need to define the range and scope of their research – they will not be able to include the entire world, let alone the entire universe, in their research projects.

Chapter 2: Research Methodology. In this chapter you will discuss:

  1. The research approach that you will use.
  2. The research methods that you will use.
  3. The paradigmatic approaches that you will follow.
  4. The data collection methods that you will use.
  5. How you will analyse the data that you collect.

Chapter 3: Theoretical Background. You will probably need to do a literature study as a foundation for your research. It would be rather difficult to jump into data collection and the analysis of data if you do not know what you should be looking for.

Chapter 4: Data Collection and Analysis. You already discussed the data collection and analysis methods that you will use in Chapter 2 of your dissertation. Here you will need to discuss the actual processes of data collection and analysis. This is a critically important chapter and might even be broken down into two or three separate chapters. It is from the contents of this chapter that you will come to conclusions and findings from which to develop a solution to the problem that you investigated.

Chapter 5: Synthesis and Evaluation of the Study. Chapter 5 will normally be your final chapter. This is where you will describe your solution. Depending on the purpose of your research and the research approach and methods that you used, you might develop a model, new knowledge, new methods to combat oil pollution at sea, new medication, and many more.

References. All sources that you consulted must be acknowledged in your thesis or dissertation.

Universities invariably have prescriptions in this regard, and you should abide by them.

I will discuss referencing formats in a future article.

List of Figures and List of Tables. The lists of figures and tables follow directly after the table of contents.

One can regard them as part of the table of contents.

The figure and table numbers in the lists must be the same as in the content of the thesis or dissertation.

Different universities have different requirements for the layout and format of the lists of figures and tables, although most are flexible in this respect.

Summary

Your provisional table of contents will probably be just a structure, consisting of chapters with no lower-level headings.

Your actual and final table of contents must align exactly with the contents of your thesis or dissertation.

I will discuss the abstract, chapters, references, lists of figures and tables in more detail in separate articles following on this one.

Good luck with your studies and stay healthy and safe.
