ARTICLE 95: Research Methods for Ph. D. and Master’s Degree Studies: Methods for Organising and Analysing Data Part 1 of 2 Parts

Written by Dr. Hannes Nel

Data needs to be organised before it can be analysed.

Depending on whether a qualitative or quantitative approach is followed, the data needs to be arranged in a logical sequence or quantified.

This can be done by quantifying, sequencing, coding or memoing the data.

I discuss quantifying, sequencing and coding data in this article.

I will discuss memoing data in my second video on methods for organising and analysing data.

Quantifying data. Most data analysis today is conducted with computers, ranging from large mainframe computers to small personal laptops. Many computer programs are dedicated to analysing social science data, and it would be worth your while to obtain and learn to use such software if you need to write a thesis or dissertation. Even if you do not follow an exclusively quantitative research methodology, you might need to interpret some statistics, or you might use some quantitative methods to enhance, support or corroborate your qualitative findings. However, if your research is largely qualitative, you will probably need little more than office software.

Almost all research software requires some form of coding. Coding can differ substantially from one software program to the next, so you will need to find out exactly how it works before you purchase the software. Your study leader will probably know which software will be the most suitable for your research and can advise you on this. You should only quantify data if statistical analysis is necessary, so do not do this unless you know that you will need it in your thesis or dissertation.

Many people are intimidated by empirical research because they feel uncomfortable with mathematics and statistics. And indeed, many research reports are filled with unspecified computations. The role of statistics in research is quite important, but unless you write an assignment or thesis on statistics or mathematics, you will not be assessed on your statistical or mathematical proficiency. That is why most universities, public and private, offer statistical services, so use them. There is also nothing wrong with purchasing dedicated statistical software, although it might be necessary to do a course on the software before you will be able to utilise it properly.

Sequencing the data. Many researchers are of the opinion that organising the data in a specific sequence offers the clearest available picture of the logic of causal analysis in research. This is called the elaboration model. This method, typically applied through contingency tables, portrays the logical process of scientific analysis.

When collecting material for interpretive analysis, you experience events, or the things people say, in a linear, chronological order. When you then immerse yourself in field notes or transcripts, the material is again viewed in a linear sequence. This sequence can be broken down by inducing themes and coding concepts, so that events or remarks that were far apart in a document, or perhaps even in different documents, are brought close together. This gives you a fresh view of the data and allows you to carefully compare sections of text that appear to belong together. At this stage, you are likely to find that there are all sorts of ways in which extracts that you grouped together under a single theme differ, or that all kinds of sub-issues and themes come to light.

Exploring themes more closely in this way is called elaboration. The purpose is to capture the finer nuances of meaning not captured by your original, possibly crude, coding system. This is also an opportunity to revise the coding system, either in small ways or drastically. If you use software, it might even be necessary to start your coding all over again. This can be extremely time-consuming, but every time you start over, you end up with a much better structured research report.

Coding. In most qualitative research, the original text is a set of field notes, data obtained through literature study, interviews, and focus groups. One of the first steps that you will need to take before studying and analysing data is to code the information. You can use cards for this, but dedicated computer software can save you time, effort and costs. Codes are typically short pieces of text referencing other pieces of text, graphical, audio, or video data. From a methodological standpoint, codes serve a variety of purposes. They capture meaning in the data. They also serve as tools for finding specific occurrences in the data that cannot be found by simple text-based search techniques. Codes also help you organise and structure the data that you collected.

Their main purpose is to classify many textual or other data units in such a manner that data that belong together can be grouped for easy analysis and structuring. One can, perhaps, think of coding as “indexing” your data. You can also see it as a way to mark keywords so that you can find, retrieve and group them more easily at a later stage. A code should be kept short and should not be long-winded.
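The “indexing” idea can be illustrated with a short Python sketch. The codes and interview segments below are invented for illustration; dedicated packages store far more metadata, but the underlying structure is essentially a mapping from code to data segments:

```python
from collections import defaultdict

# A code "indexes" the data: it maps a short label to the segments
# of text it was applied to (codes and segments invented here).
codebook = defaultdict(list)

def apply_code(code, segment):
    """Attach a code to a segment of raw data."""
    codebook[code].append(segment)

apply_code("workload", "I barely have time to see my supervisor.")
apply_code("workload", "The admin takes up most of my week.")
apply_code("funding", "My bursary only covers tuition.")

# Retrieval: all segments grouped under one code, ready for comparison.
print(len(codebook["workload"]))  # → 2
```

Grouping segments under a shared label is what later makes it possible to compare, at a glance, everything that was said about a single theme.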

Codes can also be used to classify data at different levels of abstraction, to group sets of related information units together for the purpose of comparison. This is what you would often use to consider and compare related arguments and to draw conclusions that can motivate new knowledge. Dedicated computer software does not create new knowledge; it only helps you as the researcher to structure existing knowledge and experiences in such a manner that it will be easier for you to think creatively, that is, to create new knowledge.

Formal coding will be necessary if you make use of dedicated research software. Even if you do not use research software, you will probably need a method of coding to arrange your data according to the structure of your thesis or dissertation. Your original data will probably include contextual information, such as the time, date and place where the data was collected.

A further purpose of coding data is to move to a higher conceptual level. The codes will inevitably represent the meanings that you infer from the original data, thereby moving you closer to the solution of your problem statement, or to the confirmation or rejection of your null hypothesis. By coding data, you will, of course, rearrange the data that you collected under different headings representing steps in the research process.

Five coding procedures are popularly used: open coding, in vivo coding, coding by list, quick coding and free coding.

With most qualitative research software, you can create codes first and then link them to sections in your data. Creating new codes is called open coding. The nature of the initial codes, which can be referred to as Level 1 codes or open codes, can vary and might change as you progress with your research. You should give a name to each new code that you open, and you can usually create one or more codes in a single step. These codes can stick closely to the original data, perhaps even reusing the exact words in the original data. Such codes can be deduced from research questions. In vivo coding is mostly used for this purpose.

In vivo coding means creating a code for selected text as and when you come across text, or just a word in the text, that can and should serve as a code. This would normally be a word or short piece of text that would probably appear in other pieces of data that should be linked and grouped with the data in which you identified the code.

If you know where you are going with your study, you will probably create codes first (up front), then link them to sections of data. This would be coding by list. Coding by list allows you to select existing codes from a code list that you prepared in advance. You would typically select one or more codes associated with the current data selection.

You can also create codes as you work through your data, which would then be quick coding. In the case of quick coding, you will continue with the selected code that you are working with. This is an efficient method for the consecutive coding of segments using the most recently used code.

You can also create codes that have not yet been used for coding or for creating networks. Such codes are called free codes, and they are a form of quick coding, although they can be prepared in advance. Reasons for creating free codes include:

  1. To prepare a stock of predefined codes in the framework of a given theory. This is especially useful in the context of teamwork when creating a base project.
  2. To code in a “top-down” (or deductive) way with all necessary concepts already at hand. This complements the “bottom-up” (or inductive) open coding stage, in which concepts emerge from the data.
  3. To create codes that come to mind during normal coding work and that cannot be applied to the current segment but will be useful later.
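The five procedures can be sketched as operations on a simple codebook. This is an illustrative Python sketch of the concepts only, not how any particular package implements them; all code names and data segments are invented:

```python
codebook = {}      # code name -> list of coded segments
last_code = None   # remembers the most recently used code (for quick coding)

def open_code(name, segment=None):
    """Open coding: create a new code, optionally linking a first segment."""
    global last_code
    codebook.setdefault(name, [])
    if segment is not None:
        codebook[name].append(segment)
    last_code = name

def code_by_list(name, segment):
    """Coding by list: apply a code prepared in advance."""
    global last_code
    if name not in codebook:
        raise KeyError(f"{name!r} is not in the prepared code list")
    codebook[name].append(segment)
    last_code = name

def quick_code(segment):
    """Quick coding: reuse the most recently used code for the next segment."""
    codebook[last_code].append(segment)

def in_vivo_code(segment, keyword):
    """In vivo coding: the code is a word taken verbatim from the data."""
    open_code(keyword, segment)

def free_code(name):
    """Free coding: create a code now, to be applied to data later."""
    codebook.setdefault(name, [])

# Example run (all names and segments invented):
open_code("anxiety", "I worry about my proposal.")
quick_code("The deadline keeps me awake.")   # filed under "anxiety"
free_code("motivation")                      # prepared, not yet used
print(len(codebook["anxiety"]))              # → 2
```

The point of the sketch is the distinction between the procedures: open coding creates, coding by list reuses a prepared list, quick coding repeats the last code, in vivo coding lifts the code from the data itself, and free coding prepares codes for later.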

It will be easier to code data if you already have a good idea of what you are trying to achieve with your research. Sometimes the data will actually “steer” you towards codes that you did not even think of in the beginning. This is typical of a grounded theory approach, although you should always keep an open mind about your research, regardless of which approach you follow. Coding also helps you to develop a schematic diagram of the structure of your thesis or dissertation. This can be based on your initial study proposal. A mindmap can, for example, be used to structure your research process and to identify initial codes to start with.

A code may contain more than a single word but should be concise. There should be a comment area on your screen that you can use to write a definition for each code, if you need one. As you progress in doing the first level coding, you may start to understand how your data might relate to broader conceptual issues. Some of your field experiences may in fact be sufficiently similar so that you might be able to group different coded data together on a higher conceptual level. Your coding has then proceeded to a higher set of codes, referred to as Level 2 or category codes.

After a code has been created, it appears as a new entry in several locations (drop-down list, code manager). In this respect the following are important to remember:

  1. Groundedness: Groundedness refers to the number of quotations associated with the code. Large numbers indicate strong evidence already found for this code.
  2. Density: The number of codes connected to this code is indicated as the density. Large numbers can be interpreted as a high degree of theoretical density.
  3. Comment: The tilde character “~” can, as an example, be used to flag commented codes. It is not used for codes only but for all commented objects.
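Groundedness and density are both simple counts, which a rough Python sketch makes concrete (the quotation and link data below are invented for illustration):

```python
# Quotations linked to each code: groundedness is simply the
# number of quotations a code carries.
quotations = {
    "stress":  ["q1", "q2", "q3"],
    "support": ["q4"],
}

# Code-to-code links (for example, from a network view): density is
# the number of other codes a code is connected to.
links = {
    "stress":  {"support", "workload"},
    "support": {"stress"},
}

def groundedness(code):
    """How much evidence has been found for this code."""
    return len(quotations.get(code, []))

def density(code):
    """How theoretically connected this code is."""
    return len(links.get(code, set()))

print(groundedness("stress"), density("stress"))  # → 3 2
```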

It is not only text that can be coded: you can also code graphic documents, audio and video material. There are many other ways in which codes can be utilised; for example, they can be sorted, modified, renamed, deleted, merged and, of course, reported.

Axial coding. Axial coding is the process of putting data back together after it has been restructured by means of open coding. Open coding allows you to select data that belong together (under a certain code or sub-code), taken from a variety of sources containing the original or primary data. Categories of data are thus systematically developed and linked with subcategories. You can then develop a new narrative through a process of reconstruction. The new narrative might apply to a different context and should be aligned with the purpose of your research.
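The category-to-subcategory linking can be sketched as a simple lookup structure. The codes and categories below are invented; in practice, deciding which sub-codes belong to which category is an analytical judgement, not a mechanical step:

```python
# Axial coding relates open codes back together: each category
# is linked to the sub-codes developed during open coding.
categories = {
    "funding":     ["tuition cost", "bursary rules"],
    "supervision": ["supervisor access", "lab time"],
}

def category_of(code):
    """Find the category a sub-code was linked to during axial coding."""
    for category, subcodes in categories.items():
        if code in subcodes:
            return category
    return None

print(category_of("bursary rules"))  # → funding
```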

The selected data can typically relate to conditions, strategies or consequences. Data relating to a condition or strategy should address the conditions that lead to the achievement of the purpose of the study. The purpose of the study will always be to solve a problem statement or question, or to prove or disprove a null hypothesis. Consequential data include all outcomes of action or interaction.

Selective coding. Selective coding refers to the process of selecting a core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development. Categories are, thus, integrated and refined. The core category would be the central phenomenon to which all the other categories are linked. To use a romantic example, in a novel you will identify the plot first, then the storyline, which you should analyse to identify the elements of the storyline that relate to the plot. From this you should be able to deduce lessons learned or a moral for the story.

Summary

Data is mostly organised by making use of dedicated computer programmes.

Most such computer programmes require some form of coding.

Data can be sequenced by following an elaboration model.

Contingency tables are mostly used to achieve logic in scientific analysis.

Data is often analysed in a linear, chronological order.

Codes are typically short pieces of text referencing other pieces of text, graphical, audio or video data.

Codes:

  1. Capture meaning.
  2. Serve as tools for finding specific occurrences in the data.
  3. Help you to organise and structure the data.
  4. Classify textual or other data units into related groups and at different levels of abstraction.

Dedicated computer software does not create new knowledge.

Five coding procedures are popularly used.

They are open coding, in vivo coding, coding by list, quick coding and free coding.

Open coding means creating new codes.

In vivo coding means creating a code for selected text as and when you come across text, or just a word in the text, that can and should serve as a code.

Coding by list is used when you know where you are going with your study so that you can create the codes even before collecting data.

Quick coding means creating codes as you work through your data.

Free codes are codes that have been created but not yet used for coding. They can be prepared in advance or created during normal coding work.

To the five coding procedures should be added axial coding and selective coding.

Axial coding is the process of putting data back together after it has been restructured by means of open coding.

Selective coding refers to the process of selecting a core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development.

You should always keep an open mind about your research and the codes that you create.

Close

If what I discussed here sounds confusing and alien, then it is probably because of what we discussed under schema analysis in my previous video.

It is unlikely that the level of language used here is beyond you.

If that were the case, you would not have watched this video.

No doubt you will understand everything if you watch this video again after having tried out one or two of the computer programmes that deal with especially qualitative research.

Enjoy your studies.

Thank you.


ARTICLE 87: Research Methods for Ph. D. and Master’s Degree Studies: Data Analysis Through Coding

Written by Dr. Hannes Nel

Introduction

Hello, I am Hannes Nel and I introduce the data analysis process and ways in which to analyse data in this article. 

You need to know what the different data analysis methods mean if you are to conduct professional academic research. There are a range of approaches to data analysis and they share a common focus. Initially most of them focus on a close reading and description of the collected data. Over time, they seek to explore, discover, and generate connections and patterns underlying the data.

You would probably need to code the data that you collect before you will be able to link it to the problem statement, problem question or hypothesis for your research. Making use of dedicated computer software would be the most efficient way to do this. However, even if you arrange and structure your data by means of more basic computer software, such as Microsoft Excel or, reaching back to the previous century, cards on which you write information, you will still be coding the data.

The fundamentals of data analysis

The way you collect, code and analyse data would largely depend on the purpose of your research. Quantitative and qualitative data analysis are different in many ways. However, the fundamentals of data analysis can mostly be applied to both. In the case of quantitative research, the principles of natural science and the tenets of mathematics can often be added to the fundamentals. Therefore, the fundamentals that I discuss here refer mostly to qualitative research and the narrative parts of quantitative research reports. For our purposes a research report can be a thesis or dissertation.

You should “instinctively” recognise possible codes and groupings by just focusing on the research problem statement or hypothesis. Even so, the following hints, or fundamentals, on collecting and analysing data remain more or less the same, regardless of which data analysis method and dedicated computer software you may use:

  1. Always start by engaging in close, detailed reading of a sample of your data. Close, detailed reading means looking for key, essential, striking, odd, interesting, repetitive things people or texts say or do. Try to identify a pattern, make notes, jot down remarks, etc.
  2. Always read and systematically code your collection of data. Code key, essential, striking, odd, linked or related and interesting things that are relevant to your research topic. You should use the same code for events, concepts or phenomena that are repeated many times or are similar in terms of one or more characteristics. These codes can be drawn from ideas emerging from your close, detailed reading of your collection of data, as well as from your prior reading of empirical and theoretical works. Review your prior coding practices with each new application of a code and see if what you want to code fits what has gone before. Use the code if it is still relevant or create a new code if the old one is no longer of value for your purposes. You may want to modify your understanding of a code if it can still be of value, even if the original reason why you adopted it changed or has diminished in significance.
  3. Always reflect on why you have done what you have done. Prepare a document that lists your codes. It might be useful to give some key examples, explain what you are trying to get at, what sort of things should go together under specific codes. Dedicated computer software offers you a multitude of additional functions with which you can sort, arrange, and manipulate objects, concepts, events or phenomena, for example memoranda, quotations, super codes, families, images, etc.

Memoranda can be separate “objects” in their own right that can be linked to any other object.

Quotations are passages of text which have been selected to become free quotations.

Super codes can be queries that typically consist of several combined codes.

And families are clusters of primary documents (PDs), images that belong together, etc.

  4. Always review and refine your codes and coding practices. For each code, accumulate all the data to which you gave the code. Ask yourself whether the data and ideas collected under this code are coherent. Also ask yourself what the key properties and dimensions of all the data collected under the code are. Try to combine your initial codes, look for links between them, look for repetitions and exceptions, and try to reduce them to key ones. This will often mean shifting from verbatim, descriptive codes to more conceptual, abstract and analytical codes. Keep evaluating, adjusting, altering and modifying your codes and coding practices. Go back over what you have already done and recode it with your new arguments or ideas.
  5. Always focus on what you feel are the key codes and the relationship between them. Key codes should have a direct bearing on the purpose of your research. Make some judgements about what you feel are the central codes and focus on them. Try to look for links, patterns, associations, arrangements, relationships, sequences, etc.
  6. Always make notes of the thinking behind why you have done what you have done. Make notes on ideas that emerge before or while you are engaged in coding or reading work related to your research project. Make some diagrams, tables, maps and models that enable you to conceptualise, witness, generate and show connections and relationships between codes.
  7. Always return to the field with the knowledge you have already gained in mind and let this knowledge modify, guide or shape the data you want to collect next. This should enable you to analyse the data that you collected and sorted, to do some deconstruction and to create new knowledge. Creating new knowledge requires deep thinking and thorough background knowledge of the topic of your research.
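Reviewing and refining often means merging several verbatim, descriptive codes into one more abstract, analytical code. This can be sketched as a simple merge operation on a codebook (all code names and segment labels below are invented):

```python
# Verbatim codes accumulated during early coding (invented data).
codebook = {
    "too many forms": ["seg1", "seg2"],
    "slow approvals": ["seg3"],
    "admin burden":   [],          # the broader, more analytical code
}

def merge_codes(codebook, sources, target):
    """Fold the data from several narrow codes into a single broader code."""
    for source in sources:
        codebook[target].extend(codebook.pop(source))
    return codebook

merge_codes(codebook, ["too many forms", "slow approvals"], "admin burden")
print(sorted(codebook["admin burden"]))  # → ['seg1', 'seg2', 'seg3']
```

Because every segment keeps its link to a code, nothing is lost in the merge: the data can always be regrouped if a later review shows the broader code was a mistake.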

How data analysis should be approached

When undertaking data analysis, you need to be prepared to be led down novel and unexpected paths, to be open to new interpretations and to be fascinated. Potential ideas can emerge from any quarter – from your reading, your knowledge of the field, engagements with your data, conversations with colleagues or people whom you interview. You need to be open-minded enough to change your preconceived ideas and to let the information change your mind. You also need to listen to and value your intuition. Most importantly, you need to develop the ability to come to logical conclusions from the information at your disposal.

Do not try to twist the conclusions that you draw from the data to suit your opinions or preferences. Your computer allows you to return to what you previously wrote and to change it. This will often be necessary if you are to develop scientifically founded new knowledge. Your conclusions and ideas might change repeatedly as you collect new information.

Do not be frustrated if, as you progress with your research, you find that the codes on which you decided initially no longer work. Again, you can easily change your codes on computer or cards. You must do this in the interests of conducting scientific research. You will typically allocate primary codes to the issues that you regard as important and sub-codes to less important data or further elaborations on your main arguments. You can change this and change your coding structure if necessary.

The process of coding requires skill, confidence and a measure of diligence. Pre-coding is advisable, but you still need to accept that the codes that you decided upon in advance will probably change as you work through the data that you collect.

At some point you need to start engaging in a more systematic style of coding. You can work on paper when starting with the coding, although there is no reason why you can’t start to work on computer from the word go, seeing that you can change your codes on computer at any time with relative ease. Besides, you can make backups of your coding on computer. This can be valuable if, at some stage, you discover that your initial or earlier codes work better than the new ones after all. You can then return to a previous backup without having to redo all the work that you already did.

You need to understand how the computer software that you are using works and what it can provide you with. Different software has different purposes and ways in which codes can be used. It serves no purpose to claim to have used particular software if you do not really understand how it works, how you should use it and what it can offer you. Previous students will not always be able to teach you the software, because most software is rewritten all the time. Rather do a formal course on the latest version of the software that you wish to use.

Summary

Most data analysis methods share a common focus.

Data analysis is simplified by coding the data and making use of dedicated computer software.

You can also use coding with simple data analysis methods, for example Microsoft Excel or a card system.

The fundamentals of data analysis apply to qualitative and quantitative research.

You should code data by focusing on the purpose of your research and the research problem statement, question or hypothesis.

The following are the fundamentals of data analysis through coding: Always:

  1. Start by engaging in close, detailed reading of a sample of your data.
  2. Read and systematically code your collection of data.
  3. Reflect on why you have done what you have done.
  4. Review and refine your codes and coding practices.
  5. Focus on what you feel are the key codes and the relationship between them.
  6. Make notes of the thinking behind why you have done what you have done.
  7. And always return to the field with the knowledge you have already gained in mind and let this knowledge modify, guide or shape the data you want to collect next.

In addition to the fundamentals, you should also adhere to the following requirements for the analysis and coding of data:

  1. Be flexible and keep an open mind.
  2. Learn how to come to objective and logical conclusions from the data that you analyse.
  3. Change your codes at any stage during your research if it becomes necessary.
  4. Develop your data analysis coding skills, confidence and diligence.
  5. Acquire a good understanding of the computer software that you will use for data analysis.
  6. Work systematically.

Close

You will use the fundamentals of data analysis and coding with most data analysis methods.

Almost all recent dedicated data analysis software uses coding.

I will discuss the following analysis methods in my next seven or eight videos:

  1. Analytical induction.
  2. Biographical analysis.
  3. Comparative analysis.
  4. Content analysis.
  5. Conversation and discourse analysis.
  6. Elementary analysis.
  7. Ethnographic analysis.
  8. Inductive thematic analysis (ITA).
  9. Narrative analysis.
  10. Retrospective analysis.
  11. Schema analysis.
  12. Situational analysis.
  13. Textual analysis.
  14. Thematic analysis.

ARTICLE 30: Research Methods for Ph. D. and Master’s Degree Studies: Grounded Theory

Written by Dr Hannes Nel

Introduction

Hello, I am Hannes Nel and I will discuss Grounded Theory in this post.

What is grounded theory?

Grounded theory is a type of inductive thematic analysis (ITA).

It was developed by Glaser and Strauss in the 1960s.

Glaser and Strauss supported symbolic interactionism as a philosophical perspective.

How is grounded theory used?

Grounded theory uses inductive reasoning to generate the theoretical understanding of research by grounding the theory in the data that the researcher collected.

It is a highly systematic method for mostly studying social experiences, interactions and structures.

Grounded theory discovers, develops and provisionally verifies phenomena.

This means that the data originate in the framework for the study and should deliver logical and relevant conclusions.

Integrating grounded theory with other research and data collection methods

It is almost always necessary to use grounded theory in conjunction with one or more other research methods.

Any data collection method may be used in conjunction with grounded theory methods, bearing in mind that data collection should build on a naturalistic, interpretive philosophy.

Grounded theory methods specify analytical strategies, not data collection methods.

Grounded theory:

  1. Is a qualitative research approach.
  2. Requires an open mind, objectivity and ethical and responsible analysis of data.
  3. Is especially popular amongst those who study humanistic sciences.
  4. Can also be used for the study of non-human phenomena.

The purpose of grounded theory

The primary purpose of grounded theory is to generate theory from observations of real life.

Grounded theory aims at the discovery of regularities, the identification of categories or elements and the establishment of their connections.

Theoretical models and new theoretical concepts and arguments should be created and continuously revised as you collect and analyse data.

Grounded theory holds as a basic view that qualitative researchers do not go around testing an existing body of knowledge, but rather that they build new theory by allowing their data collection to steer their thoughts and conclusions into the unknown.

The grounded theory process

Grounded theory research should be done in a specific and well-defined context.

The research should be grounded in social reality and not be just an exercise in theorizing.

It uses a typical research process of data collection, data analysis, coming to conclusions, and formulating findings.

Findings should be transformable into formal theoretical models.

The process of collecting data is a prerequisite for analysis, while theory development should result from the analysis.

Researchers sometimes think that grounded theory is about the research process, especially data collection and analysis.

Although data collection and analysis are important research activities, the essence of grounded theory does not lie in the research process but rather in the attitude of the researcher towards the data and the purpose of the research.

It requires that each piece of the data is systematically compared with other data on the same or related issue or topic.

You should not ignore small units of text.

They just might have the potential to improve current theory and practice.

At the same time, you should not waste time with data that is clearly of no significance, because analysis is a time-consuming activity.

You can compare existing data with other existing data or with new data.

Grounded theory is based on the subjective experiences of humans.

You may also use your own experiences to understand the experiences of others.

Guard against just adopting the ideas, perceptions or models of others.

If you do this, you run the risk of just packaging old, existing knowledge differently.

Verification is a natural element of any scientific research because it strengthens the authenticity and validity of the findings and provides you with a measure of security.

Data collected should not be over-verified, because grounded theory epistemology leans strongly towards the generation of new theory rather than the analysis of existing theory.

Deconstruction can be used to lend a good measure of authenticity to the data.

Do not neglect to acknowledge the work of other researchers that you consulted and quoted.

Computer software

You can use dedicated computer programmes to arrange, compare and analyse the data that you collected.

ATLAS.ti is an example of software that you can use.

There are a good number of others. I mention ATLAS.ti only because it is the one that I used and am familiar with.

You can easily find suitable software by just Googling for them.

Most dedicated computer programmes make use of coding.

Coding can be described as a sophisticated form of notecards like the ones that we used many decades ago.

You will create codes for salient data with most of the available relevant software.

You can also write explanatory notes in the form of memorandums.

The programme groups related codes and memorandums together.

This enables you to get a clear and holistic picture of concepts and arguments so that you can more easily come to conclusions and findings.

Your findings should be or lead to new knowledge, theories and models.

From the codes and memorandums, new theory and new theoretical models can be discovered through inductive reasoning.

You will, of course, not be required to develop new knowledge or theories at master’s degree level, but you will need to show that you understand and can apply existing knowledge and theories.

Inductive reasoning entails systematic data collection and analysis which leads to discovery, development and verification.

Most importantly, dedicated programmes substantially simplify the process of writing your research report.

Grounded theory methodology need not be limited to computer analysis.

You can, for example, still use the old notecard system or you can develop your own system on a computer.

The value of grounded theory

Grounded theory enables you:

  1. To step back and critically analyse situations.
  2. To recognise the tendency towards bias.
  3. To think abstractly.
  4. To be flexible and open to helpful criticism.
  5. To be sensitive to the words and actions of respondents.
  6. To adopt a sense of absorption and devotion to the work process.

Utilising grounded theory for research should enable you to see beyond the ordinary and to arrive at new understandings of social life.

The most important value of grounded theory is that it enables you to generate theory and to ground that theory in data.

Paradigms that can be used with grounded theory

Before we look at the paradigms that can be used with grounded theory: do not be too concerned if, at this stage, you do not know the paradigms.

I will discuss 29 such paradigms in later posts.

I suggest that you then return to my earlier posts on research methods to get the bigger picture.

Any paradigmatic approach can be used with grounded theory.

Mostly, however, grounded theory displays elements of post-modernism as well as symbolic interactionism.

Post-modernism lends itself to the achievement of formal theory while symbolic interactionism implies that the study is grounded in a specific empirical world.

As already mentioned, grounded theory requires elements of interpretivism as well.

There are two versions of grounded theory, namely objectivist and constructivist grounded theory.

Objectivist grounded theory is rooted in a positivist paradigmatic approach.

The objectivist viewpoint claims that it is possible to discover objective truth.

The data already exists, and you will need to discover theory from it.

Constructivist grounded theory has its roots in an interpretivist paradigmatic approach.

The constructivist viewpoint rejects the objectivist viewpoint, contending that there is no objective truth waiting to be discovered.

Constructivist grounded theory assumes that truth exists only through interaction with the realities of the world.

Meaning is, therefore, constructed rather than discovered.

Summary

The following are the elements of grounded theory:

  1. The purpose of grounded theory is to build new theory.
  2. Current theory or observation can serve as the basis for new theory.
  3. Grounded theory deals with how data and phenomena are interpreted and used rather than how they are collected.
  4. You should systematically review units of data as they become available.
  5. Any research method should utilise the philosophy behind grounded theory, meaning that any researcher should be open-minded and objective.
  6. Building new theory requires analytical induction, meaning that new theory emerges from collected data inductively through a series of steps.
  7. Grounded theory requires the development of five interrelated properties.
    • The theory must closely fit the relevant field of study in which the new theory will be used.
    • The new theory must be readily understandable to laymen concerned with the field of study.
    • The new theory must be relevant to a multitude of diverse daily situations within the focus area of the field of study.
    • New knowledge should be generalizable as widely as possible.
    • The new knowledge must allow those who use it to have enough trust in the validity and accuracy of the new knowledge, theories and models.
  8. Dedicated computer programmes enable you to discover regularities in data, to identify categories or elements and to establish their connections.