Qualification - BTEC Higher National in Business
Unit Name - Statistics for Management
Unit Number - Unit 31
Assignment Title - Data Analysis, Data Insight and Presentation
Learning Outcome 1: Evaluate business and economic data/information obtained from published sources.
Learning Outcome 2: Analyse and evaluate raw business data using a number of statistical methods.
Learning Outcome 3: Apply statistical methods in business planning.
Learning Outcome 4: Communicate findings using appropriate charts/tables.
You work as a junior analyst in a large pharmaceutical company and have been asked by the senior data analyst (your line manager) to support with three specific tasks.
For the first task, your organisation has been approached by the editor of a health care journal and asked whether your company can write an article on "The value of data and statistical management". The journal is aimed at health care professionals as well as the general public.
You are tasked with researching and preparing this short article, which will be published in the journal and on its website.
You are required to cover the following within the article:
1. What is data and why is it important? What is the relationship between data, information and knowledge? Provide illustrations and examples where appropriate.
The value of data and statistical management
Importance of Data and Relationship between Data, Knowledge and Information
Data is a collection of raw facts and figures that can be used to explain the cause of a problem more effectively. Data can be stored in a computer, retrieved at any time, and turned into knowledge. Data helps an organisation visualise the relationships between events across different departments and locations, and make timely decisions about the problems it encounters. Data collection plays a major role because it determines the accuracy of the data: data that is wrongly entered or wrongly stored in a computer will produce flawed results, and the entire set of findings will be meaningless. There are different types of data, and the type plays a major role in deciding which statistical techniques are appropriate; the methods used to analyse categorical data are entirely different from those used to analyse quantitative data. Data analysis is important in research because it helps the researcher validate the data in a simpler way and make more accurate predictions. When the data analysis is highly accurate, the researcher can interpret the results in a more generalisable way, which gives a higher level of confidence (Donald Cooper, 2006).
Information is created when gathered data are processed and organised in a structured format that provides context and meaning. Information is, in essence, processed data.
Knowledge is the relevant information that supports interpretation and conclusions. Where information is data organised with context, purpose and meaning, knowledge goes a step further: unlike raw data, it directly supports decision making and guides further action.
The three most common types of data are numerical, categorical and ordinal data. Numerical data comes from measurement, such as the height and weight of students, temperature, rainfall, household income, or employees' years of experience.
Variables are grouped into four levels of measurement: Nominal, Ordinal, Interval and Ratio. There are two categorical scales (nominal and ordinal) and two continuous scales (interval and ratio).
Nominal is the first level of measurement and does not require any ordering of the data. For example, participant race (1 = Black, 2 = White, 3 = Hispanic).
The ordinal level of measurement requires a specific ordering, arranged in either ascending or descending order of magnitude. Examples include household income group and employee designation.
The difference between the ratio and interval scales is that a ratio scale has an absolute zero and contains no negative numbers. For example, student height and student weight are variables measured on a ratio scale.
An interval scale has no absolute zero and can include both negative and positive numbers, such as temperature in degrees Celsius (Daly, L., 2000).
2. An evaluation of the raw data published in the Centre for Health Protection regarding sedentary behavior of the people in Hong Kong.
Sedentary Behavior of the People in Hong Kong
Sedentary behaviour is a lifestyle of physical inactivity, such as reclining, sitting in one position for a long time, or lying down. It has also been found that sedentary individuals face higher metabolic risk even if they satisfy the physical activity guideline criteria. Lack of physical activity can lead to obesity and to various other health disorders. A sedentary lifestyle brings various health hazards: it increases the risk of mortality, cardiovascular disease and diabetes, and people who live a sedentary lifestyle also have a higher risk of colon cancer.
On a typical day, the time spent sitting or reclining breaks down by gender as follows:

Time spent sitting or reclining | Female | Male
At most 120 minutes | 2.5% | 2.2%
120 to under 240 minutes | 15.0% | 16.2%
240 to under 360 minutes | 25.9% | 25.7%
360 to under 480 minutes | 17.9% | 16.6%
480 to under 600 minutes | 18.7% | 17.6%
At least 600 minutes | 19.9% | 21.6%

This shows that about 62.5% of females spend between 240 and 600 minutes sitting or reclining, while only 59.9% of males do (Centre for Health Protection, Department of Health - sedentary behaviour).
By age group, the distribution of time spent sitting or reclining on a typical day is:
» Aged 15 to 24: at most 120 minutes, 1.8%; 120 to under 240 minutes, 9.1%; 240 to under 360 minutes, 18.8%; 360 to under 480 minutes, 19.5%; 480 to under 600 minutes, 25.3%; at least 600 minutes, 25.7%.
» Aged 25 to 34: at most 120 minutes, 1.7%; 120 to under 240 minutes, 14.8%; 240 to under 360 minutes, 20.4%; 360 to under 480 minutes, 15.3%; 480 to under 600 minutes, 21.9%; at least 600 minutes, 26.0%.
» Aged 25 to 34 (second set as reported): at most 120 minutes, 2.7%; 120 to under 240 minutes, 17.8%; 240 to under 360 minutes, 26.8%; 360 to under 480 minutes, 13.8%; 480 to under 600 minutes, 19.0%; at least 600 minutes, 19.8%.
These findings suggest that about 62.5% of females spend between 240 and 600 minutes sitting or reclining, while only 59.9% of males do, indicating that females tend to spend more time in sedentary behaviour.
The age-related findings suggest that people aged 55 years and above spend the most time sitting or reclining. This makes it quite clear that elderly people tend to spend more time at rest than the middle-aged and young people of Hong Kong.
3. A critical evaluation of different methods of data analysis including descriptive, exploratory and confirmatory methods.
Part 3. Critical Evaluation of different Methods of Data Analysis
The commonly used data analysis techniques are Descriptive Analysis, Exploratory Data Analysis, and Confirmatory Data Analysis.
Descriptive data analysis is a technique that helps the researcher describe and summarise the data in a constructive way. It is very helpful in describing the patterns that emerge in the data (Fullerton, J. A., 2016).
Advantages:
» It helps company management make more informed decisions and guides the business in the right direction
» It reveals the patterns hidden in raw data, enabling managers to view business performance and take corrective action as and when required
Limitations:
» Descriptive studies are not statistically proven studies
» Results may carry a certain bias because no statistical tests are applied
» Since descriptive studies are observational in nature, they cannot be exactly repeated
Exploratory data analysis is the first stage of the data analysis process, in which the data are visualised and represented simply enough that even a layperson can follow the researcher's view and thoughts. It also guides the researcher in framing questions properly and in presenting and manipulating the data to surface the important insights within it.
Exploratory analysis takes an inductive approach: it looks for different possible ways to interrogate the data without any fixed logic. It focuses mainly on checking assumptions such as normality and equality of variance, and it relies heavily on visual representations.
Advantages:
» More flexible options for constructing hypotheses
» The statements generated from the results are more accurate and realistic
» It supports a deep understanding of the process
» It supports statistical learning
Limitations:
» It usually does not provide definitive answers
» It requires more judgement and originality
Confirmatory data analysis uses traditional statistical tools to evaluate the evidence in the data. It typically uses significance, inference and confidence intervals to draw general conclusions about the population from the information generated from the sample. Its tools include hypothesis testing, estimation at a specified level of precision, analysis of variance and regression modelling.
Confirmatory analysis takes a deductive approach: it relies heavily on probability models and accepts some untestable assumptions. It looks for definite answers to specific questions and depends on numerical calculation. Hypothesis testing procedures and formal confidence interval estimation are the methods most commonly used in this analysis.
Advantages:
» Provides accurate information in the specified circumstances
» The theory and methods used in this analysis are well established
Limitations:
» Apparent precision can lead to misleading results in some circumstances
» The analysis is mainly performed using pre-determined logic
» Unexpected findings are difficult to notice and interpret
The most appropriate technique for our study is descriptive data analysis, as it is very helpful in describing the patterns that emerge in the data.
Representation of Data
Visual representation of data is an easy and effective way of conveying results. It makes it easy even for a layperson to understand the information the researcher is conveying.
Task 2: Data Insight
Your second task is to produce an analysis of raw data and communicate findings appropriately. Specifically:
1. Discuss the differences between descriptive and inferential data.
Differences between Descriptive and Inferential Statistics:
Descriptive statistics is normally used to give a visual and numerical summary that describes a dataset's characteristics. It is used to report quantitative observations (known as summary statistics) as well as overall data insights. Descriptive statistics can describe the distribution of a whole population or of an individual sample; a major advantage is that it is merely explanatory, so it is not much concerned with the distinction between population and sample. Examples of descriptive statistics are measures of central tendency, variability and frequency distributions (Roscoe, J., 1975).
The mean is the most appropriate measure of central tendency when the distribution is normal. It is calculated by dividing the sum of the n values in the dataset by the number of values:

Mean: x̄ = (Σx) / n
Descriptive statistics is commonly used to assess the distribution of the variables in a dataset. When the distribution is normal, the mean equals the median. When the distribution is positively skewed, the mean is larger than the median; when it is negatively skewed, the mean is smaller than the median. Descriptive statistics describe or summarise the characteristics of the sample data, such as the mean, standard deviation and range; in simple terms, they explain the data in a simple way. For example, when the data represent eye colour it is most useful to report frequencies and percentages, whereas data on the height and weight of subjects are best represented by a mean and standard deviation.
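The effect of skew on the mean and median can be sketched in a few lines of Python. The income figures below are hypothetical, chosen only to illustrate a positively skewed sample:

```python
import statistics

# Hypothetical right-skewed sample: two large values pull the mean upward
incomes = [25, 27, 30, 31, 33, 35, 38, 120, 150]

mean = statistics.mean(incomes)      # sensitive to the extreme values
median = statistics.median(incomes)  # resistant to them

print(mean, median)
# For positively skewed data the mean exceeds the median
assert mean > median
```

Because the large values pull the mean well above the median, the median is usually the safer summary for skewed data.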
Inferential statistics is used to draw interpretations about the population from information generated from a sample, something that cannot be obtained from descriptive statistics alone. Its main use is to draw conclusions and make predictions based on the sample data (Fullerton, J. A., 2016).
Inferential statistics consists of hypothesis testing procedures and confidence interval estimation. It normally uses the sample data to derive results and, from those findings, estimates the population parameters. For example, a researcher computes a confidence interval for the difference in salary between male and female employees working in the same designation in an organisation. If the confidence interval contains the value 0, it is concluded that there is no difference in mean salary across gender; if 0 does not fall within the estimated interval, the mean salaries for males and females differ significantly (Morsanyi, K., 2016).
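The salary example can be sketched as follows. All figures (sample sizes, means, standard deviations) are hypothetical, and for simplicity a large-sample normal approximation (z = 1.96) is used in place of a t critical value:

```python
import math

# Hypothetical summary statistics (illustration only)
n1, mean1, sd1 = 120, 52000.0, 6000.0   # male salaries
n2, mean2, sd2 = 110, 49500.0, 5800.0   # female salaries

diff = mean1 - mean2
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # standard error of the difference
z = 1.96                                    # 95% large-sample critical value
lower, upper = diff - z * se, diff + z * se

# If 0 lies outside the interval, the mean salaries differ significantly
significant = not (lower <= 0 <= upper)
print(f"95% CI: ({lower:.0f}, {upper:.0f}), significant: {significant}")
```

With these made-up figures the interval excludes 0, so the difference would be judged significant; had the interval straddled 0, no difference would be concluded.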
Descriptive statistics:
» Summarises the characteristics of the dataset
» Provides graphical representations of the data
» Converts large volumes of data into tables
» Prepares summary measures that show how the dataset is framed and identify extreme points
» Examples: frequency tables, descriptive statistics tables, bar charts and pie charts
» Widely used to describe the distribution of the data
» Organises, analyses and presents the data in a productive way

Inferential statistics:
» Helps the researcher test whether the data under consideration generalise to the larger population
» Determines whether situations are unusual or occurred by chance
» Determines the reliability of numerical estimates
» Predicts future trends using data from past occurrences
» Example: hypothesis testing procedures
» Used to assess the chance of occurrence of an event
» Compares data between groups, uses hypothesis testing procedures and predicts future trends
2. Distinguish between and evaluate the different forms of probability distribution including normal, poisson and binomial.
Different form of Probability Distribution
Probability distributions are classified as discrete or continuous.
Common discrete distributions are the binomial, Poisson and negative binomial, while common continuous distributions are the exponential, log-normal and normal. Examples of continuous variables are height, weight and an individual's income (Morsanyi, K., 2016).
Discrete Probability Distribution
The binomial distribution is used to represent the number of successes in n trials, where the probability of success is constant over the entire experiment. For example, the chance of getting heads when tossing a coin three times.
The probability mass function of the binomial distribution is
P(X = x) = nCx · p^x · (1 − p)^(n − x),  x = 0, 1, …, n
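The coin-toss example can be checked against this formula with a short Python sketch using the standard library's math.comb:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for a Binomial(n, p) random variable."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Chance of exactly 2 heads in 3 tosses of a fair coin
print(binomial_pmf(2, 3, 0.5))  # 0.375
```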
The Poisson distribution is used to model the number of rare events that happen in a given interval of time. For example, the number of accidents on a national highway between 10 am and 12 pm on a given day.
The probability mass function of the Poisson distribution is
P(X = x) = (e^(−λ) · λ^x) / x!,  x = 0, 1, 2, …
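A minimal Python sketch of the Poisson formula, using a hypothetical rate of λ = 2 events per interval:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x) for a Poisson(lam) random variable."""
    return exp(-lam) * lam**x / factorial(x)

# Probability of observing no events when the average rate is 2 per interval
print(round(poisson_pmf(0, 2), 4))  # 0.1353
```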
The uniform distribution, also called the rectangular distribution, is another continuous distribution. Its probability density function over an interval [a, b] is f(x) = 1 / (b − a) for a ≤ x ≤ b.
A normal distribution is an example of a continuous probability distribution and resembles a bell shape. It is symmetrical, with the mean at the peak of the curve, so nearly 50% of the data fall to the left of the mean and 50% to the right. A major property of the normal distribution is that the mean, median and mode coincide and the curve is symmetric about the centre. The empirical rule of the normal distribution states that 68% of the data fall within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three standard deviations. The probability density function of the normal distribution is
f(x) = 1 / (σ√(2π)) · e^(−½ · ((x − μ)/σ)²)
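Both the density formula and the empirical rule can be verified with the standard library; math.erf gives the normal probability of falling within k standard deviations of the mean:

```python
from math import sqrt, pi, exp, erf

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def prob_within(k):
    """P(|X - mu| <= k*sigma) for a normal variable, via the error function."""
    return erf(k / sqrt(2))

# Empirical rule: ~68%, ~95%, ~99.7% within 1, 2 and 3 standard deviations
print([round(prob_within(k), 3) for k in (1, 2, 3)])  # [0.683, 0.954, 0.997]
```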
The median is the most appropriate measure of central tendency when the distribution is skewed to the right or left. For skewed distributions the standard deviation should not be used as the measure of dispersion; the interquartile range should be used instead. The interquartile range is the difference between the third quartile and the first quartile, and it represents the middle 50% of the data.
3. Your company wants to study the effect of a concentration skills course for students at the age of 10. Your colleagues tested the concentration scores of students three months before and again three months after the course.
Your researchers want to know if there is any difference in concentration skills after the course. Use the results below to calculate a range of descriptive and inferential statistics. Apply and justify the use of different methods, e.g. t-test, ANOVA testing, chi-square testing.
Relationship between Treatment and Concentration Scores
The main objective of this study is to determine whether a concentration skills course is effective for students aged 10. The students enrolled in the study were given a concentration test three months before the course started and again three months after it, and the scores of the two tests were recorded. The descriptive statistics are given below.
For each of the three variables (test scores before the course, test scores after the course, and the score differences Before − After), the mean and standard deviation are calculated as:

Mean: x̄ = (Σx) / n
Standard deviation: s = √( Σ(x − x̄)² / (n − 1) )
The table below shows the descriptive statistics for the three variables: scores before the course, scores after the course, and the difference d (Before − After).
From the table, the mean concentration test score before enrolling in the course is 116.316 with a standard deviation of 11.676, and the median score before the course is 120. The median indicates that nearly 50% of the scores before the course fall below 120 and nearly 50% fall above 120. The recorded minimum and maximum scores before the course are 94 and 113 respectively.
The mean concentration test score three months after the course is 104.263 with a standard deviation of 15.249, and the median score after the course is 107. The median indicates that nearly 50% of the scores after the course fall below 107 and nearly 50% fall above 107. The recorded minimum and maximum scores after the course are 72 and 125 respectively.
The mean of the score differences (Before − After) is 12.053 with a standard deviation of 12.389, and the median difference is 11. The median indicates that nearly 50% of the differences fall below 11 and nearly 50% fall above 11. The recorded minimum and maximum differences are −11 and 31 respectively. There are two negative differences, indicating that two students scored higher after the course than before, while the remaining students' scores fell. The students whose concentration scores declined may require special attention and further training to help them overcome their concentration difficulties.
Here, a paired t test is used to determine whether the concentration skills course is effective for students aged 10. The null and alternative hypotheses are given below.
Null Hypothesis: H0: µd = 0
That is, the mean difference of test scores before and after the concentration course does not differ from zero.
Alternative Hypothesis: Ha: µd ≠ 0
That is, the mean difference of test scores before and after the concentration course differs from zero.
Let the level of significance be α = 0.05
The paired t statistic is t = d̄ / (s_d / √n) = 12.053 / (12.389 / √19) ≈ 4.24.
The p-value of the t statistic, with 18 degrees of freedom, is 0.0005.
Statistical Decision: Reject the null hypothesis at 5% level of significance
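The paired t statistic can be reproduced directly from the summary statistics reported above (mean difference 12.053, standard deviation 12.389, n = 19 pairs):

```python
from math import sqrt

# Summary statistics of the Before - After differences reported above
n = 19          # number of paired observations (df = n - 1 = 18)
d_bar = 12.053  # mean of the differences
s_d = 12.389    # standard deviation of the differences

# Paired t statistic: t = d_bar / (s_d / sqrt(n))
t = d_bar / (s_d / sqrt(n))
print(round(t, 2))  # 4.24
```

With t ≈ 4.24 on 18 degrees of freedom, the two-sided p-value is well below 0.05, consistent with rejecting the null hypothesis.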
Task 3: PowerPoint Preparation
Your final task as Junior Analyst is to present how statistical methods can be applied in business planning, with particular consideration to issues of variability and probability. The intention is that this will form the basis of a presentation at the annual staff conference as part of ongoing Continual Professional Development (CPD) activity. This is to be done in the form of PowerPoint slides which will:
1. Illustrate the importance of variability and probability in applying and evaluating statistical methods in business planning.
Variability (also known as spread or dispersion) describes how far each individual value in a dataset lies from the mean.
Variability helps the researcher describe the deviation within a dataset and compare it with other sets of data (Donald Cooper, 2006).
While central tendency identifies where most of the data values fall, variability summarises the deviation of each data value from the mean.
Range - Measures of Variability
Range is the simplest measure of variation: it is the difference between the maximum and minimum values in the dataset (Donald Cooper, 2006).
For example, let us consider that the range of Drug 1 is 40 (90 - 50) and the range of Drug 2 is 20 (75 - 55)
On comparing the range values of the two drugs, it is seen that variability is higher in Drug 1 than in Drug 2.
Range = Maximum - Minimum
Interquartile Range - Measures of Variability
When the data is skewed, then, interquartile range is the appropriate measure for dispersion (Daly, L., 2000)
Interquartile Range = IQR = Q3 - Q1
Here, Q3 → Third Quartile and Q1 → First Quartile
For example, let us consider the selling price of 10 houses (prices in 1000 dollars)
Data arranged in ascending order: 55, 56, 65, 78, 79, 79, 89, 110, 118, and 128
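The quartiles for this example can be computed with a short sketch. Note that quartile conventions differ between textbooks and software; the median-of-halves method used here is one common choice, and interpolation-based methods (e.g. NumPy's default) give slightly different values:

```python
def median(values):
    """Median of an already-sorted list."""
    n = len(values)
    mid = n // 2
    return values[mid] if n % 2 else (values[mid - 1] + values[mid]) / 2

# House selling prices (in $1000s), already in ascending order
prices = [55, 56, 65, 78, 79, 79, 89, 110, 118, 128]

# Median-of-halves method: Q1 is the median of the lower half, Q3 of the upper half
half = len(prices) // 2
q1 = median(prices[:half])
q3 = median(prices[-half:])
iqr = q3 - q1
print(q1, q3, iqr)  # 65 110 45
```

So the middle 50% of these house prices spans $65,000 to $110,000, an interquartile range of $45,000.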
Probability is defined as the chance of occurrence of a random event
It always lies between 0 and 1, inclusive
The sum of the probability of all events equal to 1
The probability of impossible event is 0 (Daly, L., 2000)
The three types of probability are
Classical (theoretical) probability,
Relative frequency (empirical) probability, and
Subjective probability
Probability - Mean and Variance
The expected value (mean) of a discrete random variable is calculated using the formula below
E(X) = Σ (x · P(x))
In probability theory, the variance is calculated using the formula below
Var(X) = E(X²) − [E(X)]²
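Both formulas can be checked exactly for a fair six-sided die using Python's fractions module:

```python
from fractions import Fraction

# Fair six-sided die: outcomes 1..6, each with probability 1/6
p = Fraction(1, 6)
outcomes = range(1, 7)

e_x = sum(x * p for x in outcomes)        # E(X) = sum of x * P(x)
e_x2 = sum(x**2 * p for x in outcomes)    # E(X^2)
variance = e_x2 - e_x**2                  # Var(X) = E(X^2) - [E(X)]^2

print(e_x, variance)  # 7/2 35/12
```

That is, E(X) = 7/2 = 3.5 and Var(X) = 35/12 ≈ 2.92 for a fair die.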
2. Explore the use of statistical process control and appropriate techniques in application to operations, e.g. inventory, flowtime, quality, capacity.
Statistical Process Control
Statistical process control (SPC) is a statistical technique used to monitor and control a process or production method. It helps the researcher monitor process behaviour by identifying issues that occur in internal systems, and it provides solutions for production errors (Fullerton, J. A., 2016).
Statistical Techniques to Operational Management
The quality characteristics observed in industry may be generally classified into any of the following categories
Directly Measurable (Variables)
(Tyre thickness, tube light lifetime, temperature, tensile strength, etc., provide data on a continuous scale of measurement)
Non - Measurable (Attributes)
(Cracks, Breakages, Assembly defects, etc., provides values related to discrete data)
Statistical quality control centres on the word Quality: when a product satisfies the implied needs and requirements of the customer, its quality is said to have reached a satisfactory level.
A person who uses the product is called the customer or consumer.
Quality is a relative term and is used generally with reference to the end use of the product.
Since a product is mainly designed for the customer use, it should certainly satisfy the customer requirements and that will dictate the quality of the product.
Causes of Variation
The variations affecting a production process are broadly classified as being due to two causes
Chance causes (common causes/random causes)
Chance causes are random causes that behave in a random manner; they cannot be prevented or eliminated.
Assignable causes (non-random or special causes)
Assignable causes are generally few in number and can be easily detected.
Inventory in Operations Management
Below listed options are ideal for an organization to maintain their inventory levels
To maintain independence of operations
To meet variation in product demand
To allow flexibility in product scheduling
To furnish a safeguard against variation in raw material delivery
To take advantage of economic purchase order sizes
Many other domain-specific reasons
The size of the inventory depends on
Holding costs or carrying costs
Setup cost or manufacturing cost
Cost of ordering
The most popular inventory models are
A single period inventory model (Overbooking of airline flights, any type of one time order)
Multi period inventory systems
Statistical process control is very widely used to study the changes that occur in a process or production method over a period of time
Every process exhibits two common types of variation:
Common cause (inherent process) variation
Special cause variation, which happens for an assignable reason
The most commonly used statistical process control charts for quantitative data are
Individual and Moving Range Chart, and
X-bar and Range Chart
The most commonly used statistical process control charts for qualitative (attribute) data are the
P chart, NP chart, C chart, and U chart
The main advantages of using control charts are:
They are very helpful in reporting long-term process capability
They give high insight into process movement
They focus on quality and consistency
X-bar and Range Chart
Consider a bowling scenario in which a player wants to improve his game.
One approach is to plot the score of each game, which helps him identify his weak and strong points.
If he is interested in his average score, he can plot the average of the three games he bowls each night over the next seven days.
To gauge how consistent his bowling is, he can plot the range of scores across the three games each night.
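The bowling scenario can be sketched as an X-bar and Range chart calculation. The scores below are hypothetical; the constants A2 = 1.023, D3 = 0 and D4 = 2.574 are the standard control-chart factors for subgroups of size 3:

```python
# Hypothetical bowling scores: 7 nights, 3 games per night (subgroup size 3)
nights = [
    [150, 165, 158], [142, 170, 160], [155, 148, 162],
    [168, 152, 159], [147, 161, 154], [158, 149, 166], [153, 157, 151],
]

xbars = [sum(g) / len(g) for g in nights]   # nightly averages
ranges = [max(g) - min(g) for g in nights]  # nightly ranges

xbar_bar = sum(xbars) / len(xbars)          # grand average (centre line)
r_bar = sum(ranges) / len(ranges)           # average range

# Standard control-chart constants for subgroups of size 3
A2, D3, D4 = 1.023, 0.0, 2.574

xbar_ucl = xbar_bar + A2 * r_bar            # X-bar chart limits
xbar_lcl = xbar_bar - A2 * r_bar
r_ucl = D4 * r_bar                          # Range chart limits
r_lcl = D3 * r_bar

print(f"X-bar chart: CL={xbar_bar:.1f}, LCL={xbar_lcl:.1f}, UCL={xbar_ucl:.1f}")
print(f"R chart:     CL={r_bar:.1f}, LCL={r_lcl:.1f}, UCL={r_ucl:.1f}")
```

The X-bar chart tracks the average score per night; the R chart tracks consistency within each night's three games.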
Statistical Techniques to Operational Management
The status of a process can be seen visually using statistical process control charts, also called time-ordered graphs.
Plotting the data points on the graph makes the variation in the data clearly visible, and the expected range can be easily identified.
A special occurrence is seen when a plotted point falls outside the control limits; it needs immediate investigation and correction.
When all the plotted points fall within the lower and upper control limits, the process is in control (Fullerton, J. A., 2016).
When one or more data points fall below the lower control limit or above the upper control limit, the process is out of control.
When seven or more consecutive data points move in an increasing or decreasing trend, the process is also considered out of control.
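The two out-of-control rules above (a point beyond the limits, and seven consecutive points trending in one direction) can be sketched as a small checking routine; the data points and limits below are hypothetical:

```python
def out_of_control(points, lcl, ucl, run_length=7):
    """Flag indices where the process looks out of control: a point
    outside the control limits, or the end of a run of `run_length`
    consecutively increasing (or decreasing) points."""
    signals = []
    for i, p in enumerate(points):
        if p < lcl or p > ucl:
            signals.append((i, "outside limits"))
    for i in range(run_length - 1, len(points)):
        window = points[i - run_length + 1 : i + 1]
        if all(a < b for a, b in zip(window, window[1:])):
            signals.append((i, "increasing trend"))
        elif all(a > b for a, b in zip(window, window[1:])):
            signals.append((i, "decreasing trend"))
    return signals

# One point above the upper limit, then a 7-point increasing run
data = [5.0, 5.1, 4.9, 6.5, 4.0, 4.2, 4.4, 4.6, 4.8, 5.0, 5.2]
signals = out_of_control(data, lcl=3.5, ucl=5.5)
print(signals)  # [(3, 'outside limits'), (10, 'increasing trend')]
```

Each signal marks a point needing investigation: the spike at index 3 breaches the limits, and the steady climb ending at index 10 triggers the trend rule.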
3. Make recommendations and judgments on how statistical process control can improve business performance and business planning.
Improving Business Performance
Statistical process control does not eliminate variation, but it helps management track special cause variation. Using control charts and run charts, the manufacturer can track variation in the process, find where errors happen, and gain adequate time to correct and monitor the process.
It thus helps minimise rework and streamlines processes toward error-free production. In other words, statistical process control improves the quality of the product or process and warns the manufacturer to correct the process at an early stage rather than reworking it entirely.
The major benefits of statistical process control are
It improves business performance by reducing scrap, rework and warranty claims, thus maximising productivity
It helps management improve resource utilisation and increase operational efficiency, which in turn reduces the frequency of manual inspection
The cost of wasted raw materials is greatly reduced, resulting in higher profit and higher customer satisfaction
The mean concentration test score before enrolling in the course is 116.316 with a standard deviation of 11.676, while the mean score three months after the course is 104.263 with a standard deviation of 15.249. Comparing the means, the scores after the course are lower than before, and the paired t test shows this difference is statistically significant. On this evidence, the concentration scores of students aged 10 declined rather than improved, so the data do not support the course being effective.