Search results for: statistical package IBM SPSS 20
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1545

135 Rail Corridors between Minimal Use of Train and Unsystematic Tightening of Population: A Methodological Essay

Authors: A. Benaiche

Abstract:

In the current situation, the automobile has become the main means of locomotion. It allows traveling long distances, encouraging urban sprawl. To counteract this trend, the train is often proposed as an alternative to the car. At the same time, favoring urban development around public transport nodes such as railway stations is one of the main issues in coordinating urban planning with transportation and the keystone of implementing sustainable urban development. In this context, this paper focuses on the dynamics of spatial structuring around the railway. Specifically, it studies the demographic dynamics in the rail corridors of Nantes, Angers and Le Mans (Western France) based on the catchment areas of railway stations. The methodology therefore concentrates on the demographic weight and gains of these corridors, the index of urban intensity, and mobility behaviors (commuting trips, school trips, and modal practices). The perimeter used to define the rail corridors includes the communes of the urban area that have a railway station and the communes whose access time to a railway station is less than fifteen minutes by car (the time specified by the Regional Transport Scheme of Travelers). The main tools used are statistical data from the population census, detailed tables, and databases on mobility flows. The study reveals that the population is not concentrated along the rail corridors and that train use is minimal despite the presence of a nearby railway station. These results lead us to propose guidelines to make the train a real vector of mobility across the rail corridors.

Keywords: Coordination between urban planning and transportation, Rail corridors, Railway stations, Travels.

PDF Downloads: 1125
134 The Study of Internship Performances: Comparison of Information Technology Interns towards Students’ Types and Background Profiles

Authors: Shutchapol Chopvitayakun

Abstract:

The internship program is a compulsory course in many undergraduate programs in Thailand. It gives senior students, as interns, the opportunity to practice their working skills in real organizations and to face real-world working problems, which they learn to solve through direct and indirect experience. In many schools this program is a well-structured course with a contract or agreement made with real business organizations. Moreover, the program also offers interns the opportunity to get a job at the host organization after completing it. Interns also learn how to work as a team and how to interact with colleagues, trainers, and superiors of each organization in terms of social hierarchy, self-responsibility, and self-discipline. This research focuses on senior students of Suan Sunandha Rajabhat University, Thailand, majoring in the information technology program, who practiced their working skills in internship programs in real business organizations in 2015-2016. Interns are categorized into two types: normal program and special program. In the special program, students study on weekday evenings (Monday to Friday) or at weekends, and most of them hold full-time or part-time jobs. In the normal program, students study during weekday working hours, and most of them do not work. The differences between these profiles and the resulting internship performance were studied and analyzed in this research, which applied statistical analysis to determine whether internship performance differs significantly between the two intern types.
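
The kind of two-group comparison described above can be sketched with an independent-samples t-test; the performance scores below are illustrative placeholders, not data from the study, and the abstract does not state which specific test was used.

```python
# Hypothetical sketch: compare internship performance scores of two intern types.
# The score values below are illustrative placeholders, not data from the study.
from scipy import stats

normal_program = [78, 82, 75, 90, 85, 79, 88, 84]    # placeholder scores
special_program = [81, 77, 92, 86, 80, 83, 89, 76]   # placeholder scores

# Welch's t-test (does not assume equal variances between the two groups)
t_stat, p_value = stats.ttest_ind(normal_program, special_program, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# If p < 0.05, the two intern types differ significantly in internship performance.
```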

Keywords: Internship, intern, senior student, information technology program.

PDF Downloads: 1243
133 Evaluation of Stormwater Quantity and Quality Control through Constructed Mini Wet Pond: A Case Study

Authors: Y. S. Liew, K. A. Puteh Ariffin, M. A. Mohd Nor

Abstract:

One of the Best Management Practices (BMPs) promoted in the Urban Stormwater Management Manual for Malaysia (MSMA), published by the Department of Irrigation and Drainage (DID) in 2001, is the construction of wet ponds in new development projects for water quantity and quality control. This paper therefore presents a case study evaluating a constructed mini wet pond located at Sekolah Rendah Kebangsaan Seksyen 2, Puchong, Selangor, Malaysia in terms of both stormwater quantity and quality, particularly its ability to reduce the peak discharge by temporarily storing stormwater runoff and releasing it gradually through an outlet structure or other release mechanism. For the water quantity aspect, InfoWorks Collection System (CS) is used as the numerical modeling approach. Statistical tests comparing the correlation coefficient (R2), mean error (ME), mean absolute error (MAE) and root mean square error (RMSE) were used to evaluate the model in simulating the changes in peak discharge. Results demonstrate a reduction in peak flow of 11% to 15% and a delay in time to peak of 5 minutes through the wet pond. For the water quality aspect, a survey of biological water quality indicators shows that the pond water ranges from rather clean to clean, with a score of 5.3. This study indicates that a constructed wet pond with wetland facilities can help manage water quantity and stormwater-generated pollution at source, towards achieving ecologically sustainable development in urban areas.
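
The model evaluation statistics named above can be reproduced as follows; this is only an illustrative sketch with placeholder observed and simulated peak discharges, not the study's data.

```python
import numpy as np

# Placeholder observed and simulated peak discharges (m^3/s); illustrative only.
observed  = np.array([2.1, 3.4, 1.8, 4.0, 2.9])
simulated = np.array([2.0, 3.6, 1.7, 3.8, 3.1])

residuals = simulated - observed
me   = residuals.mean()                              # mean error (bias)
mae  = np.abs(residuals).mean()                      # mean absolute error
rmse = np.sqrt((residuals ** 2).mean())              # root mean square error
r2   = np.corrcoef(observed, simulated)[0, 1] ** 2   # squared correlation coefficient (R2)

print(f"ME={me:.3f}  MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}")
```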

Keywords: Wet pond, Retention Facilities, Best Management Practices (BMP), Urban Stormwater Management Manual for Malaysia (MSMA).

PDF Downloads: 2517
132 An Obesity Index Derived from Waist and Hip Circumferences Well-Matched with Other Indices in Children with Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Indices derived from anthropometric measurements [e.g., the waist-to-hip ratio (WHR)] or from body fat compartments [e.g., the trunk-to-leg fat ratio (TLFR)] are used for the evaluation of obesity, and which of them best suits clinical practice is still being investigated. The aim of this study is to derive an index that best discriminates children with normal body mass index (N-BMI) from obese (OB) children. 83 children participated in the study. Groups 1 and 2 comprised 42 children with N-BMI and 41 OB children, whose age- and sex-adjusted BMI percentiles fell between the 15th-85th and 95th-99th percentiles, respectively. The institutional ethics committee approved the study protocol, and informed consent forms were completed by the parents of the participants. Anthropometric measurements (weight, height (Ht), waist circumference (WC), hip circumference (HC), and neck circumference (NC)) were taken, and BMI, WHR, (WC+HC)/2, WC/Ht, (WC/HC)/Ht and WC*NC were calculated. Bioelectrical impedance analysis was performed to obtain the body's fat compartments in terms of total, trunk, leg and arm fat masses, from which TLFR, the trunk-to-appendicular fat ratio (TAFR), (trunk fat+leg fat)/2 ((TF+LF)/2), the fat mass index (FMI) and the diagnostic obesity notation model assessment-II (D2I) index were calculated. Statistical analysis was performed. Significantly higher values of (WC+HC)/2, (TF+LF)/2, D2I and FMI were observed in the OB group than in the N-BMI group. Significant correlations were found between BMI and WC, (WC+HC)/2, (TF+LF)/2, TLFR, TAFR, D2I and FMI in both groups; similar correlations were obtained for WC. (WC+HC)/2 was correlated with TLFR, TAFR, (TF+LF)/2, D2I and FMI in the N-BMI group; in the OB group, the correlations were the same except those with TLFR and TAFR. These correlations were not present with WHR. Correlations were observed between TLFR (as well as TAFR) and BMI, WC, (WC+HC)/2, (TF+LF)/2, D2I and FMI in the N-BMI group, whereas in the OB group the correlations between TLFR or TAFR and BMI, WC and (WC+HC)/2 were missing; none was noted with WHR. In conclusion, the only correlation valid in both groups was the one between (TF+LF)/2 and (WC+HC)/2, which is suggested as a link between fat-based and anthropometric indices. (WC+HC)/2, but not WHR, was much more suitable as an anthropometric obesity index.
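
A minimal sketch of how the anthropometric indices listed above could be computed and correlated, assuming hypothetical measurements (all values below are placeholders, not study data):

```python
import numpy as np

# Placeholder anthropometric measurements for a few children (illustrative only).
wc = np.array([62.0, 75.5, 81.0, 58.5])   # waist circumference, cm
hc = np.array([70.0, 88.0, 94.5, 66.0])   # hip circumference, cm
ht = np.array([1.32, 1.45, 1.50, 1.28])   # height, m
nc = np.array([28.0, 33.5, 35.0, 27.0])   # neck circumference, cm

whr        = wc / hc                 # waist-to-hip ratio
wc_hc_mean = (wc + hc) / 2           # (WC+HC)/2, the index proposed in the abstract
wc_ht      = wc / (ht * 100)         # WC/Ht (height converted to cm)
whr_ht     = (wc / hc) / (ht * 100)  # (WC/HC)/Ht
wc_nc      = wc * nc                 # WC*NC

# Pearson correlation between (WC+HC)/2 and a fat-based index such as (TF+LF)/2
tf_lf_mean = np.array([5.1, 9.8, 11.2, 4.3])   # placeholder trunk/leg fat average, kg
r = np.corrcoef(wc_hc_mean, tf_lf_mean)[0, 1]
print(f"Pearson r between (WC+HC)/2 and (TF+LF)/2: {r:.2f}")
```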

Keywords: Children, hip circumference, obesity, waist circumference.

PDF Downloads: 409
131 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to predict what might happen under different conditions or decisions. In the present study, we present a model of a stochastic diffusion process based on the two-parameter (bi-Weibull) distribution function, in which the trend of the process is proportional to the bi-Weibull probability density function. In general, the Weibull distribution can assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, for whom it is the most commonly used distribution for problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of the model, such as the explicit expression of the process, its trend functions, and its distribution, by transforming the diffusion process into a Wiener process as shown in Ricciardi's theorem. Then, we develop the statistical inference for this model using the maximum likelihood methodology. Finally, we use simulated data to analyse the computational problems associated with estimating the parameters, an issue of great importance for applications to real data, with the help of convergence analysis methods. Overall, the use of a stochastic model reflects a pragmatic decision on the part of the modeler: given the data available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
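
As an illustration of the maximum likelihood step, the sketch below fits a two-parameter Weibull to simulated samples using SciPy's generic fitter; this is not the paper's diffusion-process likelihood, only a hedged stand-in for the estimation idea.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate data from a two-parameter Weibull with shape c = 1.5 and scale 2.0.
true_shape, true_scale = 1.5, 2.0
sample = stats.weibull_min.rvs(true_shape, scale=true_scale, size=1000, random_state=rng)

# Maximum likelihood fit; the location parameter is fixed at 0 for the two-parameter form.
shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(sample, floc=0)
print(f"estimated shape = {shape_hat:.3f}, estimated scale = {scale_hat:.3f}")
```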

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.

PDF Downloads: 1962
130 Impact of Standardized Therapeutic Hypothermia Protocol on Neurological Performance after Resuscitation from Cardiac Arrest

Authors: Tahsien Mohamed Okasha, Warda Youssef Mohamed Morsy, Hanan Elsayed Zaghla

Abstract:

We hypothesized that post-cardiac-arrest patients with a Glasgow Coma Scale (GCS) score of less than 8 who are exposed to a therapeutic hypothermia protocol will exhibit improvement in their neurological performance. Seventeen subjects were enrolled in this study over one year. The study used a quasi-experimental research design. Four tools were used for data collection: a demographic and medical data sheet, a post-cardiac-arrest health assessment sheet, the Bedside Shivering Assessment Scale (BSAS), and the Glasgow-Pittsburgh Cerebral Performance Category scale (CPC). The mean age was 53 ± 8.122 years (X̅ ± SD), and 47.1% of arrests had a cardiac etiology. The initial arrest rhythm was ventricular tachycardia (VT) in 35.3% of subjects, ventricular fibrillation (VF) in 23.5%, and asystole in 29.4%. A favorable neurological outcome was seen in 70.6%. There were statistically significant differences in WBC, platelets, blood gas values, and random blood sugar. Also, the initial arrest rhythm, the etiology of the cardiac arrest, and shivering status were significantly correlated with the Cerebral Performance Category score. Therapeutic hypothermia has positive effects on neurological performance among post-cardiac-arrest patients with a GCS score of less than 8. Replication of the study on a larger probability sample with a randomized controlled trial design is recommended, along with further work to propose a nursing protocol for patients undergoing therapeutic hypothermia.

Keywords: Therapeutic hypothermia, neurological performance, after resuscitation from cardiac arrest, initial arrest rhythm.

PDF Downloads: 273
129 Organizational Commitment of Anadolu University Open Education Faculty Students

Authors: Emine Demiray, Şensu Curabay

Abstract:

Distance education is a dimension of contemporary and new education technologies. Concepts and applications in this field are the results of a series of educational demands and of developments in various communication and education technologies. Distance education applications rest on several conceptual bases: creating new education opportunities, uniting work and education, democratizing education, lifelong education, responsiveness to individual needs, effective use of institutions, integration of technology and education, responsiveness to individual and social needs, three-dimensional integration as the main principle (publishing, printed materials and face-to-face education), reaching the maximum audience, integrating individual and mass education, and balancing education demand with financial means. The Economics, Business Administration and Open Education faculties, which have been providing education within Anadolu University in Turkey since 1982, currently teach nearly 1,000,000 students. The aim of this study is to determine the organizational commitment levels of students studying at these faculties in terms of the affective, continuance and normative commitment of the Allen & Meyer model. The study first introduces the Anadolu University distance education system, which provides higher education via distance education in Turkey, and then examines the organizational commitment of Economics, Business Administration and Open Education faculty students to their faculties. To increase the success of the faculties, students need a high level of organizational commitment to them. A questionnaire based on the Organizational Commitment Scale developed by Meyer & Allen was administered to determine the organizational commitment of Economics, Business Administration and Open Education students, with organizational commitment treated as affective, continuance and normative commitment. The questionnaire was applied face to face to 500 randomly chosen students living in Eskişehir, the data were entered into SPSS, and the results were analyzed in terms of students' demographic features (gender, age, marital status, years of study, work and income level) using frequency analysis, t-tests and ANOVA. These analyses revealed that, among the levels of affective, continuance and normative commitment of Open Education Faculty students to their faculties, continuance commitment is the highest. Among female participants, continuance commitment is highest in the 30-40 age range and normative commitment in the 17-22 range, while no dominant age range was found for affective commitment. Regarding marital status, the continuance commitment average is higher among married participants, whereas the normative and affective commitment averages are higher among single participants. As to years of study, affective and continuance commitment are higher among senior students, while normative commitment is higher among junior students. Moreover, those who do not work and have low income show higher levels of all three commitment types than those who work and have relatively high income.
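
The group comparisons described above were run in SPSS; a minimal sketch of the same kind of one-way ANOVA in Python is shown below with placeholder commitment scores (the values and group sizes are illustrative, not survey data).

```python
from scipy import stats

# Placeholder continuance-commitment scores grouped by age range (illustrative only).
age_17_22 = [3.1, 2.8, 3.4, 3.0, 2.9]
age_23_29 = [3.3, 3.6, 3.2, 3.5, 3.4]
age_30_40 = [3.9, 4.1, 3.8, 4.0, 4.2]

f_stat, p_value = stats.f_oneway(age_17_22, age_23_29, age_30_40)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests the age groups differ
```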

Keywords: Open education, Organizational commitment, Distance education.

PDF Downloads: 1997
128 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data that are incomplete because of stock-outs by introducing maximum likelihood estimation for censored data. A way to determine optimal stock levels that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is clearly visible. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, a profit loss of around 1% realizes half the disposal when this constant equals 0.12, the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially for large sales numbers.
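
Taylor's law states that the fluctuation (variance) of sales scales as a power of the mean. A minimal sketch of fitting this scaling from POS-like data follows; the sales matrix is synthetic, not the study's data, so the fitted exponent is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily sales counts for 50 items over 200 days (Poisson, so variance ~ mean).
means = rng.uniform(1, 200, size=50)
sales = rng.poisson(lam=means, size=(200, 50))

item_mean = sales.mean(axis=0)
item_var  = sales.var(axis=0)

# Fit log(variance) = log(a) + b * log(mean); b is the Taylor's-law exponent.
b, log_a = np.polyfit(np.log(item_mean), np.log(item_var), 1)
print(f"Taylor exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.2f}")
```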

Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.

PDF Downloads: 862
127 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan

Abstract:

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure, and an automatic threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as apparent mineralization and the total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is used to obtain features that quantify texture changes in the medical images, and the normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters (mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation) are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. In almost all cases the abnormal samples show a higher degree of correlation than the normal ones, and the parameters derived from the approximation coefficients show higher correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors for delineating the normal and abnormal groups.
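
A minimal sketch of the level-4 Haar decomposition and some of the statistics computed from the approximation and horizontal coefficients, assuming the PyWavelets package and a placeholder image array (the region size is illustrative, not the study's delineated PC region):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
roi = rng.random((256, 256))   # placeholder for the delineated primary compressive region

# 4-level 2D discrete wavelet transform with the Haar wavelet.
coeffs = pywt.wavedec2(roi, wavelet='haar', level=4)
approx = coeffs[0]              # approximation coefficients at level 4
horiz  = coeffs[1][0]           # horizontal detail coefficients at level 4

def describe(c):
    c = c.ravel()
    return {
        "mean": np.mean(c),
        "median": np.median(c),
        "std": np.std(c),
        "mad_mean": np.mean(np.abs(c - np.mean(c))),        # mean absolute deviation
        "mad_median": np.median(np.abs(c - np.median(c))),  # median absolute deviation
    }

print("approximation:", describe(approx))
print("horizontal   :", describe(horiz))
```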

Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.

PDF Downloads: 1486
126 Body Composition Response to Lower Body Positive Pressure Training in Obese Children

Authors: Basant H. El-Refay, Nabeel T. Faiad

Abstract:

Background: The high prevalence of obesity in Egypt has a great impact on the health care system and on the economic and social situation. Evidence suggests that even a moderate amount of weight loss can be useful. Aim of the study: To analyze the effects of lower body positive pressure supported treadmill training, combined with a hypocaloric diet, on the body composition of obese children. Methods: Thirty children aged between 8 and 14 years were randomly assigned to two groups: an intervention group (15 children) and a control group (15 children). All of them were evaluated using body composition analysis through bioelectrical impedance. The following parameters were measured before and after the intervention: body mass, body fat mass, muscle mass, body mass index (BMI), percentage of body fat and basal metabolic rate (BMR). The study group exercised on an antigravity treadmill three times a week for 2 months and participated in a hypocaloric diet program; the control group participated in the hypocaloric diet program only. Results: Both groups showed significant reductions in body mass, body fat mass and BMI. Only the study group showed a significant reduction in percentage of body fat (p = 0.043). Changes in muscle mass and BMR did not reach statistical significance in either group. No significant differences were observed between groups except for muscle mass (p = 0.049) and BMR (p = 0.042), favoring the study group. Conclusion: Both programs proved effective in reducing obesity indicators, but lower body positive pressure supported treadmill training was more effective in improving muscle mass and BMR.

Keywords: Children, Hypocaloric diet, Lower body positive pressure supported treadmill, obesity.

PDF Downloads: 4312
125 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Authors: C. Xavier Mendieta, J. J. McArthur

Abstract:

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. Data from Ontario's Ministry of Energy for post-secondary educational institutions are used to develop a series of building archetypes, dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification in developing a valid dataset; (2) that the key features used to model the data are building age, size, and occupancy schedules, which can be used to estimate energy consumption; and (3) that policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
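
A minimal sketch of the outlier-screening step, assuming a hypothetical table of reported energy use intensities (the column names and values are placeholders, not the Ministry's reporting schema):

```python
import pandas as pd

# Hypothetical reported energy use intensities (ekWh/m2); not the actual Ontario data.
df = pd.DataFrame({
    "building_id": ["R1", "R2", "R3", "R4", "R5", "R6"],
    "eui_ekwh_m2": [210.0, 185.0, 930.0, 240.0, 12.0, 205.0],
})

# Interquartile-range screen: flag records outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = df["eui_ekwh_m2"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["eui_ekwh_m2"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

clean, outliers = df[mask], df[~mask]
print("flagged as outliers:\n", outliers)
```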

Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.

PDF Downloads: 1015
124 Computational Methods in Official Statistics with an Example on Calculating and Predicting Diabetes Mellitus [DM] Prevalence in Different Age Groups within Australia in Future Years, in Light of the Aging Population

Authors: D. Hilton

Abstract:

An analysis of the Australian Diabetes Screening Study estimated the prevalence of undiagnosed diabetes mellitus [DM] in a high-risk, general-practice-based cohort. DM prevalence varied from 9.4% to 18.1% depending upon the diagnostic criteria used, with age being a highly significant risk factor. Using the gold-standard oral glucose tolerance test, the prevalence of DM was 22-23% in those aged >= 70 years and <15% in those aged 40-59 years. Opportunistic screening in Australian general practice can therefore potentially identify many persons with undiagnosed type 2 DM. An Australian Bureau of Statistics document published three years ago reported the highest rate of DM in men aged 65-74 years [19%], whereas the rate for women was highest in those over 75 years [13%]. The Australian Bureau of Statistics also reported in 2007 that 13% of the population was over 65 years of age, a share projected to increase to 23-25% by 2056 and to 25-28% by 2101; this information has to be factored in when age-related diabetes prevalence predictions are calculated. This 10-15 percentage point increase in the share of elderly persons in the population has dramatic implications for the estimated number of elderly persons with DM in these age groups. Computational methodology applied to the age-related demographic changes reported in these official statistical documents is used to produce estimates for 2056 and 2101 for different age groups. This has relevance for future diabetes prevalence rates and shows that, along with many countries worldwide, Australia is facing an increasing pandemic. In contrast, Japan is expected to see a decrease in the number of persons with diabetes over the next twenty years.
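
The kind of calculation involved can be illustrated with a toy projection. The total population figure and the prevalence assumption below are placeholders loosely echoing the rates quoted above; they are not official estimates, and the population is held fixed purely for illustration.

```python
# Toy projection of the number of persons aged 65+ with diabetes.
# All inputs are illustrative assumptions, not Australian Bureau of Statistics figures.
total_population = 30_000_000        # assumed total population, held fixed for illustration
share_over_65 = {"2007": 0.13, "2056": 0.24, "2101": 0.27}   # midpoints of the quoted ranges
dm_prevalence_over_65 = 0.19         # illustrative prevalence assumed for the 65+ group

for year, share in share_over_65.items():
    persons_over_65 = total_population * share
    persons_with_dm = persons_over_65 * dm_prevalence_over_65
    print(f"{year}: ~{persons_with_dm:,.0f} persons aged 65+ with DM "
          f"(of {persons_over_65:,.0f} aged 65+)")
```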

Keywords: Epidemiological methods, aging, prevalence.

PDF Downloads: 1946
123 Detection of Lard in Binary Animal Fats and Vegetable Oils Mixtures and in Some Commercial Processed Foods

Authors: H. A. Al-Kahtani, A. A. Abou Arab, M. Asif

Abstract:

Animal fats (camel, sheep, goat, rabbit and chicken) and vegetable oils (corn, sunflower, palm and olive oil) were substituted with different proportions (1, 5, 10 and 20%) of lard. The fatty acid compositions of the triglycerides (TG) and 2-monoglycerides (2-MG) were determined using lipase hydrolysis and gas chromatography before and after adulteration. Results indicated that genuine lard had a high proportion (60.97%) of its total palmitic acid at the 2-MG position, whereas this proportion was 8.70%, 16.40%, 11.38%, 10.57%, 29.97% and 8.97% for camel, beef, sheep, goat, rabbit and chicken fat, respectively. It could also be noticed that the 2-MG position is mostly occupied by unsaturated fatty acids in all tested fats except lard. For the vegetable oils (corn, sunflower, palm and olive oil), the levels of palmitic acid esterified at the 2-MG position were 6.84%, 1.43%, 9.86% and 1.70%, respectively, and the studied oils had a higher level of unsaturated fatty acids at this position compared with the animal fats under investigation. Moreover, palmitic acid esterified at 2-MG and the PAEF increased gradually as the substitution level increased in all tested fat and oil samples. Statistical analysis showed that the PAEF correlated well with the lard level. The detection of lard in some commercial processed foods (5 French fries, 4 butter fat, 5 processed meat and 6 candy samples) was also carried out. Results revealed that 2 French fries samples and 4 processed meat samples contained lard, as indicated by their higher PAEF, while the butter fat and candy samples were free of lard.

Keywords: Lard, adulteration, PAEF, goat, triglycerides.

PDF Downloads: 2937
122 Food Security in the Middle East and North Africa

Authors: Sara D. Garduño-Diaz, Philippe Y. Garduño-Diaz

Abstract:

To date, one of the few comprehensive indicators for the measurement of food security is the Global Food Security Index (GFSI). This index is a dynamic quantitative and qualitative benchmarking model, constructed from 28 unique indicators, that measures drivers of food security across both developing and developed countries. Whereas the GFSI has been calculated for a set of 109 countries, in this paper we aim to present and compare, for the Middle East and North Africa (MENA), 1) the Food Security Index scores achieved and 2) the data available on the affordability, availability, and quality of food. The data for this work were taken from the latest available report published by the creators of the GFSI, which in turn used information from national and international statistical sources. MENA countries rank from 17th of 109 (Israel, although with recent political turmoil this is likely to have changed) to 91st of 109 (Yemen), with the share of household expenditure spent on food ranging from 15.5% (Israel) to 60% (Egypt). Lower spending on food as a share of household consumption in most countries and better food safety net programs in the MENA have contributed to a notable increase in food affordability. The region has also, however, experienced a decline in food availability, owing to more limited food supplies and higher volatility of agricultural production. In terms of food quality and safety, the MENA region includes the top-ranking country (Israel). The most frequent challenges faced by the countries of the MENA include public expenditure on agricultural research and development as well as the volatility of agricultural production. Food security is a complex phenomenon that interacts with many other indicators of a country's wellbeing; in the MENA it is slowly but markedly improving.

Keywords: Diet, food insecurity, global food security index, nutrition, sustainability.

PDF Downloads: 3982
121 Women's Employment Issues in Georgia and Solutions Based on European Experience

Authors: N. Damenia, E. Kharaishvili, N. Sagareishvili, M. Saghareishvili

Abstract:

Women's employment is one of the most important issues in the global economy. The article discusses this topic in Georgia through its historical context, Soviet experience, and modern perspectives, and examines segmentation in terms of employment and related problems. Based on statistical analysis, women's unemployment rate and its determining factors are analyzed. The level of women's employment in Transcaucasia (Georgia, Armenia, and Azerbaijan) is discussed and compared with the Baltic countries (Lithuania, Latvia, and Estonia). The study also analyzes women's level of development according to the average age of marriage and the level of migration. The focus is on Georgia's Association Agreement with the EU in 2014, which covers economic, social, trade and political issues, one part of which is gender equality in the workplace. According to the research, the average monthly remuneration of women managers in the financial and insurance sector equaled 1,044.6 Georgian lari (GEL), while the average monthly remuneration in the overall business sector equaled 961.1 GEL. Average salaries are increasing; however, the employment rate remains problematic. For example, in 2017, 74.6% of men and 50.8% of women in the total workforce were employed. It is also notable that the proportion of women to men in managerial positions is 29% to 71%. Based on the results, the main recommendation for government and civil society is to consider women as a part of the country's economic development, drawing on the experience of developed countries. It is important to create additional jobs in urban and rural areas and to help migrant women return and use their working potential properly.

Keywords: Employment of women, segregation in terms of employment, women's employment level in Transcaucasia, migration level.

PDF Downloads: 708
120 Advanced Stochastic Models for Partially Developed Speckle

Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije

Abstract:

Speckled images arise when coherent microwave, optical, or acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in closed canonical form. It is observed that as the mean number of scatterers in a resolution cell increases, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise, as demonstrated by the Central Limit Theorem.
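
The limiting behavior noted at the end of the abstract can be checked numerically by summing a Poisson-distributed number of random-phase scatterer contributions per resolution cell; this is a simplified simulation, not the closed-form Gram-Charlier model derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(mean_scatterers, n_cells=20_000):
    """Simulate intensity in resolution cells, each containing a Poisson number
    of unit-amplitude scatterers with uniformly random phases."""
    intensities = np.empty(n_cells)
    counts = rng.poisson(mean_scatterers, size=n_cells)
    for i, k in enumerate(counts):
        phases = rng.uniform(0, 2 * np.pi, size=k)
        field = np.sum(np.exp(1j * phases))   # complex sum of scatterer contributions
        intensities[i] = np.abs(field) ** 2
    return intensities

for n in (2, 5, 50):
    I = speckle_intensity(n)
    # For fully developed (exponential-intensity) speckle the contrast approaches unity;
    # with few scatterers per cell (partially developed speckle) it deviates from 1.
    print(f"mean scatterers={n:3d}  contrast (std/mean) = {I.std() / I.mean():.3f}")
```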

Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound

PDF Downloads: 1735
119 Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization

Authors: V. H. Mankar, T. S. Das, S. K. Sarkar

Abstract:

In this paper, we propose a novel blind watermarking architecture oriented towards hardware implementation in VLSI. To facilitate this hardware realization, the cellular automata (CA) concept is introduced. CA have already been accepted as an attractive structure for VLSI implementation because of their modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread spectrum watermarking techniques are few in number despite their excellent resilience against signal impairments, because of the computational cost and complexity associated with their filter banks and lifting techniques. Cellular automata theory is therefore incorporated to form a new transform-domain technique, the Cellular Automata Transform (CAT). Since CA provide spreading sequences with very low cross-correlation, a CA-based pseudorandom (PN) sequence generator is used in the present work. Considering the watermarking technique as a digital communication process, error control coding (ECC) must also be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual CA-based blocks of the algorithm give better results than some other methods, irrespective of the hardware or software technique used. The Cellular Automata Transform, the CA-based PN sequence generator, and the CA-based ECC are the requisite blocks, developed not only to meet reliable hardware requirements but also to provide the basic spread spectrum watermarking features. The proposed algorithm shows statistical invisibility and resilience against various common signal-processing operations, and its design utilizes the allocated bandwidth of the data transmission channel more efficiently.
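
A minimal sketch of a cellular-automaton pseudorandom sequence generator is given below. An elementary rule-30 CA is used purely for illustration; the paper does not state which CA rule, neighborhood or sampling scheme its generator uses.

```python
import numpy as np

def ca_pn_sequence(length, width=64, rule=30, seed=1):
    """Generate a PN bit sequence by evolving a 1-D elementary CA (periodic boundary)
    and sampling the center cell at each time step."""
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=width, dtype=np.uint8)
    out = np.empty(length, dtype=np.uint8)
    for t in range(length):
        out[t] = state[width // 2]
        left, right = np.roll(state, 1), np.roll(state, -1)
        neighborhood = (left << 2) | (state << 1) | right   # 3-bit neighborhood index 0..7
        state = rule_bits[neighborhood]                     # apply the CA rule table
    return out

bits = ca_pn_sequence(1024)
chips = 2 * bits.astype(int) - 1          # map {0,1} -> {-1,+1} spreading chips
print("balance of +1/-1 chips:", chips.sum())
```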

Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.

PDF Downloads: 2060
118 A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Authors: Muhammad Aqil, Ichiro Kita, Moses Macalinao

Abstract:

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study investigates the suitability of the adaptive network-based fuzzy inference system (ANFIS) for continuous water level modeling. A hybrid learning algorithm, which combines the least squares method and the backpropagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the 2002 hydrological year with a sampling interval of 1 hour. The number of antecedent water levels to include in the input variables is determined by two statistical methods, the autocorrelation function and the partial autocorrelation function. Forecasting was done from 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the ANFIS model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, ANFIS provides accurate and reliable water level prediction 1 hour ahead, where MAPE = 1.15% and a correlation of 0.98 were achieved. For predictions up to 12 hours ahead, the model still shows relatively good performance, with a prediction error of less than 9.65%. The preliminary results provide useful guidance for the design of flood early warning systems, in which the magnitude and timing of a potential extreme flood are indicated.
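
The lag-selection step can be sketched with statsmodels as follows; the hourly series below is synthetic (an AR(2)-like process) rather than the 2002 water level records, and the 95% significance bound is the usual large-sample approximation.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(0)

# Synthetic hourly water level series (illustrative only).
n = 2000
level = np.zeros(n)
for t in range(2, n):
    level[t] = 1.2 * level[t - 1] - 0.3 * level[t - 2] + rng.normal(scale=0.05)

acf_vals = acf(level, nlags=12)
pacf_vals = pacf(level, nlags=12)

# Keep the antecedent lags whose partial autocorrelation exceeds the ~95% bound.
bound = 1.96 / np.sqrt(n)
significant_lags = [k for k in range(1, 13) if abs(pacf_vals[k]) > bound]
print("candidate antecedent water-level lags:", significant_lags)
```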

Keywords: Neural Network, Fuzzy, River, Forecasting

PDF Downloads: 1279
117 Screen of MicroRNA Targets in Zebrafish Using Heterogeneous Data Sources: A Case Study for Dre-miR-10 and Dre-miR-196

Authors: Yanju Zhang, Joost M. Woltering, Fons J. Verbeek

Abstract:

It has been established that microRNAs (miRNAs) play an important role in gene expression through the post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between microRNAs and their target genes, in terms of numbers, types and biological relevance, remain largely unclear. Dissecting miRNA-target relationships will provide more insight into miRNA target identification and validation and therefore promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction in zebrafish. This algorithm is high-throughput but produces many false positives (noise). Since large-scale validation of targets through laboratory experiments is very time consuming, computational methods for miRNA target validation are needed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the miRanda-predicted target pool. This is achieved using techniques ranging from statistical tests to clustering and association rules. Our research focuses on zebrafish. It was found that validated targets do not necessarily have the highest sequence matching. Besides, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, it was found that the predicted target genes hoxd13a, hoxd11a, hoxd10a and hoxc4a of dre-miR-10, and hoxa9a, hoxc8a and hoxa13a of dre-miR-196, have similar characteristics to validated target genes and therefore represent high-confidence target candidates.

Keywords: MicroRNA targets validation, microRNA-target relationships, dre-miR-10, dre-miR-196.

PDF Downloads: 1979
116 Response of Diaphragmatic Excursion to Inspiratory Muscle Trainer Post Thoracotomy

Authors: H. M. Haytham, E. A. Azza, E.S. Mohamed, E. G. Nesreen

Abstract:

Thoracotomy is a major surgery with serious pulmonary complications, so the purpose of this study was to determine the response of diaphragmatic excursion to an inspiratory muscle trainer after thoracotomy. Thirty patients of both sexes (16 men and 14 women), aged 20 to 40 years, who had undergone thoracotomy participated in this study. The practical work was done in the cardiothoracic department of Kasr-El-Aini Hospital, Faculty of Medicine, starting 3 days postoperatively. Patients were assigned to two groups. Group A (study group) included 15 patients (8 men and 7 women) who received inspiratory muscle training using an inspiratory muscle trainer for 20 minutes plus routine chest physiotherapy (deep breathing, coughing and early ambulation) twice daily, 3 days per week, for one month. Group B (control group) included 15 patients (8 men and 7 women) who received the routine chest physiotherapy only (deep breathing, coughing and early ambulation) twice daily, 3 days per week, for one month. Ultrasonography was used to evaluate the changes in diaphragmatic excursion before and after the training program. Statistical analysis revealed a significantly greater increase in diaphragmatic excursion in the study group (59.52%) than in the control group (18.66%) after using the inspiratory muscle trainer postoperatively. It was concluded that the inspiratory muscle training device increases diaphragmatic excursion in patients after thoracotomy by improving inspiratory muscle strength and the mechanics of breathing, and that the inspiratory muscle trainer can be used as a physical therapy rehabilitation method to reduce post-operative pulmonary complications after thoracotomy.

Keywords: Diaphragmatic excursion, inspiratory muscle trainer, ultrasonography, thoracotomy.

PDF Downloads: 1550
115 Iris Recognition Based On the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body and has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using first-order gradient operators applied to grayscale images. The gradient-based features are robust, to a certain extent, against variations in the contrast or brightness of iris image samples; such variations mostly occur due to lighting differences and camera changes. First, the iris region is located and then remapped to a rectangular area of 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark eyelash and eyelid pixels as noise points. To account for feature localization (variation), the rectangular iris image is partitioned into N overlapping sub-images (blocks); from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance-based classifier was used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
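
A minimal sketch of block-wise directional gradient features follows. The block size, norm order, use of non-overlapping blocks and the diagonal-gradient approximation are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def block_gradient_features(strip, block=(30, 30), p=1):
    """Compute low-order (L_p) averages of horizontal, vertical and diagonal gradient
    components for each block of a normalized iris strip."""
    gy, gx = np.gradient(strip.astype(float))          # vertical and horizontal gradients
    gd1 = gx + gy                                       # one diagonal direction (approx.)
    gd2 = gx - gy                                       # the other diagonal direction (approx.)
    h, w = strip.shape
    feats = []
    for r in range(0, h - block[0] + 1, block[0]):
        for c in range(0, w - block[1] + 1, block[1]):
            for g in (gx, gy, gd1, gd2):
                blk = g[r:r + block[0], c:c + block[1]]
                feats.append(np.mean(np.abs(blk) ** p))  # average low-order gradient density
    return np.array(feats)

rng = np.random.default_rng(0)
strip = rng.random((60, 360))        # placeholder for a 360x60 normalized iris image
template = rng.random((60, 360))     # placeholder enrolled template image
d = np.linalg.norm(block_gradient_features(strip) - block_gradient_features(template))
print(f"Euclidean distance between feature vectors: {d:.4f}")
```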

Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.

PDF Downloads: 1959
114 Review of the Road Crash Data Availability in Iraq

Authors: Abeer K. Jameel, Harry Evdorides

Abstract:

Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To control road risk, the General Statistical Organization of the Iraqi Ministry of Planning started to organise a collection system for traffic accident data, with details related to their causes and severity; these data are published as an annual report. In this paper, a review of the available crash data in Iraq is presented. The available data represent accident rates at an aggregated level, classified by crash type, road user details, crash severity, vehicle type, causes and number of casualties. The review considers the types of models used in road safety studies and research, as well as the road safety data required for road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom as an example of a developed country. It is concluded that the data in Iraq are suitable for descriptive and exploratory models, aggregated-level comparison analysis, and evaluating and monitoring the progress of overall traffic safety performance. However, important traffic safety studies require disaggregated data and details related to the factors affecting the likelihood of traffic crashes. Some studies require spatial details such as the locations of accidents, which are essential for ranking roads according to their level of safety and identifying the most dangerous roads in Iraq, which in turn requires a tactical plan to control this issue. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies based on road attribute data only; therefore, this research recommends using one of these methodologies.

Keywords: Data availability, Iraq, road safety.

PDF Downloads: 916
113 Multiple Targets Classification and Fuzzy Logic Decision Fusion in Wireless Sensor Networks

Authors: Ahmad Aljaafreh

Abstract:

This paper proposes a hierarchical hidden Markov model (HHMM) to model the detection of M vehicles in a wireless sensor network (WSN). The HHMM contains an extra level of hidden Markov model to model the temporal transitions of each state of the first HMM. By modeling the temporal transitions, only those hypotheses with nonzero transition probabilities need to be tested; thus, this method efficiently reduces the computational load, which is preferable in WSN applications. This paper integrates several techniques to optimize detection performance. The output of the states of the first HMM is modeled as a Gaussian Mixture Model (GMM), where the number of states and the number of Gaussians are determined experimentally, while the other parameters are estimated using Expectation Maximization (EM). The HHMM is used to model the sequence of local decisions, which are based on multiple hypothesis testing with a maximum likelihood approach. The states in the HHMM represent various combinations of vehicles of different types. Due to the statistical advantages of multisensor data fusion, we propose a heuristic based on fuzzy weighted majority voting to enhance the cooperative classification of moving vehicles within a region monitored by a wireless sensor network. A fuzzy inference system weighs each local decision based on the signal-to-noise ratio of the acoustic signal for target detection and the signal-to-noise ratio of the radio signal for sensor communication. The spatial correlation among the observations of neighboring sensor nodes is efficiently utilized, as well as the temporal correlation. Simulation results demonstrate the efficiency of this scheme.
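
The decision-fusion idea can be sketched as follows, with the fuzzy inference system reduced for illustration to a simple weight that grows with both the acoustic SNR and the radio SNR; the membership design in the paper is richer than this, and all numbers below are hypothetical.

```python
import numpy as np

def weighted_majority_vote(decisions, acoustic_snr_db, radio_snr_db, n_classes):
    """Fuse local class decisions from sensor nodes using SNR-derived weights.
    decisions: per-node class labels; SNRs in dB control each node's weight."""
    # Simple membership surrogate: squash each SNR into [0, 1] and combine them.
    mu_acoustic = 1.0 / (1.0 + np.exp(-(np.asarray(acoustic_snr_db) - 10) / 3))
    mu_radio    = 1.0 / (1.0 + np.exp(-(np.asarray(radio_snr_db) - 10) / 3))
    weights = mu_acoustic * mu_radio

    scores = np.zeros(n_classes)
    for label, w in zip(decisions, weights):
        scores[label] += w            # each node votes with its fuzzy weight
    return int(np.argmax(scores)), scores

decision, scores = weighted_majority_vote(
    decisions=[0, 1, 1, 0, 1],                # hypothetical local decisions (vehicle classes)
    acoustic_snr_db=[18, 6, 15, 4, 20],
    radio_snr_db=[12, 14, 9, 5, 16],
    n_classes=2,
)
print("fused class:", decision, "scores:", scores)
```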

Keywords: Classification, decision fusion, fuzzy logic, hidden Markov model

PDF Downloads: 6238
112 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models (DPM) achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location; the range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
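
The frequency-domain convolution trick mentioned above can be illustrated in isolation: filter responses are correlations, which can be evaluated as FFT-based convolutions with the flipped filter. This is a generic sketch with a random feature map and filter, not the authors' HOG/DPM pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve, correlate2d

rng = np.random.default_rng(0)
feature_map = rng.random((480, 640))   # placeholder single-channel feature map
part_filter = rng.random((6, 6))       # placeholder DPM part filter

# Correlation of a filter with a feature map equals convolution with the flipped filter,
# so the FFT-based convolution routine can be used to evaluate filter responses.
response_fft = fftconvolve(feature_map, part_filter[::-1, ::-1], mode='valid')

# Direct spatial-domain correlation of the same pair, for comparison (slower for large filters).
response_direct = correlate2d(feature_map, part_filter, mode='valid')

print("max abs difference:", np.max(np.abs(response_fft - response_direct)))
```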

Keywords: Autonomous vehicles, deformable part model, dpm, pedestrian recognition.

PDF Downloads: 1386
111 The Impact of HIV/AIDS on Micro-enterprise Development in Kenya: A Study of Obunga Slum in Kisumu

Authors: C. A. Oloo, C. Ojwang

Abstract:

The performance of small and medium enterprises has stagnated in the last two decades, mainly due to the emergence of HIV/AIDS. The disease has had a detrimental effect on the general economy of the country, leading to morbidity and mortality in the Kenyan workforce in its prime age. The present study sought to establish the economic impact of HIV/AIDS on micro-enterprise development in Obunga slum, Kisumu, in terms of production loss and increasing labor-related costs, and to establish possible strategies for addressing this impact. The study was necessitated by the observation that most micro-enterprises in the slum face a severe economic and social crisis due to the impact of HIV/AIDS: they become depleted and close down within a short time due to the death of skilled and experienced workers. The study was carried out between June 2008 and June 2009 in Obunga slum. Data were subjected to computer-aided statistical analysis that included descriptive statistics, chi-squared and ANOVA techniques. Chi-squared analysis of micro-enterprise owners' opinions on the impact of HIV/AIDS on micro-enterprise depletion, compared to other diseases, indicated strong negative effects of the disease (significant at P<0.01). Analysis of variance of the impact of HIV/AIDS on the performance and productivity of micro-enterprises also indicated a negative effect on general performance (significant at P<0.01). Therefore, to reduce the negative impacts of HIV/AIDS on micro-enterprise development, there is a need to improve the socioeconomic environment, mobilize donors and stakeholders in training and funding, and review the current strategies for addressing the disease. Further conclusive research should also be conducted on a bigger scale.

Keywords: Entrepreneurship, HIV-AIDS, Micro-enterprise, Poverty.

PDF Downloads: 2389
110 Evaluation and Analysis of Lean-Based Manufacturing Equipment and Technology System for Jordanian Industries

Authors: Mohammad D. AL-Tahat, Shahnaz M. Alkhalil

Abstract:

The driving forces of international markets are changing continuously; therefore, companies need to gain a competitive edge in such markets, and improving a company's products, processes and practices is no longer optional. Lean production is a production management philosophy that consolidates work tasks with minimum waste, resulting in improved productivity. Lean production practices can be mapped onto many production areas, one of which is Manufacturing Equipment and Technology (MET). Many lean production practices can be implemented in MET, namely specific equipment configurations, total preventive maintenance, visual control, new equipment/technologies, production process reengineering and a shared vision of perfection. The purpose of this paper is to investigate the implementation level of these six practices in Jordanian industries. To achieve this, a questionnaire survey was designed using a five-point Likert scale and validated through a pilot study and expert review. A sample of 350 Jordanian companies was surveyed; the response rate was 83%. The respondents were asked to rate the extent of implementation of each of the practices. A conceptual relationship model is developed, hypotheses are proposed, and the essential statistical analyses are performed. An assessment tool that enables management to monitor the progress and effectiveness of lean practice implementation is designed and presented. The results show that the average implementation level of lean practices in MET is 77%, that Jordanian companies are successfully implementing the considered lean production practices, and that the presented model has a Cronbach's alpha value of 0.87, which is good evidence of model consistency and result validity.
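
Cronbach's alpha, used above as the consistency measure, can be computed directly from item responses as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with a hypothetical Likert response matrix (rows = respondents, columns = items):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, shape (n_respondents, n_items), Likert-scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from 6 respondents on 4 items (illustrative only).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```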

Keywords: Lean Production, SME applications, Visual Control, New equipment/technologies, Specific equipment configurations, Jordan

PDF Downloads: 2288
109 A Robust Method for Finding Nearest-Neighbor using Hexagon Cells

Authors: Ahmad Attiq Al-Ogaibi, Ahmad Sharieh, Moh’d Belal Al-Zoubi, R. Bremananth

Abstract:

In pattern clustering, nearest-neighbor point computation is a challenging issue for many applications in research areas such as remote sensing, computer vision, pattern recognition and statistical imaging. Nearest-neighbor computation is essential for providing sufficient classification among the volume of pixels (voxels) in order to localize the active regions of interest (AROI). Furthermore, it is needed to compute spatial metric relationships across diverse imaging areas in pattern recognition applications. In this paper, we propose a new methodology for finding the nearest-neighbor point, based on constructing a virtual grid of hexagon cells and then locating every point within them. An algorithm is suggested that minimizes the computation and improves the turnaround time of the process. The nearest neighbors of a query point Φ are fetched by searching the hexagon cells level by level, and the search is repeated until a candidate for the AROI of Φ is found. The first hexagon is considered level 0 (L0) and the surrounding hexagons level 1 (L1); if a point Υ is located in L1, the search continues in the next level (L2) to ensure that Υ is indeed the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in minimizing the time complexity required for searching the neighbors; in turn, the efficiency of classification is improved considerably.
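
The level-by-level search idea can be sketched with a cell-bucketed nearest-neighbor query. For simplicity the sketch below uses square cells and Chebyshev rings instead of hexagons; the ring levels play the same role as the hexagon levels L0, L1, L2 in the paper, and the points are illustrative.

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    """Bucket 2-D points into square cells of side `cell`."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def nearest_neighbor(query, grid, cell, max_level=50):
    qc = (int(query[0] // cell), int(query[1] // cell))
    best, best_d = None, math.inf
    for level in range(max_level):
        # Visit all cells on ring `level` around the query cell (level 0 = the cell itself).
        for dx in range(-level, level + 1):
            for dy in range(-level, level + 1):
                if max(abs(dx), abs(dy)) != level:
                    continue
                for p in grid.get((qc[0] + dx, qc[1] + dy), []):
                    d = math.dist(query, p)
                    if d < best_d:
                        best, best_d = p, d
        # Any point not yet examined lies at least level*cell away from the query,
        # so once the current best is within that radius the search can stop.
        if best is not None and best_d <= level * cell:
            return best
    return best

pts = [(2.3, 4.1), (9.7, 1.2), (5.5, 5.9), (0.4, 8.8)]   # illustrative points
g = build_grid(pts, cell=2.0)
print(nearest_neighbor((6.0, 6.0), g, cell=2.0))
```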

Keywords: Hexagon cells, k-nearest neighbors, Nearest Neighbor, Pattern recognition, Query pattern, Virtually grid

PDF Downloads: 2789
108 Climate Change in Albania and Its Effect on Cereal Yield

Authors: L. Basha, E. Gjika

Abstract:

This study focuses on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperature and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the relationship between cereal yield and each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have a multicollinearity problem. Machine learning methods, such as Random Forest (RF), are used to predict cereal yield responses to climatic and other variables. RF showed high accuracy compared to the other statistical models in predicting cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production show a positive effect on production. Our results show that the RF method is an effective and versatile machine learning method for cereal yield prediction compared to the other two methods, multiple linear regression and lasso regression.
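
A minimal sketch of the random forest step with scikit-learn is shown below. The feature names mirror those listed above, but the data frame is synthetic, not the 1960-2021 Albanian series.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 62   # one row per year, 1960-2021

# Synthetic stand-in for the study's variables (illustrative only).
df = pd.DataFrame({
    "avg_temperature": rng.normal(12.5, 1.0, n),
    "avg_rainfall": rng.normal(100, 20, n),
    "fertilizer_consumption": rng.normal(80, 15, n),
    "arable_land": rng.normal(400, 30, n),
    "land_under_cereal": rng.normal(140, 15, n),
    "n2o_emissions": rng.normal(2.0, 0.3, n),
})
df["cereal_yield"] = (
    3.5 - 0.2 * (df["avg_temperature"] - 12.5) + 0.01 * df["fertilizer_consumption"]
    + rng.normal(0, 0.1, n)
)

X, y = df.drop(columns="cereal_yield"), df["cereal_yield"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

rf = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("test R2:", round(r2_score(y_te, rf.predict(X_te)), 3))
print(dict(zip(X.columns, rf.feature_importances_.round(3))))
```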

Keywords: Cereal yield, climate change, machine learning, multiple regression model, random forest.

PDF Downloads: 200
107 Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using Neural Networks Model

Authors: B. Ghanbarian-Alavijeh, A.M. Liaghat, S. Sohrabi

Abstract:

Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface groundwater. Since its direct measurement is time-consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, to estimate saturated hydraulic conductivity from readily available soil properties, e.g. sand, silt and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural network (NN) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, the van Genuchten retention model parameters (i.e. θ_r, α, and n), as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s − θ_FC, and (2) φ_eff = θ_s − θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at -33 kPa matric potential, and θ_inf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database were divided into three groups: 187 for training, 62 for validation (to avoid overtraining), and 62 for testing the NN model. The commercial neural network toolbox of MATLAB, with a multi-layer perceptron model and the backpropagation algorithm, was used for the training procedure. Statistical parameters such as the correlation coefficient (R2) and the mean square error (MSE) were used to evaluate the developed NN model. The best number of neurons in the middle layer of the NN model was 44 for method (1) and 6 for method (2). The R2 and MSE values in the test phase were 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), respectively, which shows that method (2) estimates saturated hydraulic conductivity better than method (1).
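
As an illustration of this pedotransfer-style mapping, the sketch below trains a small multi-layer perceptron with scikit-learn on synthetic inputs; it is not the MATLAB toolbox setup nor the UNSODA samples, and the split sizes simply mirror those quoted above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 311

# Synthetic stand-ins for sand, clay, bulk density and effective porosity (illustrative).
X = np.column_stack([
    rng.uniform(5, 90, n),       # sand content, %
    rng.uniform(5, 60, n),       # clay content, %
    rng.uniform(1.1, 1.7, n),    # bulk density, g/cm3
    rng.uniform(0.05, 0.35, n),  # effective porosity
])
log_ks = 0.02 * X[:, 0] - 0.03 * X[:, 1] - 1.5 * X[:, 2] + 6.0 * X[:, 3] + rng.normal(0, 0.2, n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0),
)
model.fit(X[:187], log_ks[:187])                 # training split, as in the abstract
print("test R2:", round(model.score(X[249:], log_ks[249:]), 3))  # last 62 samples as test set
```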

Keywords: Neural network, Saturated hydraulic conductivity, Soil physical properties.

PDF Downloads: 2550
106 Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia

Authors: N. A. Samat, S. H. Mohd Imam Ma’arof

Abstract:

Dengue is an infectious vector-borne viral disease commonly found in tropical and sub-tropical regions around the world, including Malaysia, especially in urban and semi-urban areas. There is currently no available vaccine or chemotherapy for the prevention or treatment of dengue disease; therefore, prevention and treatment depend on vector surveillance and control measures. Disease risk mapping has been recognized as an important tool in prevention and control strategies for diseases, and the choice of statistical model used for relative risk estimation is important, as a good model will subsequently produce a good disease risk map. Therefore, the aim of this study is to estimate the relative risk for dengue disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and then on one of the earliest applications of Bayesian methodology, the Poisson-gamma model. This paper begins by providing a review of the SMR method, which we then apply to dengue data from Perak, Malaysia. We then fit an extension of the SMR method, the Poisson-gamma model. Both sets of results are displayed and compared using graphs, tables and maps. The results of the analysis show that the latter method gives better relative risk estimates than the SMR. The Poisson-gamma model has been demonstrated to overcome the problem of the SMR when there are no observed dengue cases in certain regions. However, covariate adjustment in this model is difficult, and it does not allow for spatial correlation between risks in adjacent areas. These drawbacks have motivated many researchers to propose alternative methods for estimating the risk.
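
A minimal numerical sketch of the two estimators follows, with hypothetical observed and expected case counts; the Gamma prior parameters are fixed here purely for illustration, whereas in practice they are estimated from the data (e.g., by method of moments or maximum likelihood).

```python
import numpy as np

# Hypothetical observed and expected dengue counts for a few districts (illustrative only).
observed = np.array([12, 0, 45, 7, 3])
expected = np.array([10.2, 4.1, 30.5, 6.8, 5.0])

# Standardized Morbidity Ratio; it collapses to 0 whenever no cases are observed,
# which is the weakness of the SMR noted in the abstract.
smr = observed / expected

# Poisson-gamma model: with a Gamma(a, b) prior on the relative risk and
# O_i ~ Poisson(E_i * RR_i), the posterior mean is (a + O_i) / (b + E_i).
a, b = 2.0, 2.0                      # assumed prior parameters (illustration only)
rr_pg = (a + observed) / (b + expected)

for i, (s, r) in enumerate(zip(smr, rr_pg)):
    print(f"district {i}: SMR = {s:.2f}, Poisson-gamma RR = {r:.2f}")
```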

Keywords: Dengue disease, Disease mapping, Standardized Morbidity Ratio, Poisson-gamma model, Relative risk.

PDF Downloads: 3284