Search results for: total error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10469

9779 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping

Authors: Emily Rowe

Abstract:

Introduction: Balance assessments can be used to help evaluate a person’s risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot off and foot placement error as an outcome measure of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measuring foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants, each of whom attended two sessions. The trial task was to step onto one of four targets (two for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out.
Results: It was found that foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task, and it calls into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from, and much less precise than, the motion capture system (p = 0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure of anteroposterior and mediolateral COM velocities (AP velocity: p < 0.001; ML velocity, targets 1 to 4: p = 0.734, 0.001, <0.001 and 0.376). However, it is thought that with further development the validity of the COM velocity measure could be improved. Options that could be investigated include the effect of inertial sensor placement relative to pelvic marker placement, and more complex data-processing methods to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed.
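The reliability analysis above relies on ICC estimates from a mean-rating (k = 2), absolute-agreement, two-way model. As an illustration only (not the authors' code), a minimal sketch of that ICC computed from the classical mean-squares decomposition, using hypothetical two-session ratings:

```python
import numpy as np

def icc_a_k(ratings):
    """ICC for absolute agreement, average of k measurements
    (two-way model), via the classical mean-squares decomposition.
    `ratings` is an (n subjects x k sessions) matrix."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    # ICC(A,k): absolute agreement, average of k ratings
    return (msr - mse) / (msr + (msc - mse) / n)

# Perfect test-retest agreement yields ICC = 1
print(icc_a_k([[1, 1], [2, 2], [3, 3]]))
# Small session-to-session noise yields ICC just below 1
print(icc_a_k([[1, 1.1], [2, 2.2], [3, 2.9], [4, 4.3], [5, 4.8]]))
```

A wide confidence interval on this point estimate, as reported above, is what signals unreliable measurement between sessions.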

Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables

Procedia PDF Downloads 144
9778 Evaluation of Vehicle Classification Categories: Florida Case Study

Authors: Ren Moses, Jaqueline Masaki

Abstract:

This paper addresses the need for an accurate and updated vehicle classification system through a thorough evaluation of vehicle class categories, identifying errors arising from the existing system and proposing modifications. Data collected from two permanent traffic monitoring sites in Florida were used to evaluate the performance of the existing vehicle classification table. The vehicle data were collected and classified by an automatic vehicle classifier (AVC), and a video camera was used to obtain ground truth data. The Federal Highway Administration (FHWA) vehicle classification definitions were used to assign vehicle classes from the video and compare them to the data generated by the AVC in order to identify the sources of misclassification. Six types of errors were identified, and modifications were made to the classification table to improve classification accuracy. The results of this study include an updated vehicle classification table that reduces total error by 5.1%, a step-by-step procedure for evaluating vehicle classification studies, and recommendations to improve the FHWA 13-category rule set. The recommendations indicate that the vehicle classification definitions in this scheme need to be updated to reflect the distribution of current traffic. The presented results will be of interest to state transportation departments, consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design, and maintenance of transportation infrastructure.
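The error-identification step, comparing AVC output against video ground truth, amounts to tallying misclassification types and a total error rate. A minimal illustrative sketch with hypothetical FHWA class labels (not the study's actual data or procedure):

```python
from collections import Counter

def classification_errors(ground_truth, avc_output):
    """Tally AVC misclassifications against video ground truth.
    Returns the total error rate and a breakdown of error types
    as (true_class, assigned_class) pairs."""
    assert len(ground_truth) == len(avc_output)
    errors = Counter(
        (t, a) for t, a in zip(ground_truth, avc_output) if t != a
    )
    total_error = sum(errors.values()) / len(ground_truth)
    return total_error, errors

# Hypothetical labels: FHWA class 2 = car, 3 = pickup, 5 = single-unit truck
truth = [2, 2, 3, 5, 2, 3, 5, 2, 3, 2]
avc   = [2, 3, 3, 5, 2, 2, 5, 2, 3, 2]
rate, breakdown = classification_errors(truth, avc)
print(rate)       # 0.2 (2 of 10 vehicles misclassified)
```

Comparing such breakdowns before and after a rule-set modification is one way to quantify a reduction in total error of the kind reported here.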

Keywords: vehicle classification, traffic monitoring, pavement design, highway traffic

Procedia PDF Downloads 175
9777 Improving Productivity in a Glass Production Line through Applying Principles of Total Productive Maintenance (TPM)

Authors: Omar Bataineh

Abstract:

Total productive maintenance (TPM) is a principle-based method that aims to achieve high-level production with no breakdowns, no slow running, and no defects. Key principles of TPM were applied in this work to improve the performance of the glass production line at United Beverage Company in Kuwait, which produces soft drink bottles. Principles such as 5S as a foundation for TPM implementation, developing a program for equipment management, Cause and Effect Analysis (CEA), quality improvement, and training and education of employees were employed. After the completion of TPM implementation, it was possible to increase the Overall Equipment Effectiveness (OEE) from 23% to 40%.
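OEE is conventionally the product of availability, performance, and quality rates. A small sketch of that calculation; the factor values below are illustrative assumptions chosen to land near the 23% and 40% levels reported, not figures from the paper:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness as the product of the three
    standard TPM factors, each expressed as a fraction (0..1)."""
    return availability * performance * quality

# Hypothetical factor values yielding roughly the reported levels:
before = oee(0.60, 0.55, 0.70)   # ~0.231, i.e. about 23%
after  = oee(0.75, 0.65, 0.82)   # ~0.400, i.e. about 40%
print(before, after)
```

The decomposition is useful in practice because it shows whether breakdowns (availability), slow running (performance), or defects (quality) dominate the loss.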

Keywords: OEE, TPM, FMEA, CEA

Procedia PDF Downloads 333
9776 In vitro Bioaccessibility of Phenolic Compounds from Spray-Dried and Lyophilized Fruit Powder

Authors: Carolina Beres, Laurine Da Silva, Danielle Pereira, Ana Ribeiro, Renata Tonon, Caroline Mellinger-Silva, Karina Dos Santos, Flavia Gomes, Lourdes Cabral

Abstract:

The health benefits of bioactive compounds such as phenolics are well known, and the main sources of these compounds are fruits and their derivatives. The objective of this study was to assess the bioaccessibility of phenolic compounds from grape pomace and juçara dried extracts. For this purpose, both characterized extracts were submitted to a simulated human digestion, and total phenolic content, total anthocyanins and antioxidant scavenging capacity were determined in the digestive fractions (oral, gastric, intestinal and colonic). Juçara had a higher anthocyanin bioaccessibility (17.16%) than grape pomace (2.08%). The opposite result was found for total phenolic compounds, for which the higher bioaccessibility was for grape (400%). The increase in phenolic compounds indicates a more accessible compound in the human gut. The lyophilization process had a beneficial impact on the final accessibility of the phenolic compounds, making it the more promising technique.

Keywords: bioaccessibility, phenolic compounds, grape, juçara

Procedia PDF Downloads 208
9775 Lifetime Improvement of a Structural Clamp Using Fatigue Analysis

Authors: Pisut Boonkaew, Jatuporn Thongsri

Abstract:

In the hard disk drive manufacturing industry, the processes of removing unnecessary parts and qualifying parts before assembly are important. Thus, a clamp was designed and fabricated as a fixture for holding parts during the testing process. Testing by trial and error consumes a long time, so simulation was introduced to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs in it. Hence, simulation was used to study the behavior of stress and compressive force in order to improve the clamp's life expectancy across all 27 candidate designs (excluding repeated designs). The candidate designs were enumerated following the full factorial approach of the six sigma methodology. Six sigma is a well-structured method for improving quality by detecting and reducing the variability of the process, so that defects decrease while process capability increases. This research focuses on reducing stress and fatigue while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD geometry under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm is recorded. The ANSYS setup was verified by a mesh convergence study, and the percentage error relative to the experimental result was confirmed not to exceed the acceptable range. The improved design therefore varies the angle, radius, and length to reduce stress while keeping the force acceptable. Fatigue analysis was then performed in ANSYS to confirm that the lifetime is extended.
The simulation settings were also validated against the actual clamp in order to observe the difference in fatigue between the two designs. This yields a lifetime improvement of up to 57% compared with the actual clamp used in manufacturing. The study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Through the combination of the six sigma method, finite element analysis, fatigue analysis and linear regression analysis, this project is expected to save up to 60 million dollars annually.

Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability

Procedia PDF Downloads 232
9774 Artificial Intelligence in the Design of a Retaining Structure

Authors: Kelvin Lo

Abstract:

Nowadays, numerical modelling in geotechnical engineering is common but sophisticated: many advanced input settings and considerable computational effort are required to optimize a design and reduce construction cost. Optimization usually requires large numerical models, and if it is conducted manually there is a potentially dangerous consequence from human error, while the time spent on input and on data extraction from output is significant. This paper presents an automation process applied to numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code controls the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code updates the geological stratum and excavation depth under groundwater flow conditions in each 20 m section. It automatically conducts trial and error to determine the required pile length and the use of props to achieve the required factor of safety and target displacement. Once the bending moment of the pile exceeds its capacity, the pile size is increased; when the pile embedment reaches the default maximum length, the prop system is activated. Results showed that the automation saves time, increases efficiency, lowers design costs, and replaces manual labor, minimizing error.
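The trial-and-error logic described, lengthening the pile until targets are met and falling back to props, can be sketched as a simple search loop. This is a hedged illustration of the control flow only: `analyse` is a stand-in for a real numerical-model run (e.g. a Plaxis 2D call through its scripting interface), and the threshold values and toy model below are assumptions, not the project's figures:

```python
def design_section(analyse, pile_lengths, required_fos=1.4, max_disp=0.03):
    """Trial-and-error search mirroring the automated workflow:
    increase pile embedment until the factor of safety (FoS) and
    displacement targets are met; if no length suffices without
    props, repeat the sweep with the prop system activated.
    `analyse(length, props)` must return (fos, displacement_m)."""
    for props in (False, True):
        for length in pile_lengths:
            fos, disp = analyse(length, props)
            if fos >= required_fos and disp <= max_disp:
                return {"pile_length": length, "props": props}
    raise ValueError("no feasible design in the search space")

# Toy stand-in model: longer piles and props both improve behaviour.
def toy_model(length, props):
    fos = 0.08 * length + (0.5 if props else 0.0)
    disp = 0.5 / length - (0.01 if props else 0.0)
    return fos, disp

print(design_section(toy_model, range(10, 26, 2)))
# {'pile_length': 18, 'props': False}
```

In the real workflow the inner call is the expensive step, which is exactly why automating the sweep per 20 m section pays off.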

Keywords: automation, numerical modelling, Python, retaining structures

Procedia PDF Downloads 46
9773 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image

Authors: Salah Abdul Hameed Saleh, Ghada Hasan

Abstract:

The aim of this work is to produce an empirical model for determining particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model; the reflectance is a function of the optical properties of the atmosphere, which can be related to aerosol concentrations. PM10 measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) simultaneously with the Landsat 8 OLI image acquisition. The PM10 measurement locations were recorded with a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (coastal aerosol, blue, green and red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) compared with the PM10 ground measurement data. The proposed multispectral model was selected based on the highest correlation coefficient (R) and lowest root-mean-square error (RMSE) with respect to the PM10 ground data. The outcomes of this research show that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
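Selecting an empirical model by R and RMSE, as described, amounts to a least-squares fit of PM10 against the band reflectances followed by goodness-of-fit scoring. A minimal sketch with synthetic data (the band coefficients and noise level are invented for illustration and do not come from the paper):

```python
import numpy as np

def fit_pm10_model(reflectance, pm10):
    """Least-squares fit of PM10 against band reflectances (columns),
    returning coefficients plus the correlation coefficient R and the
    RMSE used to rank candidate band combinations."""
    X = np.column_stack([np.ones(len(pm10)), reflectance])  # intercept + bands
    coeffs, *_ = np.linalg.lstsq(X, pm10, rcond=None)
    pred = X @ coeffs
    rmse = float(np.sqrt(np.mean((pm10 - pred) ** 2)))
    r = float(np.corrcoef(pred, pm10)[0, 1])
    return coeffs, r, rmse

# Synthetic example: 40 sites, 4 visible-band reflectances,
# PM10 linearly related to two of the bands plus noise.
rng = np.random.default_rng(0)
bands = rng.uniform(0.05, 0.3, size=(40, 4))
pm10 = 20 + 300 * bands[:, 0] + 150 * bands[:, 2] + rng.normal(0, 2, 40)
coeffs, r, rmse = fit_pm10_model(bands, pm10)
print(r, rmse)
```

Running the same fit for each candidate band subset and keeping the highest-R, lowest-RMSE combination reproduces the selection procedure in spirit.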

Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area

Procedia PDF Downloads 438
9772 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimates are used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safety and correct operation. In the present work, a comparison is made between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used simplified version of Bernardi’s equation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters used in a first-order lumped thermal model: the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static tests (no current flowing through the cell) and dynamic tests (current flowing through the cell) are conducted, in which the HFS measures the heat exchanged between the cell and the ambient so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows comparison between the generated heat predicted by Bernardi’s equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi’s-equation total heat generation) and compared with experimental temperature data measured with a T-type thermocouple. The work closes with a critical review of the results obtained and the possible reasons for mismatch.
The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi’s simplified equation. On the one hand, when using Bernardi’s simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error is a maximum of 0.28 ºC in the temperature prediction, in contrast with 1.38 ºC for Bernardi’s simplified equation. This illustrates the limitations of Bernardi’s simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi’s equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi’s equation accounts for no losses after the charging or discharging current is cut. However, the HFS measurement shows that after the current is cut the cell continues generating heat for some time, increasing the error of Bernardi’s equation.
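The simplified form of Bernardi's equation discussed above keeps only the irreversible (overpotential) term and drops the reversible entropic term I·T·dOCV/dT, which is exactly why it reports zero heat once the current is cut. A hedged sketch, with sign convention and toy cell parameters chosen for illustration (not the paper's cell):

```python
def bernardi_simplified(current_a, terminal_v, ocv_v):
    """Simplified Bernardi heat generation: irreversible term only,
    Q = I * (OCV - V), with discharge current taken as positive.
    The full equation adds the entropic term I * T * dOCV/dT,
    which this simplification neglects."""
    return current_a * (ocv_v - terminal_v)

# Toy 1C discharge of a 2.5 Ah cell with an assumed ~20 mOhm
# internal resistance: the terminal voltage sags under load.
i, r_int, ocv = 2.5, 0.020, 3.7
v = ocv - i * r_int
q = bernardi_simplified(i, v, ocv)
print(q)   # ~0.125 W, equivalently I^2 * R for this ohmic toy cell
```

Note that with I = 0 the expression returns exactly zero, whereas the HFS data above show residual heat release after the current is cut, which is one of the reported sources of error.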

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 378
9771 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models

Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue

Abstract:

Time-series data are useful for modelling as they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as knowledge of the network structure, such as which kinases and phosphatases lead to the observed phosphorylation state, is incomplete. Such reactions are therefore often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ models. The incompleteness and uncertainty in the generated model and reactions are presented to the user visually. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated against the model, leading to the highlighting of the nodes causing unsatisfiability (i.e. error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user visually. Thus, in this research we present a framework to enable a user to explore phosphoproteomic time-series data in the context of models. The observer can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. Such a tool enables an end-user to determine the empirical analysis to perform in order to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.

Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation

Procedia PDF Downloads 248
9770 Application of Space Technology at Cadastral Level and Land Resources Management with Special Reference to Bhoomi Sena Project of Uttar Pradesh, India

Authors: A. K. Srivastava, Sandeep K. Singh, A. K. Kulshetra

Abstract:

Agriculture is the backbone of developing countries of the Asian subcontinent, such as India. Uttar Pradesh is the most populous and fifth largest state of India. The total population of the state is 19.95 crore, 16.49% of the country's population, which is more than that of many countries of the world, yet Uttar Pradesh occupies only 7.36% of the total area of India. It is a well-established fact that agriculture has long been the lifeline of the state's economy, and its predominance is likely to continue for a fairly long time. The total geographical area of the state is 242.01 lakh hectares, of which 120.44 lakh hectares face various land degradation problems. These areas need to be put under conservation and reclamation measures at a much faster pace in order to enhance agricultural productivity in the state. Keeping in view the above scenario, the Department of Agriculture, Government of Uttar Pradesh has formulated a multi-purpose project named Bhoomi Sena for the entire state. The main objective of the project is to remediate land degradation using low-cost technology available at the village level. The total outlay of the project is Rs. 39643.75 lakh for an area of about 226000 ha included in the 12th Five Year Plan (2012-13 to 2016-17), and the total employment generated is expected to be 310.60 lakh man-days. An attempt has been made to use space technologies such as remote sensing and geographical information systems at the cadastral level for the overall management of the agricultural engineering work required to treat land degradation. After integration of thematic maps, a proposed action plan map has been prepared for future work.

Keywords: GPS, GIS, remote sensing, topographic survey, cadastral mapping

Procedia PDF Downloads 304
9769 Assessment of Microbiological Status of Branded and Street Vended Ice-Cream Offered for Public Consumption: A Comparative Study in Tangail Municipality, Bangladesh

Authors: Afroza Khatun, Masuma, Md. Younus Mia, Kamal Kanta Das

Abstract:

Analysis of the microbial status and physicochemical parameters of some branded and street-vended ice creams showed that total viable bacteria ranged from 4.8×10³ to 1.10×10⁵ cfu/ml in branded ice cream and from 7.5×10⁴ to 1.6×10⁸ cfu/ml in street-vended ice cream. Total coliform bacteria were present at up to 9.20×10³ cfu/ml in branded ice cream and at 5.3×10³ to 9.6×10⁶ cfu/ml in street-vended ice cream. Total E. coli counts ranged from 0 to 4.5×10³ cfu/ml in branded and from 4.1×10² to 7.5×10⁴ cfu/ml in street ice cream. Staphylococcus aureus counts ranged from 1.8×10² to 2.9×10⁴ cfu/ml (branded) and from 3.9×10⁴ to 7.9×10⁶ cfu/ml (street). The pH of both types of ice cream was acidic to neutral: 5.5 to 6.9 for branded ice cream and 6.2 to 7.0 for street ice cream. Total soluble solids (TSS) ranged from 26 to 29% in the branded ice creams and from 5 to 10% in the street-vended ice creams. The overall results demonstrate that the microbial counts in all street ice creams exceeded the BSTI standard and that their quality was lower than that of the industrially produced branded ice creams, due to comparatively faulty manufacturing processes and poor hygiene practices. The presence of pathogenic microbes was also observed in branded ice creams, which is alarming for public health. It is therefore suggested that the authorized government organization conduct proper monitoring to ensure that both branded and street-vended ice creams are microbiologically safe, to prevent public health hazards.

Keywords: food safety, microbiological analysis, physicochemical, ice-cream, E. coli, Staphylococcus aureus

Procedia PDF Downloads 77
9768 Implementation of Total Quality Management in a Small Scale Industry: A Case Study

Authors: Soham Lalwala, Ronita Singh, Yaman Pattanaik

Abstract:

In the present scenario of globalization and privatization, it is difficult for small-scale industries to sustain themselves due to rapidly increasing competition. In a developing country, most of the gross output is generally obtained from small-scale industries; thus, quality plays a vital role in maintaining customer satisfaction. Total quality management (TQM) is an approach that enables employees to focus on quality rather than quantity, further improving the competitiveness, effectiveness and flexibility of the whole organization. The objective of this paper is to present the application of TQM and to develop a TQM model in a small-scale narrow-fabrics industry in Surat, India, named ‘Rajdhani Lace & Borders’. Critical success factors relating to all the fabric processes involved were identified. The data were collected through a questionnaire survey, and critical areas were visualized using TQM tools such as cause-and-effect diagrams, control charts and run charts. The responses were analyzed, and factor analysis was used to develop the model. The study presented here will aid the management of the above-mentioned industry in identifying the weaker areas and thus give a plausible solution to improve the total productivity of the firm, along with effective utilization of resources and better customer satisfaction.

Keywords: critical success factors, narrow fabrics, quality, small scale industries, total quality management (TQM)

Procedia PDF Downloads 248
9767 Impact of Building Orientation on Energy Performance of Buildings in Kabul, Afghanistan

Authors: Mustafa Karimi, Chikamoto Tomoyuki

Abstract:

The building sector consumes 36% of total global energy, with residential buildings alone responsible for 22%. In residential buildings, energy used for space heating and cooling represents the majority of total energy consumption. Although Afghanistan is amongst the lowest energy users globally, residential buildings’ energy consumption has caused serious environmental issues, especially in the capital city, Kabul. After decades of war in Afghanistan, redevelopment of the built environment has started from scratch in recent years; therefore, to create sustainable urban areas, it is critical to find the most energy-efficient design parameters for buildings that will last for decades. This study assesses the impact of building orientation on the energy performance of buildings in Kabul. It is found that the optimal orientations for buildings in Kabul are south and south-southeast, while west-northwest and northeast orientations are the worst in terms of energy performance. The difference in total energy consumption between the best and the worst orientation is 17.5%.

Keywords: building orientation, energy consumption, residential buildings, Kabul, environmental issues

Procedia PDF Downloads 123
9766 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. This study therefore aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), Built-up Index (BUI), and Modified Built-up Index (MBUI), and applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, after incorporating nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB), the overall classification accuracy improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%. The results suggest that MBUI combined with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
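Of the indices listed, NDBI has the simplest closed form: (SWIR1 − NIR) / (SWIR1 + NIR), exploiting the fact that built-up surfaces reflect more in SWIR than in NIR. A minimal sketch with invented reflectance values (the thresholding shown is the generic NDBI > 0 rule, not the paper's adaptive procedure):

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized Difference Built-up Index from surface reflectance:
    (SWIR1 - NIR) / (SWIR1 + NIR). Values > 0 suggest built-up
    surfaces, which reflect more strongly in SWIR than in NIR."""
    swir = np.asarray(swir, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (swir - nir) / (swir + nir)

# Toy reflectances: a built-up pixel vs a vegetated pixel
built = ndbi(0.30, 0.20)    # ~0.2  (SWIR > NIR)
veg = ndbi(0.15, 0.35)      # ~-0.4 (vegetation: NIR dominates)

# Simple per-pixel built-up mask over a 2-pixel toy scene
mask = ndbi(np.array([0.30, 0.15]), np.array([0.20, 0.35])) > 0
print(mask)   # [ True False]
```

BUI and MBUI build on the same band contrasts; on GEE the identical arithmetic is expressed with `ee.Image.normalizedDifference` over image collections rather than NumPy arrays.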

Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping

Procedia PDF Downloads 118
9765 Prevalence of Anemia and Iron Deficiency in Women of Childbearing Age in the North-West of Libya

Authors: Mustafa Ali Abugila, Basma Nuri Kajruba, Hanan Elhadi, Rehab Ramadan Wali

Abstract:

Iron deficiency anemia is characterized by decreases in Hb (hemoglobin), serum iron, ferritin, and changes in RBC (red blood cell) shape and size, and by an increase in total iron binding capacity (TIBC); red blood cells become microcytic and hypochromic due to their decreased iron content. This study was conducted in the north-west of Libya and included 210 women of childbearing age (18-45 years) who were visiting a women's clinic. After a questionnaire was completed, blood samples were taken and analyzed for hematological and biochemical profiles. Biochemical tests included measurement of serum iron, ferritin, and total iron binding capacity (TIBC). Among the total sample of 210 women, there were 87 (41.42%) pregnant and 123 (58.57%) non-pregnant women (including married and single). The 87 pregnant women were classified according to gestational age into first, second, and third trimesters. The means of the biochemical and hematological parameters in the studied samples were: Hb = 10.37 ± 2.02 g/dl, RBC = 3.78 ± 1.037 m/m³, serum iron = 61.86 ± 40.28 µg/dl, and TIBC = 386.01 ± 94.91 µg/dl. In this study, any woman with hemoglobin below 11.5 g/dl was considered anemic. Of the pregnant women in the third trimester, 89.1%, 69.5%, and 47.8% had below-normal Hb, serum iron, and ferritin, respectively; that is, iron deficiency anemia was most common in the third trimester. Third-trimester pregnant women also had higher TIBC than those in the first and second trimesters.

Keywords: red blood cells, hemoglobin, total iron binding capacity, ferritin

Procedia PDF Downloads 522
9764 Bacteriological Characterization of Drinking Water Distribution Network Biofilms by Gene Sequencing Using Different Pipe Materials

Authors: M. Zafar, S. Rasheed, Imran Hashmi

Abstract:

Little attention has been paid to bacterial contamination in drinking water biofilms, which provide a potential environment for bacteria to grow and multiply rapidly. To understand the microbial density in drinking water distribution systems (DWDs), a three-month study was carried out. The aim of this study was to examine biofilm on three different pipe materials: PVC, PPR and GI. A set of these pipe materials was installed in DWDs at nine different locations and assessed on a monthly basis. Drinking water quality was evaluated by several parameters and by characterization of the biofilm. The parameters included temperature, pH, turbidity, TDS, electrical conductivity, BOD, COD, total phosphates, total nitrates, total organic carbon (TOC), free chlorine and total chlorine, coliforms and spread plate counts (SPC), according to standard methods. The predominant species were Bacillus thuringiensis, Pseudomonas fluorescens, Staphylococcus haemolyticus and Bacillus safensis. A significant increase in bacterial population was observed in PVC pipes, and the least in cement pipes. The quantity of bacteria in the DWDs depended directly on the biofilm bacteria, and its increase correlated with the growth and detachment of bacteria from biofilms. Pipe material also affected the microbial community in the drinking water distribution network biofilm, while similarity in bacterial species was observed between systems due to the same disinfectant dose, time period and plumbing pipes.

Keywords: biofilm, DWDs, pipe material, bacterial population

Procedia PDF Downloads 342
9763 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach

Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi

Abstract:

Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices, and data clouds. DEMs can provide accurate and quick results cost-efficiently in comparison to traditional geomatic survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds processed with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates various DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPS). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion was limited to the northern and southern parts. Concurrently, a differential correction and validation method was applied to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) from the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow timely, cost-effective predictions with more comprehensive coverage and greater accuracy. In the eco-geomorphic context, such insights into ecosystem dynamics at a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk and for the sustainability of conservation management. This framework, which evaluates historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics, could be profitably applied worldwide.
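The reported error ratio (≤ 3% against the RTK-GPS checkpoints) can be computed as a mean absolute relative difference between DEM elevations and checkpoint elevations. The paper does not give its exact formula, so this is a hedged sketch of one common definition, with illustrative elevations:

```python
def dem_error_ratio(dem_z, gps_z):
    """Mean absolute relative difference (%) between DEM elevations and
    co-located RTK-GPS checkpoint elevations."""
    if len(dem_z) != len(gps_z) or not dem_z:
        raise ValueError("need matching, non-empty elevation lists")
    rel_errors = [abs(d - g) / abs(g) for d, g in zip(dem_z, gps_z)]
    return 100.0 * sum(rel_errors) / len(rel_errors)

# Illustrative elevations (m), not the Apple Orchard Island survey data
dem = [2.05, 1.98, 3.10]
gps = [2.00, 2.00, 3.00]
ratio = dem_error_ratio(dem, gps)  # small, within the reported 3% bound
```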

Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes

Procedia PDF Downloads 265
9762 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems in fingerprint anti-spoofing is a lack of robustness to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper presents experimental and comparative results with currently popular GAN-based methods, using realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, StyleGAN, the most popular, is used for the experiments. The CNN models were first trained with a dataset containing no generated fake images, and the accuracy and mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. For each CNN model trained with the dataset containing generated fake images, the best accuracy and mean average error rate were recorded. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, focusing on what GAN-based approaches should and should not learn.
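The two metrics recorded for each CNN, accuracy and its error-rate complement, are straightforward on a labeled live/spoof test set. A minimal sketch (binary labels and the function name are our assumptions; the paper's "mean average error rate" may be averaged differently):

```python
def accuracy_and_error_rate(y_true, y_pred):
    """Fraction of correct live/spoof decisions, and its complement."""
    if len(y_true) != len(y_pred) or not y_true:
        raise ValueError("need matching, non-empty label lists")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    acc = correct / len(y_true)
    return acc, 1.0 - acc

# Toy labels: 1 = live, 0 = spoof; one of four predictions is wrong
acc, err = accuracy_and_error_rate([1, 0, 1, 1], [1, 0, 0, 1])  # -> (0.75, 0.25)
```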

Keywords: anti-spoofing, CNN, fingerprint recognition, GAN

Procedia PDF Downloads 179
9761 Simple Infrastructure in Measuring Countries e-Government

Authors: Sukhbaatar Dorj, Erdenebaatar Altangerel

Abstract:

As an alternative to existing e-government measurement models, we propose a new customer-centric, service-oriented, simple approach for measuring countries' e-governments. If successfully implemented, the built infrastructure will provide a single e-government index number for each country. The main scheme is as follows. At the beginning of each year, the country's CIO, or a government official of equivalent position, will submit four numbers on behalf of their country to a dedicated United Nations website: 1) the ratio of available online public services to the total number of public services; 2) the ratio of interagency, inter-ministry online public services to the total number of available online public services; 3) the ratio of the total number of citizens and business entities served online annually to the total number of citizens and business entities served annually, online and physically, by those services; 4) a simple index for the geographical spread of online-served citizens and business entities. The four numbers are then combined into one index number by simple averaging. In addition to the four numbers, a fifth number can be introduced as a service quality indicator of online public services; if countries' index numbers are equal, this fifth criterion is used to rank them. Note: this approach assesses a country's current e-government achievement, not its e-government readiness.
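The proposed index is the plain average of the four submitted ratios. A minimal sketch of that combination step (the function name and sample values are illustrative):

```python
def egov_index(online_ratio, interagency_ratio, served_online_ratio, spread_index):
    """Combine the four annually reported numbers into one index by averaging."""
    values = [online_ratio, interagency_ratio, served_online_ratio, spread_index]
    for v in values:
        if not 0.0 <= v <= 1.0:
            raise ValueError("each ratio must lie in [0, 1]")
    return sum(values) / len(values)

# Illustrative submission: 80% services online, 50% interagency,
# 60% of entities served online, geographical spread index 0.7
index = egov_index(0.8, 0.5, 0.6, 0.7)  # -> 0.65
```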

Keywords: countries e-government index, e-government, infrastructure for measuring e-government, measuring e-government

Procedia PDF Downloads 323
9760 Students' Errors in Translating Algebra Word Problems to Mathematical Structure

Authors: Ledeza Jordan Babiano

Abstract:

Translating statements into mathematical notation is one of the processes in word problem-solving. However, based on the literature, students still have difficulties with this skill. The purpose of this study was to investigate the translation errors students make when they translate algebraic word problems into mathematical structures, and to locate these errors through the lens of the Translation-Verification Model. This qualitative research study employed content analysis. During the data-gathering process, the students were asked to answer a six-item algebra word problem questionnaire, and their answers were analyzed by experts through blind coding using the Translation-Verification Model to determine their translation errors. After this, a focus group discussion was conducted, and the data gathered were analyzed through thematic analysis to determine the causes of the students' translation errors. It was found that the students' most prevalent translation error was the interpretation error, which was situated in the Attribute construct. The emerging themes during the FGD were: (1) the procedure of translation is strategically incorrect; (2) lack of comprehension; (3) difficulty with related algebra concepts; (4) lack of spatial skills; (5) unpreparedness for independent learning; and (6) developmentally inappropriate problem content. These themes boiled down to the major concept of independent learning preparedness in solving mathematical problems. This concept has subcomponents, which include contextual and conceptual factors in translation. Consequently, the results provide implications for instructors and professors of Mathematics to innovate their teaching pedagogies and strategies to address translation gaps among students.

Keywords: mathematical structure, algebra word problems, translation, errors

Procedia PDF Downloads 46
9759 Impact on the Yield of Flavonoid and Total Phenolic Content from Pomegranate Fruit by Different Extraction Methods

Authors: Udeshika Yapa Bandara, Chamindri Witharana, Preethi Soysa

Abstract:

Pomegranate fruits are used in cancer treatment in Ayurveda in Sri Lanka. Given the therapeutic effects of its phytochemicals, this study focused on the anti-cancer properties of the constituents in the parts of the pomegranate fruit. Furthermore, the method of extraction is a crucial step in phytochemical analysis, so this study compared different extraction methods. Five techniques were applied to the peel and the pericarp to identify the most effective extraction method: boiling with an electric burner (BL), sonication (SN), microwaving (MC), heating in a 50°C water bath (WB), and sonication followed by microwaving (SN-MC). Total polyphenol and flavonoid contents were evaluated to identify the best extraction method for polyphenols. The total phenolic content was measured spectrophotometrically by the Folin-Ciocalteu method and expressed as gallic acid equivalents (w/w% GAE). The total flavonoid content was also determined spectrophotometrically, with the aluminium chloride colourimetric assay, and expressed as quercetin equivalents (w/w% QE). Pomegranate juice was prepared as fermented juice (with Saccharomyces bayanus) and fresh juice. Powdered seeds were refluxed, filtered, and freeze-dried. 2 g of freeze-dried powder of each component was dissolved in 100 ml of de-ionized water for extraction. For the comparison of antioxidant activity and total phenolic content, the polyphenols were removed with a polyvinylpolypyrrolidone (PVPP) column, and fermented and fresh juice were tested for 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity before and after the removal of polyphenols. For the peel samples of pomegranate fruit, total phenolic and flavonoid contents were highest with sonication (SN). In the pericarp, total phenolic and flavonoid contents were also highest with sonication (SN). A significant difference was observed (P < 0.05) in total phenolic and flavonoid contents between the five extraction methods for both peel and pericarp samples. Fermented juice had greater polyphenol and flavonoid contents than fresh juice. After removing the polyphenols from the fermented and fresh juices using the polyvinylpolypyrrolidone (PVPP) column, low antioxidant activity was observed in the DPPH assay. The seeds had very low total phenolic and flavonoid contents according to the results. Although pomegranate peel is the main waste component of the fruit, it has excellent polyphenol and flavonoid contents compared to the other parts of the fruit, irrespective of the extraction method. Polyphenols play a major role in antioxidant activity.
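Expressing Folin-Ciocalteu readings as w/w% GAE follows from a linear gallic acid calibration curve (absorbance = slope × concentration + intercept). A hedged sketch of that conversion; the calibration values and extract concentration below are illustrative, not the study's:

```python
def total_phenolics_gae(absorbance, slope, intercept, extract_conc_mg_per_ml):
    """Total phenolic content as w/w % gallic acid equivalents (GAE).

    Assumes a linear gallic acid calibration curve:
        absorbance = slope * conc_ug_per_ml + intercept
    """
    gallic_ug_per_ml = (absorbance - intercept) / slope  # from the curve
    extract_ug_per_ml = extract_conc_mg_per_ml * 1000.0
    return 100.0 * gallic_ug_per_ml / extract_ug_per_ml

# Illustrative numbers: 2 g freeze-dried powder in 100 ml -> 20 mg/ml extract
tpc = total_phenolics_gae(absorbance=0.55, slope=0.005, intercept=0.05,
                          extract_conc_mg_per_ml=20.0)  # -> 0.5 w/w% GAE
```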

Keywords: antioxidant activity, flavonoids, polyphenols, pomegranate

Procedia PDF Downloads 156
9758 Surface Roughness Prediction Using Numerical Scheme and Adaptive Control

Authors: Michael K. O. Ayomoh, Khaled A. Abou-El-Hossein, Sameh F. M. Ghobashy

Abstract:

This paper proposes a numerical modelling scheme for surface roughness prediction. The approach is premised on a 3D difference analysis method enhanced with a feedback control loop in which a set of adaptive weights is generated. The surface roughness values utilized in this paper were adapted from [1]; those experiments were carried out using S55C high-carbon steel. A comparison was further carried out between the proposed technique and those utilized in [1]. The experimental design has three cutting parameters, namely depth of cut, feed rate, and cutting speed, with a twenty-seven-point experimental sample space. The simulation trials, conducted using Matlab software, fall into two sub-classes: first, prediction of the surface roughness readings for the non-boundary cutting combinations (NBCC) with the aid of the known surface roughness readings of the boundary cutting combinations (BCC); second, use of the predicted NBCC outputs to recover the surface roughness readings for the BCC. The simulation trial for the NBCC attained total stability in the 7th iteration, i.e., a point where the actual and desired roughness readings are equal, such that the error is driven to zero by a set of dynamic weights generated in each subsequent simulation trial. A comparative study among the three methods showed that the proposed difference analysis technique, with adaptive weights from feedback control, produced a much more accurate output than the abductive and regression analysis techniques presented in [1].
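The feedback loop described, where dynamic weights are updated each trial until the predicted and desired roughness readings agree, can be sketched in miniature. This is a scalar toy with hypothetical names and a simple proportional update, not the paper's 3D difference analysis scheme:

```python
def adaptive_predict(baseline, target, lr=0.5, tol=1e-6, max_iter=100):
    """Toy feedback loop: an adaptive weight is updated each trial until the
    predicted value matches the desired value (error driven toward zero).

    `baseline` must be nonzero; with lr=0.5 the error halves every trial.
    """
    w = 1.0
    for trial in range(max_iter):
        predicted = w * baseline
        error = target - predicted
        if abs(error) < tol:
            return predicted, trial
        w += lr * error / baseline  # adaptive weight update from the feedback
    return w * baseline, max_iter

# Toy run: baseline roughness reading 2.0, desired reading 3.0
predicted, trials = adaptive_predict(baseline=2.0, target=3.0)
```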

Keywords: difference analysis, surface roughness, mesh analysis, feedback control, adaptive weight, boundary element

Procedia PDF Downloads 618
9757 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
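"Maintaining the desirable Type-I error" means the test rejects a true null hypothesis at roughly the nominal rate (e.g., 5%). The simulation logic can be illustrated with a toy two-sided z-test on null data; this pure-Python stand-in is not the MLwiN multilevel GOF simulation, and all names are ours:

```python
import random
import statistics

def type_i_error_rate(n_sims=2000, n=30, crit=1.96, seed=7):
    """Fraction of simulated null datasets (true mean 0) rejected by a
    two-sided z-test; should stay close to the nominal 5% level."""
    random.seed(seed)
    rejections = 0
    for _ in range(n_sims):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        z = statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)
        if abs(z) > crit:
            rejections += 1
    return rejections / n_sims

alpha_hat = type_i_error_rate()  # close to 0.05 when the test behaves well
```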

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 137
9756 Effect of Punch and Die Profile Radii on the Maximum Drawing Force and the Total Consumed Work in Deep Drawing of a Flat Ended Cylindrical Brass

Authors: A. I. O. Zaid

Abstract:

Deep drawing is considered to be one of the most widely used sheet metal forming processes, particularly in the automobile and aircraft industries. It is widely used for manufacturing a large number of body and spare parts. In its simplest form, it may be defined as a secondary forming process by which a sheet metal is formed into a cylinder or the like by subjecting the sheet to a compressive force through a punch with a flat end of the same geometry as the required cylinder end, while the sheet is held by a blank holder that hinders its movement but does not stop it. The punch and die profile radii play an important role in this process. In this paper, the effects of punch and die profile radii on the autographic record, the location of minimum thickness strain (where cracks normally start and cause fracture), the maximum deep drawing force, and the total consumed work in drawing flat-ended cylindrical brass cups are investigated. Five punches and five dies, each with different profile radii, were manufactured for this investigation. Furthermore, their effect on the quality of the drawn cups is also presented and discussed. It was found that the die profile radius has more effect on the maximum drawing force and the total consumed work than the punch profile radius.

Keywords: punch and die profile radii, deep drawing process, maximum drawing force, total consumed work, quality of produced parts, flat ended cylindrical brass cups

Procedia PDF Downloads 334
9755 Testing a Motivational Model of Physical Education on Contextual Outcomes and Total Moderate to Vigorous Physical Activity of Middle School Students

Authors: Arto Grasten

Abstract:

Given the rising trend in obesity in children and youth, the age-related decline in moderate-to-vigorous-intensity physical activity (MVPA) in several Western, African, and Asian countries, and the limited evidence on behavioral, affective, and cognitive outcomes in physical education, it is important to clarify the motivational processes in physical education classes behind total MVPA engagement. The present study examined the full sequence of the Hierarchical Model of Motivation in physical education, including motivational climate, basic psychological needs, intrinsic motivation, contextual behavior, affect, cognition, total MVPA, and the associated links to body mass index (BMI) and gender differences. Cross-sectional data comprised self-reports and objective assessments of 770 middle school students (Mage = 13.99 ± .81 years, 52% girls) in North-East Finland. To test the associations between motivational climate, psychological needs, intrinsic motivation, cognition, behavior, affect, and total MVPA, a path model was implemented. Indirect effects between motivational climate and cognition, behavior, affect, and total MVPA were tested by setting basic needs and intrinsic motivation as mediators in the model. The findings showed that the direct and indirect paths for girls and boys were associated with different contextual outcomes, and that girls' indirect paths were not related to total MVPA. Specifically, a task-involving climate, mediated by physical competence and intrinsic motivation, related to enjoyment, importance, and graded assessments among girls, whereas among boys a task-involving climate was associated with enjoyment and importance via competence and autonomy, and with total MVPA via autonomy, intrinsic motivation, and importance. Physical education assessments appeared to be essential in motivating students to participate in greater total MVPA. BMI was negatively linked with competence and relatedness only among girls. Although the current and previous empirical findings supported task-involving teaching methods in physical education, in some cases an ego-involving climate should not be totally avoided. This may indicate that girls and boys perceive physical education classes in different ways. Therefore, both task- and ego-involving teaching practices can be useful ways of driving behavior in physical education classes.

Keywords: achievement goal theory, assessment, enjoyment, hierarchical model of motivation, physical activity, self-determination theory

Procedia PDF Downloads 275
9754 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization

Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang

Abstract:

The product quality of multi-point dieless forming (MDF) is known to depend on the process parameters. Moreover, variation in friction and material properties may have a substantially adverse influence on the final product quality. This study proposes how to compensate for MDF product defects by minimizing the sensitivity to noise parameter variations. This can be attained by a reliability-based robust optimization (RRO) technique to obtain the optimal process settings of the controllable parameters. Initially, two MDF Finite Element (FE) simulations of an AA3003-H14 saddle shape showed a substantial amount of dimpling, wrinkling, and shape error. FE analyses were consequently carried out in the commercial software ABAQUS to obtain the correlation between the control process settings and the noise variation with regard to the product defects. The best prediction models were chosen from a family of metamodels to replace the computationally expensive FE simulation. A genetic algorithm (GA) was applied to determine the optimal process settings of the control parameters. Monte Carlo Analysis (MCA) was executed to determine how the noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental result show that the amendment of the control parameters in the final forming process leads to a considerably better-quality product.
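The Monte Carlo Analysis step, propagating noise-parameter variation (e.g., friction) through a response model to see how product quality spreads, can be sketched generically. The surrogate below is a placeholder, not the paper's metamodel, and all names and numbers are illustrative:

```python
import random
import statistics

def monte_carlo_spread(surrogate, nominal, noise_sd, n=5000, seed=3):
    """Sample the noise parameter around its nominal value and report the
    mean and standard deviation of the predicted quality response."""
    random.seed(seed)
    outputs = [surrogate(random.gauss(nominal, noise_sd)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Placeholder surrogate: shape error grows linearly with friction coefficient
mean_resp, sd_resp = monte_carlo_spread(lambda mu: 2.0 * mu,
                                        nominal=0.1, noise_sd=0.02)
```

A robust process setting is one for which `sd_resp` stays small as the noise varies.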

Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling

Procedia PDF Downloads 243
9753 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are the most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than a one-step prediction for the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the travel time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
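Finding (1), that the median latency is far more robust than the outlier-plagued average, is easy to demonstrate with a toy sample (illustrative numbers, not the Taiwan Freeway data):

```python
import statistics

# Five-minute latencies (minutes) with one incident-driven outlier
latencies = [5.1, 5.3, 4.9, 5.0, 5.2, 60.0]

mean_latency = statistics.mean(latencies)      # dragged far above typical traffic
median_latency = statistics.median(latencies)  # stays near the typical value
```

A model trained against the median target therefore learns typical conditions instead of chasing rare incidents.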

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 166
9752 The Acute Impact of the Intake of Breadsticks from Different Durum Wheat Flour Mixtures on Postprandial Inflammation, Oxidative Stress, and Antiplatelet Activity in Healthy Volunteers: A Pilot Cross-Over Nutritional Intervention

Authors: O. I. Papagianni, P. Potsaki, K. Almpounioti, D. Chatzicharalampous, A. Voutsa, O. Katira, A. Michalaki, H. C. Karantonis, A. E. Koutelidakis

Abstract:

High intakes of carbohydrates and fats have been associated with an increased risk of chronic diseases due to the role of postprandial oxidative stress. This pilot nutritional intervention aimed to examine the acute effect of consuming two different types of breadsticks, prepared from durum wheat flour mixtures differing in total phenolic content, on postprandial inflammatory and oxidant responses in healthy volunteers. A cross-over, controlled, single-blind clinical trial was designed, and two isocaloric high-fat, high-carbohydrate meals were tested. Serum total, HDL-, and LDL-cholesterol, triglycerides, glucose, CRP, uric acid, plasma total antioxidant capacity, and antiplatelet activity were determined at fasting and 30, 60, and 120 min after consumption. The results showed a better postprandial HDL-cholesterol and total antioxidant activity response in the intervention group. The choice of durum wheat flours with higher phenolic content and antioxidant activity appears promising for human health, and the clinical studies will be expanded to draw firmer conclusions.

Keywords: breadsticks, durum wheat flours, postprandial inflammation, postprandial oxidative stress, ex vivo antiplatelet activity

Procedia PDF Downloads 66
9751 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, which creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of the quality and safety of care in the department. One such instrument is the Global Trigger Tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called 'triggers' (warning signals), indications of adverse events can be found. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the Global Trigger Tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgical department of three university hospitals in Germany, over a period of two months per department, between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1000 patient-days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention index. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5 to 72.1 per 1000 patient-days and of 25.0 to 60.0 per 100 admissions across the three departments. 98.1% of the identified adverse events were associated with non-permanent harm, either without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the Global Trigger Tool to the respective department. Conclusions: The Global Trigger Tool is feasible and an effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
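The two reported rates follow directly from the counts. A minimal sketch (the numbers below are illustrative, not the study's counts):

```python
def gtt_rates(adverse_events, patient_days, admissions):
    """Adverse events per 1000 patient-days and per 100 admissions."""
    if patient_days <= 0 or admissions <= 0:
        raise ValueError("denominators must be positive")
    return (adverse_events / patient_days * 1000.0,
            adverse_events / admissions * 100.0)

# Illustrative department: 12 events over 400 patient-days and 40 admissions
per_days, per_adm = gtt_rates(adverse_events=12, patient_days=400, admissions=40)
```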

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 244
9750 Government Final Consumption Expenditure and Household Consumption Expenditure NPISHS in Nigeria

Authors: Usman A. Usman

Abstract:

Unlike the Classical view, the Keynesian perspective on the aggregate demand side undeniably holds a significant position in the policy, growth, and welfare of Nigeria, owing to government involvement and the ineffective demand of a population living on poor per capita income. This study seeks to investigate the effect of Government Final Consumption Expenditure and financial deepening on Household and NPISH final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model. The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and gross fixed capital formation posit a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), another proxy for financial deepening, and the coefficient of FDI have a positive effect on household final consumption expenditure in the long run. Therefore, this study recommends that, since gross fixed capital formation stimulates household consumption expenditure, a legal framework to support investment is a panacea for increasing household income and consumption and reducing poverty in Nigeria, and should be a key central component of policy.
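The error-correction logic behind a VECM, short-run changes adjusting toward a long-run cointegrating relation, can be illustrated for a single equation. The coefficients below are hypothetical, not the estimated ones from this study:

```python
def ecm_step(y_prev, x_prev, dx, alpha=-0.5, beta=1.0, gamma=0.3):
    """One step of a simple error-correction model.

    The error-correction term is the lagged deviation from the long-run
    relation y = beta * x; alpha < 0 pulls y back toward equilibrium,
    and gamma carries the short-run effect of the change in x.
    """
    ect = y_prev - beta * x_prev  # lagged deviation from equilibrium
    return y_prev + alpha * ect + gamma * dx

# Start above equilibrium (y = 2, x = 1): half the gap closes in one step
y_next = ecm_step(y_prev=2.0, x_prev=1.0, dx=0.0)  # -> 1.5
```

Repeated steps with no shock to x drive y back to the long-run value beta * x, which is the adjustment mechanism the estimated model captures.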

Keywords: government final consumption expenditure, household consumption expenditure, vector error correction model, cointegration

Procedia PDF Downloads 45