Search results for: maximum likelihood estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6131

1781 Serum Vitamin D and Carboxy-Terminal Telopeptide Type I Collagen Levels as Markers for Bone Health Affection in Patients Treated with Different Antiepileptic Drugs

Authors: Moetazza M. Al-Shafei, Hala Abdel Karim, Eitedal M. Daoud, Hassan Zaki Hassuna

Abstract:

Epilepsy is a common neurological disorder affecting all age groups and is one of the world's most prevalent non-communicable diseases. Increasing evidence suggests that long-term use of antiepileptic drugs (AEDs) can have adverse effects on bone mineralization and bone modeling. We aimed to study these effects and to provide guidelines supporting bone health through early intervention. From the Neurology Out-Patient Clinic of Kasr El-Aini University Hospital, 60 patients were enrolled under specific criteria: 40 patients on antiepileptic drugs for at least two years and 20 age- and sex-matched epileptic controls who had not yet started treatment. Patients were divided into four groups: three monotherapy groups treated with phenytoin, valproic acid, or carbamazepine, and a fourth group treated with both valproic acid and carbamazepine. Serum carboxy-terminal telopeptide of type I collagen (ICTP, a bone resorption marker), serum 25(OH) vitamin D3, calcium, magnesium, and phosphorus were estimated. Results showed that all patients on AEDs had significantly low levels of 25(OH) vitamin D3 (p<0.001), with significant elevation of ICTP (p<0.05) versus controls. The phenytoin group showed highly significant elevation of the ICTP marker and decreases in both serum 25(OH) vitamin D3 (p<0.0001) and serum calcium (p<0.05) versus controls. The double-drug group showed a significant decrease in serum 25(OH) vitamin D3 (p<0.0001) and a decrease in phosphorus (p<0.05) versus controls. Serum magnesium showed no significant differences between the studied groups. We concluded that antiepileptic drugs appear to be an aggravating factor for bone demineralization, so it can be worthwhile to supplement calcium and vitamin D even before initiation of antiepileptic therapy. The ICTP marker can be used to evaluate changes in bone resorption before and during AED therapy.

Keywords: antiepileptic drugs, bone minerals, carboxy-terminal telopeptide type-1 collagen bone resorption marker, vitamin D

Procedia PDF Downloads 479
1780 Audience Members' Perspective-Taking Predicts Accurate Identification of Musically Expressed Emotion in a Live Improvised Jazz Performance

Authors: Omer Leshem, Michael F. Schober

Abstract:

This paper introduces a new method for assessing how audience members and performers feel and think during live concerts, and how audience members' recognized and felt emotions are related. Two hypotheses were tested in a live concert setting: (1) that audience members’ cognitive perspective-taking ability predicts their accuracy in identifying an emotion that a jazz improviser intended to express during a performance, and (2) that audience members' affective empathy predicts their likelihood of feeling the same emotions as the performer. The aim was to stage a concert with audience members who regularly attend live jazz performances, and to measure their cognitive and affective reactions during the performance as non-intrusively as possible. Pianist and Grammy nominee Andy Milne agreed, without knowing details of the method or hypotheses, to perform a full-length solo improvised concert that would include an ‘unusual’ piece. Jazz fans were recruited through typical advertising for New York City jazz performances. The event was held at the New School’s Glass Box Theater, the home of leading NYC jazz venue ‘The Stone.’ Audience members were charged typical NYC jazz club admission prices; advertisements informed them that anyone who chose to participate in the study would be reimbursed their ticket price after the concert. The concert, held in April 2018, had 30 attendees, 23 of whom participated in the study. Twenty-two minutes into the concert, the performer was handed a paper note with the instruction: ‘Perform a 3-5-minute improvised piece with the intention of conveying sadness.’ (Sadness was chosen based on previous music cognition lab studies, where solo listeners were less likely to accurately select sadness as the musically expressed emotion from a list of basic emotions, and more likely to misinterpret sadness as tenderness.) Then, audience members and the performer were invited to respond to a questionnaire from a first envelope under their seat. Participants used their own words to describe the emotion the performer had intended to express, and then selected the intended emotion from a list. They also reported the emotions they had felt while listening, using Izard’s differential emotions scale. The concert then continued as usual. At the end, participants answered demographic questions and Davis’ interpersonal reactivity index (IRI), a 28-item scale designed to assess both cognitive and affective empathy. Hypothesis 1 was supported: audience members with greater cognitive empathy were more likely to accurately identify sadness as the expressed emotion. Moreover, audience members who accurately selected ‘sadness’ reported feeling marginally sadder than people who did not select sadness. Hypothesis 2 was not supported; audience members with greater affective empathy were not more likely to feel the same emotions as the performer. If anything, members with lower cognitive perspective-taking ability had marginally greater emotional overlap with the performer, which makes sense given that these participants were less likely to identify the music as sad, which corresponded with the performer’s actual feelings. The results replicate findings from solo lab studies in a concert setting and demonstrate the viability of exploring empathy and collective cognition in improvised live performance.

Keywords: audience, cognition, collective cognition, emotion, empathy, expressed emotion, felt emotion, improvisation, live performance, recognized emotion

Procedia PDF Downloads 117
1779 Removal Efficiency of Some Heavy Metals from Aqueous Solution on Magnetic Nanoparticles

Authors: Gehan El-Sayed Sharaf El-Deen

Abstract:

In this study, superparamagnetic iron-oxide nanomaterials (SPMIN) were investigated for the removal of toxic heavy metals from aqueous solution. The magnetic nanoparticles of 12 nm were synthesized using a co-precipitation method and characterized by transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and vibrating sample magnetometry (VSM). Batch experiments were carried out to investigate the influence of different parameters such as contact time, initial concentration of metal ions, dosage of SPMIN, desorption, and pH value of the solutions. The adsorption process was found to be highly pH dependent, which made the nanoparticles selectively adsorb these three metals from wastewater. Maximum sorption for all the studied cations was obtained within the first half hour, and equilibrium was reached at one hour. The adsorption data of the heavy metals studied were well fitted by the Langmuir isotherm, and the equilibrium data show that the percent removal of Ni2+, Zn2+ and Cd2+ was 96.5%, 80% and 75%, respectively. Desorption studies in acidic medium indicate that Zn2+, Ni2+ and Cd2+ were removed by 89%, 2% and 18% in the first cycle. Regeneration studies indicated that SPMIN nanoparticles undergoing successive adsorption–desorption processes for Zn2+ ions retained their original metal removal capacity. The results revealed that the most prominent advantage of the prepared SPMIN adsorbent is its convenient magnetic separation compared to other adsorbents, and that SPMIN has high efficiency for removing the investigated metals from aqueous solution.
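
As an illustration of the Langmuir fit mentioned above, here is a minimal sketch in Python using hypothetical equilibrium data (the paper's raw isotherm data are not given); q_max and K are the fitted Langmuir parameters.

```python
# Minimal sketch: fitting the Langmuir isotherm q_e = q_max*K*C_e / (1 + K*C_e)
# to hypothetical batch-equilibrium data. All values below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, K):
    """Equilibrium uptake q_e (mg/g) vs equilibrium concentration C_e (mg/L)."""
    return qmax * K * ce / (1.0 + K * ce)

ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # hypothetical C_e, mg/L
qe = np.array([8.1, 13.0, 18.5, 22.9, 25.6])   # hypothetical q_e, mg/g

(qmax, K), _ = curve_fit(langmuir, ce, qe, p0=(30.0, 0.05))
print(f"q_max = {qmax:.2f} mg/g, K = {K:.4f} L/mg")
```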

Keywords: heavy metals, magnetic nanoparticles, removal efficiency, batch technique

Procedia PDF Downloads 233
1778 Mineralogical and Geochemical Characteristics of Serpentinite-Derived Ni-Bearing Laterites from Fars Province, Iran: Implications for the Lateritization Process and Classification of Ni-Laterites

Authors: S. Rasti, M. A. Rajabzadeh

Abstract:

Nickel-bearing laterites occur as two parallel belts along the Sedimentary Zagros Orogenic (SZO) and Metamorphic Sanandaj-Sirjan (MSS) petrostructural zones, Fars Province, south Iran. An undisturbed vertical profile of these laterites includes protolith, saprolite, clay, and oxide horizons from base to top. Highly serpentinized harzburgite with relicts of olivine and orthopyroxene is regarded as the source rock. The laterites are unusual in lacking a significant saprolite zone, with little development of Ni-silicates. Hematite, saponite, dolomite, smectite, and clinochlore increase, while calcite, olivine, lizardite, and chrysotile decrease from the saprolite to the oxide zone. Smectite and clinochlore with minor calcite are the major minerals in the clay zone. Contacts between the different horizons in the laterite profiles are gradational and characterized by a decrease in Mg concentration, from 18.1 to 9.3 wt.% in the oxide and saprolite zones, respectively. The maximum Ni concentration is 0.34 wt.% (NiO) at the base of the oxide zone, and goethite is the major Ni-bearing phase. From the saprolite to the oxide horizon, Al2O3, K2O, TiO2, and CaO decrease, while SiO2, MnO, NiO, and Fe2O3 increase. Silica content reaches up to 45 wt.% in the upper part of the soil profile. There is a decrease in pH (8.44-8.17) and an increase in organic matter (0.28-0.59 wt.%) from the base to the top of the soils. The studied laterites are classified in the oxide clan and were derived from ophiolitic ultramafic rocks under Mediterranean climate conditions.

Keywords: Iran, laterite, mineralogy, ophiolite

Procedia PDF Downloads 316
1777 Effect of Intrinsic Point Defects on the Structural and Optical Properties of SnO₂ Thin Films Grown by Ultrasonic Spray Pyrolysis Method

Authors: Fatiha Besahraoui, M'hamed Guezzoul, Kheira Chebbah, M'hamed Bouslama

Abstract:

SnO₂ thin films were characterized by atomic force microscopy (AFM) and photoluminescence (PL) spectroscopy. AFM images show a dense surface of columnar grains with a roughness of 78.69 nm. The PL measurements at 7 K reveal PL peaks centered in the IR and visible regions. They are attributed to radiative transitions via oxygen vacancies, Sn interstitials, and dangling bonds. A band diagram model is presented with the approximate positions of the intrinsic point defect levels in SnO₂ thin films. The integrated PL measurements demonstrate the good thermal stability of our sample, which makes it very useful in optoelectronic devices operating at room temperature. The unusual behavior of the evolution of the PL peaks and their full width at half maximum as a function of temperature indicates the thermal sensitivity of the point defects present in the band gap. The shallower energy levels due to dangling bonds and/or oxygen vacancies are more sensitive to temperature. However, volume defects like Sn interstitials are thermally stable and constitute deep, stable energy levels for excited electrons. A small redshift of the PL peaks is observed with increasing temperature. This behavior is attributed to the reduction of oxygen vacancies.

Keywords: transparent conducting oxide, photoluminescence, intrinsic point defects, semiconductors, oxygen vacancies

Procedia PDF Downloads 68
1776 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: The problem of unbalanced data sets generally appears in real-world applications. Due to unequal class distribution, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors’ nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application on class-imbalanced data of diabetes risk groups. Methods: Data from a health project covering 599 staff members of a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped up to 50 and 100 samples, 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was explored for departures from multivariate normality and for the equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples show non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors’ discriminant function were run over the 50 and 100 bootstrap samples and applied to the original data. In finding the optimal classification rule, the prior probabilities were set both to equal proportions (0.33:0.33:0.33) and to unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} as {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class imbalanced data of diabetes risk groups.
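
A minimal sketch of this comparison, using scikit-learn on synthetic stand-in data (the hospital data are not public): LDA and QDA accept the unequal prior probabilities directly, while plain k-NN is shown without priors, a simplification of the prior-weighted k-NN discriminant rule described in the abstract.

```python
# Sketch: bootstrap comparison of LDA, QDA, and k-NN misclassification rates
# on a 90/5/5 class-imbalanced problem. Features are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(599, 4))                               # age, gender, cholesterol, BMI (hypothetical)
y = rng.choice([0, 1, 2], size=599, p=[0.90, 0.05, 0.05])   # non-risk / risk / diabetic

models = {
    "LDA": LinearDiscriminantAnalysis(priors=[0.90, 0.05, 0.05]),
    "QDA": QuadraticDiscriminantAnalysis(priors=[0.90, 0.05, 0.05]),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}
for name, model in models.items():
    errors = []
    for _ in range(50):                                     # 50 bootstrap samples, 599 obs each
        Xb, yb = resample(X, y, n_samples=599)
        errors.append(1 - model.fit(Xb, yb).score(X, y))    # error rate on the original data
    print(name, "mean misclassification rate:", np.mean(errors))
```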

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 421
1775 A Modelling Study to Compare the Storm Surge along the Oman Coast Due to the Ashobaa and Nanauk Cyclones

Authors: R. V. Suresh Reddi, Vishnu S. Das, Mathew Leslie

Abstract:

The weather systems within the Arabian Sea are very dynamic in terms of monsoon and cyclone events. Storms generated in the Arabian Sea are most likely to progress in a north-west or west direction towards Oman. From the database of the Joint Typhoon Warning Center (JTWC), the number of cyclones that hit the Oman coast or pass within close vicinity is noteworthy, and therefore they must be considered in coastal/port engineering design and development projects. This paper provides a case study of two cyclones, Nanauk (2014) and Ashobaa (2015), to assess their impact on storm surge off the Oman coast. These two cyclones were selected because they are comparable in terms of maximum wind, cyclone duration, central pressure, and month of occurrence. They are of similar strength but differ in track, allowing the impact of proximity to the coast to be considered. Of the two selected cyclones, Ashobaa is the 'extreme' case with close proximity, while Nanauk remains further offshore and is considered a more typical case. The available 'best-track' data from JTWC were obtained for the two selected cyclones, and the cyclone winds were generated using the Cyclone Wind Generation Tool of the MIKE modelling software from DHI (Danish Hydraulic Institute). Using DHI's MIKE 21 hydrodynamic model, the storm surge was estimated at selected offshore locations along the Oman coast.

Keywords: coastal engineering, cyclone, storm surge, modelling

Procedia PDF Downloads 133
1774 Effects of Process Parameters on the Yield of Oil from Coconut Fruit

Authors: Ndidi F. Amulu, Godian O. Mbah, Maxwel I. Onyiah, Callistus N. Ude

Abstract:

The properties of coconut (Cocos nucifera) and its oil were evaluated in this work using standard analytical techniques. The analyses carried out include the proximate composition of the fruit, extraction of oil from the fruit using different process parameters, and physicochemical analysis of the extracted oil. The results showed the percentage (%) moisture, crude lipid, crude protein, ash, and carbohydrate content of the coconut as 7.59, 55.15, 5.65, 7.35, and 19.51, respectively. The oil from the coconut fruit was an odourless, yellowish liquid at room temperature (30 °C). The treatment combinations used (leaching time, leaching temperature, and solute:solvent ratio) showed significant differences (P<0.05) in the yield of oil from coconut flour. The oil yield ranged between 36.25% and 49.83%. Lipid indices of the coconut oil indicated the acid value (AV) as 10.05 NaOH/g of oil, free fatty acid (FFA) as 5.03%, saponification value (SV) as 183.26 mg KOH/g of oil, iodine value (IV) as 81.00 I2/g of oil, peroxide value (PV) as 5.00 ml/g of oil, and viscosity (V) as 0.002. The statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and also to generate various plots such as single-effect plots, interaction-effect plots, and contour plots. The response, the yield of oil from the coconut flour, was used to develop a mathematical model that correlates the yield to the process variables studied. The optimal conditions giving the highest yield of coconut oil were a leaching time of 2 hours, a leaching temperature of 50 °C, and a solute/solvent ratio of 0.05 g/ml.
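
To make the modelling step concrete, here is an illustrative sketch of fitting a second-order response-surface model of yield versus the three process variables with statsmodels; the data are synthetic placeholders standing in for the paper's Minitab analysis, and the model form is an assumption.

```python
# Sketch: second-order response-surface regression yield = f(time, temp, ratio)
# on hypothetical data; coefficients and noise are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "time": rng.uniform(1, 3, 30),         # leaching time, hours (hypothetical)
    "temp": rng.uniform(30, 60, 30),       # leaching temperature, deg C
    "ratio": rng.uniform(0.02, 0.08, 30),  # solute/solvent ratio, g/ml
})
df["yield_"] = (20 + 5*df["time"] + 0.3*df["temp"] + 150*df["ratio"]
                + rng.normal(0, 1.5, 30))  # synthetic response, %

model = smf.ols("yield_ ~ time + temp + ratio + I(time**2) + I(temp**2)"
                " + I(ratio**2) + time:temp + time:ratio + temp:ratio", df).fit()
print(model.summary())
```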

Keywords: coconut, oil-extraction, optimization, physicochemical, proximate

Procedia PDF Downloads 336
1773 Investigating Climate Change Trend Based on Data Simulation and IPCC Scenario during 2010-2030 AD: Case Study of Fars Province

Authors: Leila Rashidian, Abbas Ebrahimi

Abstract:

The development of industrial activities, the increase in fossil fuel consumption and vehicles, the destruction of forests and grasslands, changes in land use, and population growth have increased the amount of greenhouse gases, especially CO2, in the atmosphere in recent decades. This has led to global warming and climate change. In the present paper, we have investigated the trend of climate change based on data simulation over the 2010-2030 time interval in Fars province. In this research, daily climatic parameters such as maximum and minimum temperature, precipitation, and number of sunny hours for the 1977-2008 time interval for the synoptic stations of Shiraz and Abadeh, and for 1995-2008 for the Lar station, as well as the output of the HADCM3 model for the 2010-2030 time interval, were used based on the A2 emissions scenario. The results of the model show that the average temperature will increase by about 1 degree centigrade and the amount of precipitation will increase by 23.9% compared to the observational data. In conclusion, with the temperature increase in this province, the amount of precipitation falling as snow will be reduced and precipitation will more often occur as rain. This 1-degree centigrade increase during the growing season will reduce wheat production by 6 to 10% by shortening the growing period.

Keywords: climate change, LARS-WG, HADCM3, Fars province, climatic parameters, A2 scenario

Procedia PDF Downloads 200
1772 Evaluation of the Notifiable Diseases Surveillance System, South, Haiti, 2022

Authors: Djeamsly Salomon

Abstract:

Background: Epidemiological surveillance is a dynamic national system used to observe all aspects of the evolution of priority health problems through the collection, analysis, and systematic interpretation of information, and the dissemination of results with the necessary recommendations. The study was conducted to assess the mandatory disease surveillance system in the Sud Department. Methods: A study was conducted from March to May 2021 with the key actors involved in surveillance at the level of health institutions in the department. The CDC's 2021 updated guideline was used to evaluate the system. We collected information about the operation, attributes, and usefulness of the surveillance system using interviewer-administered questionnaires. Epi Info 7.2 and Excel 2016 were used to generate means, frequencies, and proportions. Results: Of 30 participants, 23 (77%) were women. The average age was 39 years [range 30-56]. Twenty-five (83%) had training in epidemiological surveillance. Half (50%) of the forms checked were signed by the supervisor. Collection tools were available in 80% of cases. Knowledge of at least 7 notifiable diseases was universal (100%). Among the respondents, 29 declared that the collection tools were simple, and 27 had already filled in a notification form. The maximum time taken to fill out a form was 10 minutes. Feedback between the different levels occurred in 60% of cases. Conclusion: The surveillance system is useful, simple, acceptable, representative, flexible, stable, and responsive. The data generated were of high quality. However, the system is threatened by the lack of supervision of sentinel sites, the lack of investigation, and weak feedback. This evaluation demonstrated the urgent need to improve supervision of the sites and the feedback of information, and thereby to strengthen epidemiological surveillance.

Keywords: evaluation, notifiable diseases, surveillance, system

Procedia PDF Downloads 62
1771 Performance Enhancement of Hybrid Racing Car by Design Optimization

Authors: Tarang Varmora, Krupa Shah, Karan Patel

Abstract:

Environmental pollution and the shortage of conventional fuel are the main concerns in the transportation sector. Most vehicles use an internal combustion engine (ICE) powered by gasoline fuels, which results in the emission of toxic gases. A hybrid electric vehicle (HEV), powered by an electric machine and an ICE, is capable of reducing both toxic emissions and fuel consumption. However, building an HEV requires accommodating a motor and batteries in the vehicle along with the engine and fuel tank, so the overall weight of the vehicle increases. To improve fuel economy and acceleration, the weight of the HEV can be minimized. In this paper, a design methodology to reduce the weight of a hybrid racing car is proposed. To this end, the chassis design is optimized, and an attempt is made to obtain maximum strength with minimum material weight. The best configuration is chosen from the three main configurations: series, parallel, and dual-mode (series-parallel). Moreover, the most suitable type of motor, battery, braking system, steering system, and suspension system are identified. The racing car is designed and analyzed in simulation software. The safety of the vehicle is verified by performing static and dynamic analyses on the chassis frame. From the results, it is observed that the weight of the racing car is reduced by 11% without compromising safety or cost. It is believed that the proposed design and specifications can be implemented practically for manufacturing a hybrid racing car.

Keywords: design optimization, hybrid racing car, simulation, vehicle, weight reduction

Procedia PDF Downloads 281
1770 Sustainable Development of HV Substation in Urban Areas Considering Environmental Aspects

Authors: Mahdi Naeemi Nooghabi, Mohammad Tofiqu Arif

Abstract:

Gas-insulated switchgear (GIS), which relies on the outstanding dielectric properties of the insulating gas SF6 (sulphur hexafluoride), has been the default choice in urban areas and other polluted environments. Although the initial investment for GIS exceeds that of a conventional AIS substation, its total life-cycle costs have allowed it to capture a huge share of the electrical market. The undeniable concerns about GIS substations are the environmental impacts of SF6: its contribution to global warming and atmospheric depletion, its decomposition into toxic gases at high temperatures, and its Global Warming Potential (GWP), the highest of any gas at 23,900 times that of CO2, with an atmospheric lifetime of 3,200 years. Efforts by international environmental institutions, backed by political support, have led to SF6 emission-reduction legislation. This research aimed to find an appropriate alternative to GIS substations that matches their advantages in land occupation while mitigating the SF6 environmental impacts caused by leakage and emission. An innovative conceptual design, named Multi-Storey, provides a new AIS design with similar land occupation, extremely low SF6 emission, and maximum greenhouse-gas reduction. Remarkably, when the economic benefit of carbon-price savings is considered, replacing just 25% of the total annual worldwide additions of GIS switchgear could earn more than $675 million over a 30-year life cycle.

Keywords: AIS substation, GIS substation, SF6, greenhouse gas, global warming potential, carbon price, emission

Procedia PDF Downloads 293
1769 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems allow opportunity to enable new research that could range from designing hardware up to discovery of new imaging application. The simulation system which could accurately model an imaging modality provides a platform for imaging developments that might be inconvenient in physical experiment systems due to the expense, unnecessary radiation exposures and technological difficulties. The aim of present study is to validate the Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for Siemen’s e-cam single head gamma camera. Upon the validation of the gamma camera simulation model by comparing physical characteristic such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging is carried out. Methods: A thyroid phantom is defined geometrically which comprises of 2 lobes with 80mm in diameter, 1 hot spot, and 3 cold spots. This geometry accurately resembling the actual dimensions of thyroid phantom. A planar image of 500k counts with 128x128 matrix size was acquired using simulation model and in actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in image, the contrast of the image, radioactivity distributions on image and the dimension of hot spot. Algorithm for each quantification is described in detail. The difference in estimated and actual values for both simulation and experimental setup is analyzed for radioactivity distribution and dimension of hot spot. Results: The results show that the difference between contrast level of simulation image and experimental image is within 2%. The difference in the total count between simulation and actual study is 0.4%. The results of activity estimation show that the relative difference between estimated and actual activity for experimental and simulation is 4.62% and 3.03% respectively. The deviation in estimated diameter of hot spot for both simulation and experimental study are similar which is 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.
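
A minimal sketch of the two quantification steps named above (total counts and hot-spot contrast) on a hypothetical 128x128 planar image; the ROI coordinates and the simple contrast definition are assumptions, since the paper's exact algorithms are not reproduced here.

```python
# Sketch: total counts and ROI contrast from a stand-in planar image.
import numpy as np

image = np.random.poisson(30.0, size=(128, 128)).astype(float)  # hypothetical planar image

total_counts = image.sum()

hot_roi = image[60:68, 60:68]          # assumed hot-spot region of interest
background_roi = image[10:40, 10:40]   # assumed uniform background region

# Simple ROI contrast: (mean_hot - mean_background) / mean_background
contrast = (hot_roi.mean() - background_roi.mean()) / background_roi.mean()
print(f"total counts = {total_counts:.0f}, hot-spot contrast = {contrast:.3f}")
```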

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 258
1768 The Use of a Novel Visual Kinetic Demonstration Technique in Student Skill Acquisition of the Sellick Cricoid Force Manoeuvre

Authors: L. Nathaniel-Wurie

Abstract:

The Sellick manoeuvre, also known as the application of cricoid force (CF), was first described by Brian Sellick in 1961. CF is the application of digital pressure against the cricoid cartilage, the posterior force compressing the oesophagus against the vertebrae. This is designed to prevent passive regurgitation of gastric contents, a major cause of morbidity and mortality during emergency airway management inside and outside the hospital. To the author's knowledge, there is no universally standardised training modality and, therefore, no reliable way to examine whether outcomes are appropriate. If force is not measured during training, one cannot assume that appropriate, accurate, or precise amounts of force are being used routinely. Poor homogeneity in teaching and untested outcomes will correlate with reduced efficacy and increased adverse effects. For this study, the accuracy of force delivery in trained professionals was tested, and outcomes were contrasted against a novice control group and a novice study group. Twenty operating department practitioners were tested (with a mean of 5.3 years of experience performing CF) and contrasted with 40 novice students who were randomised into one of two arms. Arm A had the procedure explained and demonstrated, then performed CF with the corresponding force measured three times. Arm B followed the same process as Arm A, except that before being tested, forces of 10 and 30 newtons were applied to their hands to build an intuitive sense of the required force; they were then asked to apply the equivalent force against a visible force meter and hold it for 20 seconds, allowing direct visualisation and correction of any over- or underestimation. Following this, Arm B performed the manoeuvre, with the generated force measured three times. This study shows a wide distribution of force produced by trained professionals and by novices performing the procedure for the first time. Our methodology for teaching the manoeuvre shows improved accuracy, precision, and homogeneity within the group when compared to novices, and even outperforms trained practitioners. In conclusion, if this methodology is adopted, it may correlate with better clinical outcomes, fewer adverse events, and more successful airway management in critical medical scenarios.

Keywords: airway, cricoid, medical education, sellick

Procedia PDF Downloads 62
1767 Hierarchical Optimization of Composite Deployable Bridge Treadway Using Particle Swarm Optimization

Authors: Ashraf Osman

Abstract:

Effective deployable bridges characterized by an increased capacity-to-weight ratio are needed for post-disaster rapid mobility and military operations. In deployable bridging, replacing metals as the fabrication material with advanced composite laminates, lighter alternatives with higher strength, is highly advantageous. This article presents a hierarchical optimization strategy for a composite bridge treadway considering maximum-strength design and bridge weight minimization. Shape optimization of a generic deployable bridge beam cross-section is performed to achieve a better stress distribution over the bridge treadway hull. The weight of the developed cross-section is minimized while preserving the margins of safety required by deployable bridging code provisions. The strength of the composite bridge plates is then maximized by varying the ply orientations. Different loading cases of a tracked-vehicle patch load are considered. The orthotropic plate properties of a composite sandwich core are used to simulate the structural behavior of the bridge deck, and the failure analysis is conducted using the Tsai-Wu failure criterion. The nature-inspired particle swarm optimization technique is used in this study. The proposed technique efficiently reduced the weight-to-capacity ratio of the developed bridge beam.
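
For readers unfamiliar with the optimizer, the following is a generic particle swarm optimization sketch, not the authors' code: it minimizes a placeholder weight function of two section dimensions under a penalized strength constraint, with all numbers being illustrative assumptions.

```python
# Generic PSO sketch: inertia + cognitive + social velocity update,
# applied to a toy constrained weight-minimization problem.
import numpy as np

rng = np.random.default_rng(2)

def objective(x):
    t, h = x                               # plate thickness, section height (arbitrary units)
    weight = t * h                         # stand-in for treadway weight
    strength = 50.0 * t * h**2             # stand-in for section capacity
    penalty = max(0.0, 1000.0 - strength)  # required capacity = 1000 (assumed)
    return weight + 10.0 * penalty

n, dims, iters = 30, 2, 200
x = rng.uniform(0.5, 10.0, (n, dims))
v = np.zeros((n, dims))
pbest = x.copy()
pval = np.array([objective(p) for p in x])
gbest = pbest[pval.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    v = 0.7*v + 1.5*r1*(pbest - x) + 1.5*r2*(gbest - x)
    x = np.clip(x + v, 0.5, 10.0)
    vals = np.array([objective(p) for p in x])
    improved = vals < pval
    pbest[improved], pval[improved] = x[improved], vals[improved]
    gbest = pbest[pval.argmin()].copy()

print("best (t, h):", gbest, "objective:", pval.min())
```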

Keywords: CFRP deployable bridges, disaster relief, military bridging, optimization of composites, particle swarm optimization

Procedia PDF Downloads 120
1766 Finite Element Analysis and Design Optimization of Stent and Balloon System

Authors: V. Hashim, P. N. Dileep

Abstract:

Stent implantation is seen as the most successful method to treat coronary artery disease. Different types of stents are available on the market these days, and the success of a stent implantation greatly depends on the proper selection of a suitable stent for a patient. Computer numerical simulation is a cost-effective way to choose a compatible stent. Studies confirm that the design characteristics of a stent have great importance with regard to the pressure it can sustain, the maximum displacement it can produce, the stress concentrations that develop, and so on. In this paper, different stent designs were analyzed together with the balloon to optimize the stent and balloon system. The commercially available Palmaz-Schatz stent was selected for analysis, and Abaqus software was used to simulate the system. This work is a finite element analysis of the arterial stent implant to find the design factors affecting stress and strain. The work consists of two phases. In the first phase, the stress distributions of three models were compared: a stent without a balloon, a stent with a balloon of equal length, and a stent with a balloon longer than the stent. In the second phase, three different design models of the Palmaz-Schatz stent were compared while keeping the balloon length constant. The results obtained from the analysis show that the design of the strut has a strong effect on the stress distribution; a design with chamfered slots gave better results. The length of the balloon also influences the stress concentration in the stent: increasing the balloon length reduces stress but increases the dog-boning effect.

Keywords: coronary stent, finite element analysis, restenosis, stress concentration

Procedia PDF Downloads 611
1765 Following the Modulation of Transcriptional Activity of Genes by Chromatin Modifications during the Cell Cycle in Living Cells

Authors: Sharon Yunger, Liat Altman, Yuval Garini, Yaron Shav-Tal

Abstract:

Understanding the dynamics of transcription in living cells has improved since the development of quantitative fluorescence-based imaging techniques. We established a method for following transcription from a single-copy gene in living cells. A gene tagged in its 3' UTR with MS2 repeats, used for mRNA tagging, was integrated into a single genomic locus. The actively transcribing gene was detected and analyzed by fluorescence in situ hybridization (FISH) and live-cell imaging. Several cell clones were created that differed in the promoter regulating the gene; comparative analysis could therefore be performed without the risk of different position effects at each integration site. Cells in S/G2 phases could be detected exhibiting two adjacent transcription sites on sister chromatids. A sharp reduction in transcription levels was observed as cells progressed along the cell cycle. We hypothesized that a change in chromatin structure acts as a general mechanism during the cell cycle, leading to down-regulation of the activity of some genes. We addressed this question by treating the cells with chromatin-decondensing agents. Quantifying and imaging the treated cells suggests that chromatin structure plays a role both in regulating transcription levels along the cell cycle and in limiting an active gene from reaching its maximum transcription potential at any given time. These results contribute to understanding the role of chromatin as a regulator of gene expression.

Keywords: cell cycle, living cells, nucleus, transcription

Procedia PDF Downloads 283
1764 Analysis of the Effects of Vibrations on Tractor Drivers by Measurements With Wearable Sensors

Authors: Rino Gubiani, Nicola Zucchiatti, Ugo Da Broi, Marco Bietresato

Abstract:

The problem of vibrations in agriculture is very important due to the different types of machinery used on the different types of soil on which work is carried out. One of the most commonly used machines is the tractor, for which the phenomenon has long been studied by measuring whole-body vibration with a sensor placed on the seat. However, this measurement system does not take into account the characteristics of the drivers, such as their body mass index (BMI), their gender, or the muscle fatigue they are subjected to, which is highly dependent, for example, on their age. The aim of the research was therefore to place sensors not only on the seat but also along the spinal column, to check the transmission of vibration to drivers of different BMI and gender, on different tractors, and at different travel speeds. The test also used wearable sensors such as a dynamometer applied to the muscles, whose data were correlated with the vibrations produced by the tractor. Initial data show that even on new tractors with pneumatic seats, the vibrations attenuate little and remain correlated with the roughness of the track travelled and the forward speed. Other important metrics are the root-mean-square values referred to 8 hours (A(8)x,y,z) and the maximum transient vibration values (MTVVx,y,z); among the latter, the MTVVz values were problematic (the limiting factor in most cases) and were always aggravated by speed. The MTVVx values can be lowered by a tyre-pressure adjustment system able to properly adjust the tyre pressure according to the specific situation (ground, speed) in which a tractor is operating.
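
For reference, a sketch of how these two metrics are commonly defined (following ISO 2631-1, which is assumed here since the abstract does not cite its standard): the daily exposure A(8) rescales the frequency-weighted r.m.s. acceleration a_w measured over the exposure time T_exp to an 8-hour reference, and the MTVV is the peak of a short running r.m.s.

```latex
A(8) = a_w \sqrt{\frac{T_{\mathrm{exp}}}{T_0}}, \qquad T_0 = 8\ \mathrm{h}

\mathrm{MTVV} = \max_{t_0}\, a_w(t_0), \qquad
a_w(t_0) = \left[ \frac{1}{\tau} \int_{t_0-\tau}^{t_0} a_w^2(t)\, dt \right]^{1/2},
\quad \tau = 1\ \mathrm{s}
```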

Keywords: fatigue, effects of vibration on health, tractor driver vibrations, vibration, musculoskeletal disorders

Procedia PDF Downloads 51
1763 Laboratory Scale Experimental Studies on CO₂ Based Underground Coal Gasification in Context of Clean Coal Technology

Authors: Geeta Kumari, Prabu Vairakannu

Abstract:

Coal is the most abundant fossil fuel. In India, around 37% of coal resources are found at depths of more than 300 meters, and more than 70% of electricity production depends on coal. Coal combustion produces greenhouse and pollutant gases such as CO₂, SOₓ, NOₓ, and H₂S. Underground coal gasification (UCG) is an efficient and economical in-situ clean coal technology that converts these unmineable coals into valuable calorific gases. The UCG syngas (mainly H₂, CO, CH₄, and some lighter hydrocarbons) can be utilized for electricity production and the manufacture of various useful chemical feedstocks. It is an inherently clean coal technology, as it avoids ash disposal, mining, transportation, and storage problems. Gasifying underground coal with steam as the gasifying medium is not straightforward, because sending superheated steam to deep underground coal poses major transportation difficulties and is costly. To reduce this problem, we have used CO₂, a major greenhouse gas, as the gasifying medium. This paper presents a laboratory-scale underground coal gasification experiment on a coal block using CO₂ as the gasifying medium. In the present experiment, oxygen was first injected for combustion for 1 hour; once the zone temperatures exceeded 1000 ºC, the supply of CO₂ as the gasifying medium was started. The gasification experiment was performed at atmospheric CO₂ pressure, and the amount of CO produced by the Boudouard reaction (C + CO₂ → 2CO) was found to be around 35%. The experiment ran for almost 5 hours. The maximum gas composition observed was 35% CO, 22% H₂, and 11% CH₄, with an LHV of 248.1 kJ/mol at a CO₂/O₂ ratio of 0.4 by volume.
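
As a rough cross-check of the reported LHV (an illustration, not from the paper): weighting standard molar lower heating values of the main components (CO ≈ 283.0 kJ/mol, H₂ ≈ 241.8 kJ/mol, CH₄ ≈ 802.3 kJ/mol) by the quoted fractions gives a value close to the reported one, with the remainder attributable to the lighter hydrocarbons.

```latex
\mathrm{LHV}_{\mathrm{mix}} \approx \sum_i y_i\,\mathrm{LHV}_i
= 0.35\,(283.0) + 0.22\,(241.8) + 0.11\,(802.3)
\approx 240.5\ \mathrm{kJ/mol}
```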

Keywords: underground coal gasification, clean coal technology, calorific value, syngas

Procedia PDF Downloads 211
1762 Increasing of Gain in Unstable Thin Disk Resonator

Authors: M. Asl. Dehghan, M. H. Daemi, S. Radmard, S. H. Nabavi

Abstract:

Thin disk lasers are engineered for efficient thermal cooling and exhibit superior performance in this respect. However, the disk thickness and large pumped area make the use of this gain format in a resonator difficult when constructing a single-mode laser. Choosing an unstable resonator design is beneficial for this purpose. On the other hand, the low gain of the medium restricts the application of unstable resonators to low magnifications and therefore to poor beam quality. A promising idea for enabling the application of unstable resonators to wide-aperture, low-gain lasers is to couple a fraction of the outcoupled radiation back into the resonator. The output coupling then depends on the back-reflection ratio and can be adjusted independently of the magnification. The excitation of the converging wave can be achieved using an external reflector. The resonator performance is predicted numerically. First, the threshold conditions of linear, V-shaped, and 2V-shaped resonators are investigated. The results show that the maximum magnification is 1.066, which is very low for high-beam-quality purposes. Inserting an additional reflector compensates for the low gain. The reflectivity and the related magnification of a 350-micron Yb:YAG disk are calculated. The theoretical model is based on coupled Kirchhoff integrals and solved numerically by the Fox and Li algorithm. The results show that the back-reflection mechanism, combined with an increased number of beam incidences on the disk, can produce high gain and high magnification.
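
As context for the numerical method named above, a minimal generic Fox-Li iteration is sketched below for a simple 1-D aperture-limited cavity; this illustrates the algorithm only, not the authors' coupled-Kirchhoff model, and the geometry and wavelength are assumed values.

```python
# Generic 1-D Fox-Li sketch: repeated angular-spectrum passes between two
# apertured mirrors until the dominant transverse mode and its per-pass
# amplitude eigenvalue converge.
import numpy as np

wavelength, L, a = 1.03e-6, 0.2, 1.0e-3   # m: Yb:YAG-like wavelength, cavity length, aperture half-width
n, width = 2048, 8.0e-3                   # grid points, transverse window (assumed)
x = np.linspace(-width/2, width/2, n)
dx = x[1] - x[0]
fx = np.fft.fftfreq(n, d=dx)
H = np.exp(-1j * np.pi * wavelength * L * fx**2)  # paraxial transfer function (constant phase dropped)
aperture = (np.abs(x) <= a).astype(float)

u = aperture * np.exp(-(x / a)**2)        # arbitrary starting field
eigval = 0.0
for _ in range(300):
    u = np.fft.ifft(np.fft.fft(u) * H) * aperture  # one mirror-to-mirror pass + aperture clip
    norm = np.sqrt(np.sum(np.abs(u)**2))
    eigval, u = norm, u / norm                     # per-pass amplitude eigenvalue estimate

print("diffraction power loss per pass ~", 1 - eigval**2)
```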

Keywords: unstable resonators, thin disk lasers, gain, external reflector

Procedia PDF Downloads 398
1761 Node Optimization in Wireless Sensor Network: An Energy Approach

Authors: Y. B. Kirankumar, J. D. Mallapur

Abstract:

Wireless Sensor Networks (WSNs) are an emerging technology with great promise for various low-cost applications, both for the general public and for defence. Wireless sensor communication technology allows the random participation of sensor nodes in particular applications, which can leave much of the simulation area uncovered, with few nodes located at far distances. The drawback of such a network is that extra energy is spent where nodes are densely located, since more nodes than necessary cover small communication distances, while in regions with fewer nodes the source node must again spend additional energy to transmit a packet to its neighbours and onwards to the destination. The proposed work develops an Energy-Efficient Node Placement Algorithm (EENPA) to place sensor nodes efficiently in the simulated area: all nodes are placed equidistantly on radial paths so as to cover the maximum area. The total energy consumed by each node is lower than with random placement, since the burden is shared equally by fewer far-located nodes and the nodes are distributed over the whole simulation area. The network lifetime also proves to be longer than with random placement of nodes. Simulations were carried out in the QualNet simulator, and the results of the EENP algorithm were compared with those of random node placement.
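
An illustrative sketch of the equidistant radial placement described above (the paper's exact EENPA rules are not given): nodes are placed on k radial spokes, evenly spaced from the centre to the field radius.

```python
# Sketch: equidistant node placement along evenly spaced radial paths.
import math

def radial_placement(radius, spokes, nodes_per_spoke):
    """Return (x, y) positions on evenly spaced radial paths."""
    positions = []
    for s in range(spokes):
        theta = 2 * math.pi * s / spokes
        for i in range(1, nodes_per_spoke + 1):
            r = radius * i / nodes_per_spoke   # equidistant along the spoke
            positions.append((r * math.cos(theta), r * math.sin(theta)))
    return positions

# Example: 100 m field, 8 spokes, 5 nodes per spoke -> 40 nodes
for x, y in radial_placement(100.0, 8, 5)[:5]:
    print(f"({x:7.2f}, {y:7.2f})")
```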

Keywords: energy, WSN, wireless sensor network, energy approach

Procedia PDF Downloads 296
1760 A Literature Review on Bladder Management in Individuals with Spinal Cord Injury

Authors: Elif Ates, Naile Bilgili

Abstract:

Background: One of the most important medical complications that individuals with spinal cord injury (SCI) face is the neurogenic bladder. Objectives: To review the methods used for management of the neurogenic bladder and their effects. Methods: The study was conducted by searching the CINAHL, EBSCOhost, MEDLINE, Science Direct, Ovid, ProQuest, Web of Science, and ULAKBİM National Databases for studies published between 2005 and 2015. Key words used during the search included ‘spinal cord injury’, ‘bladder injury’, ‘nursing care’, ‘catheterization’ and ‘intermittent urinary catheter’. After examination of 551 studies, 21 studies which met the inclusion criteria were included in the review. Results: The mean age of individuals across all study samples was 42 years. The most commonly used bladder management method was clean intermittent catheterization (CIC). Compliance with CIC was found to be significantly related to spasticity, maximum cystometric capacity, and the person performing catheterization (p < .05). The main reason for changing the existing bladder management method was urinary tract infections (UTIs). Individuals who performed CIC by themselves and who voided spontaneously had better quality of life. Patient age, occupation status, and whether they performed CIC by themselves were found to be significantly associated with depression level (p ≤ .05). Conclusion: As the most commonly used method for bladder management, CIC is reliable and effective, and reduces the risk of UTI development. Individuals with neurogenic bladder have a higher prevalence of depression symptoms than the general population.

Keywords: bladder management, catheterization, nursing, spinal cord injury

Procedia PDF Downloads 162
1759 An Analytical Formulation of Pure Shear Boundary Condition for Assessing the Response of Some Typical Sites in Mumbai

Authors: Raj Banerjee, Aniruddha Sengupta

Abstract:

An earthquake, associated with a typical fault rupture, initiates at the source, propagates through a rock or soil medium, and finally daylights at a surface that might be a populous city. The detrimental effects of an earthquake are often quantified in terms of the responses of superstructures resting on the soil. Hence, there is a need to estimate the amplification of bedrock motions due to the influence of local site conditions. In the present study, field borehole log data for the Mangalwadi and Walkeswar sites in Mumbai city are considered. The data consist of the variation of SPT N-value with soil depth. A correlation between shear wave velocity (Vₛ) and SPT N-value for various soil profiles of Mumbai city has been developed using various existing correlations and is used further for site response analysis. A MATLAB program was developed for the ground response analysis, performing two-dimensional linear and equivalent-linear analyses for some typical Mumbai soil sites using a pure shear (multi-point constraint) boundary condition. The model is validated in the linear elastic and equivalent-linear domains using the popular program DEEPSOIL. Three actual earthquake motions were selected based on their frequency contents and durations and scaled to a PGA of 0.16g for the present ground response analyses. The results are presented in terms of peak acceleration time history with depth, peak shear strain time history with depth, Fourier amplitude versus frequency, response spectrum at the surface, etc. The peak ground acceleration amplification factors are found to be about 2.374, 3.239, and 2.4245 for the Mangalwadi site, and 3.42, 3.39, and 3.83 for the Walkeswar site, using the 1979 Imperial Valley, 1989 Loma Gilroy, and 1987 Whittier Narrows earthquakes, respectively. In the absence of any site-specific response spectrum for the chosen sites in Mumbai, the generated spectrum at the surface may be utilized for the design of superstructures at these locations.
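
A minimal sketch of two of the quantities reported above, the PGA amplification factor and the Fourier amplitude spectrum, computed here from synthetic placeholder time histories rather than the study's actual motions.

```python
# Sketch: PGA amplification factor and Fourier amplitude spectrum from
# hypothetical bedrock and surface acceleration time histories.
import numpy as np

dt = 0.01                                  # s, sampling interval (assumed)
t = np.arange(0, 40, dt)
bedrock = 0.16 * 9.81 * np.sin(2*np.pi*2.0*t) * np.exp(-0.1*t)  # ~0.16 g PGA input
surface = 2.4 * bedrock                    # stand-in amplified surface motion

pga_amplification = np.max(np.abs(surface)) / np.max(np.abs(bedrock))

freq = np.fft.rfftfreq(t.size, d=dt)
fourier_amplitude = np.abs(np.fft.rfft(surface)) * dt

print(f"PGA amplification factor = {pga_amplification:.3f}")
print("Fourier amplitude at ~2 Hz:", fourier_amplitude[np.argmin(np.abs(freq - 2.0))])
```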

Keywords: deepsoil, ground response analysis, multi point constraint, response spectrum

Procedia PDF Downloads 168
1758 Visualization of Interaction between Pochonia Chlamydosporia and Meloidogyne Incognita and Their Impact on Tomato Crop

Authors: Saifullah K., Muhammad Naziruddin Saifullah, Muhammad N.

Abstract:

The biocontrol potential and mechanism of P. chlamydosporia against Meloidogyne incognita were evaluated in the present study. Under in vitro conditions, P. chlamydosporia was tested for parasitism of eggs and females of M. incognita. The results indicated that this fungus parasitized 87% of eggs and 82% of females. The culture filtrate (CF) of P. chlamydosporia was tested for larvicidal activity against M. incognita second-stage juveniles. The maximum mortality was 97.3% at 100% concentration of the culture filtrate, while the minimum mortality was 7.3% at 25% concentration after 24 hours. The results of the pot experiment showed that P. chlamydosporia reduced the incidence of root-knot nematode (RKN) and improved all tested agronomic growth parameters. Treatment with M. incognita alone reduced plant height, fresh shoot weight, and fresh root weight by 44.7%, 29.8%, and 32.8%, respectively, compared with the uninoculated healthy control. Histopathological studies of the interaction of Pochonia chlamydosporia and Meloidogyne incognita in tomato roots revealed anatomical changes among treatments. Fewer and smaller galls and scarcer abnormalities in the vascular cylinder were observed in plants inoculated with both P. chlamydosporia and M. incognita than in plants treated with the nematode only. In plants inoculated only with P. chlamydosporia, the fungus was seen in the intercellular spaces of cortical and epidermal cells, while the vascular bundles remained intact. In the infected roots, many mature females were seen feeding on giant cells. The findings also revealed that the healthy control plants were not affected and showed no histological changes.

Keywords: histopathology, Pochonia chlamydosporia, Meloidogyne incognita, tomato

Procedia PDF Downloads 90
1757 Effects of Rockdust as a Soil Stabilizing Agent on Poor Subgrade Soil

Authors: Muhammad Munawar

Abstract:

Pavement failure is normally associated with horizontal movement of the subgrade caused by the pavement absorbing water, and with excessive deflection and differential settlement of the material beneath the pavement. The aim of this research is to study the effect of an additive (rock dust) on the stability and bearing capacity of selected soils in Mardan City. The physical, chemical, and engineering properties of the soil were studied, and the soil was treated with the rock dust admixture with the goal of stabilizing the local soil. Stabilization or modification was done by blending rock dust into the soil in increments of 5%, i.e., at 5%, 10%, and 15%, respectively. The following tests were done on the treated samples: Atterberg limits (liquid limit, plasticity index, plastic limit), the standard compaction test, the California bearing ratio test, and the direct shear test. Particle size analysis demonstrated that the soil gradation is narrow. The plasticity index (PI), liquid limit (LL), and plastic limit (PL) decreased with the addition of rock dust. The maximum dry density increases with the addition of rock dust up to 10%; beyond 10%, it decreases. The cohesion (c) diminished, while the angle of internal friction and the California bearing ratio (CBR) improved with the addition of rock dust. The investigation demonstrated that rock dust is the best stabilizer for the case study (Toru Road, Mardan), with an optimal dosage of 10%.

Keywords: rockdust, stabilization, modification, CBR

Procedia PDF Downloads 262
1756 Long-Term Monitoring and Seasonal Analysis of PM10-Bound Benzo(a)pyrene in the Ambient Air of Northwestern Hungary

Authors: Zs. Csanádi, A. Szabó Nagy, J. Szabó, J. Erdős

Abstract:

Atmospheric aerosols have several important environmental impacts and health effects in terms of air quality. Monitoring PM10-bound polycyclic aromatic hydrocarbons (PAHs) therefore has environmental significance and health protection value. Benzo(a)pyrene (BaP) is the most relevant indicator of these PAH compounds. In Hungary, the Hungarian Air Quality Network provides air quality monitoring data for several air pollutants, including BaP, but these data show only annual mean concentrations and maximum values. The seasonal variation of BaP concentrations between the heating and non-heating periods can also play an important role. For this reason, the main objective of this study was to assess the annual concentration and seasonal variation of BaP associated with PM10 in the ambient air at seven different sampling sites (six urban and one rural) in Northwestern Hungary over the sampling period 2008–2013. A total of 1475 PM10 aerosol samples were collected at the different sampling sites and analyzed for BaP by a gas chromatography method. The BaP concentrations ranged from undetected to 8 ng/m3, with mean values of 0.50-0.96 ng/m3 across the sampling sites. Relatively higher concentrations of BaP were detected in samples collected at each sampling site in the heating seasons compared with the non-heating periods. The annual mean BaP concentrations were comparable with the published data of the other Hungarian sites.

Keywords: air quality, benzo(a)pyrene, PAHs, polycyclic aromatic hydrocarbons

Procedia PDF Downloads 290
1755 Biosorption of Manganese Mine Effluents Using Crude Chitin from Philippine Bivalves

Authors: Randy Molejona Jr., Elaine Nicole Saquin

Abstract:

The area around the Ajuy river in Iloilo, Philippines, is currently being mined for manganese ore, and river water samples exceed the maximum manganese contaminant level set by the US EPA. At the same time, the surplus of local bivalve waste is another environmental concern. Synthetic chemical treatment compromises water quality, leaving toxic residues. An alternative treatment process is therefore biosorption, the use of the physical and chemical properties of biomass to adsorb heavy metals from contaminated water. This study aims to extract crude chitin from the shell wastes of Bractechlamys vexillum, Perna viridis, and Placuna placenta and determine its adsorption capacity for manganese in simulated and actual mine water. Crude chitin was obtained by pulverization, deproteinization, demineralization, and decolorization of the shells. Biosorption by flocculation followed a 5 g : 50 mL chitin-to-water ratio. Filtrates were analyzed using MP-AES after 24 hours. In actual and simulated mine water, respectively, B. vexillum yielded the highest adsorption percentages, 91.43% and 99.58%, comparable to P. placenta (91.43% and 99.37%) and significantly different from P. viridis (-57.14% and 31.53%) (p < 0.05). FT-IR validated the presence of chitin in the shells based on carbonyl-containing functional groups at peaks of 1530-1560 cm⁻¹ and 1660-1680 cm⁻¹. SEM micrographs showed the amorphous and non-homogeneous structure of the chitin. Thus, crude chitin from B. vexillum and P. placenta can serve as biosorbents for the treatment of manganese-impacted effluents and can promote appropriate waste management of local bivalves.
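
For reference, the adsorption (removal) percentages quoted above are conventionally computed from the initial and final metal concentrations; a sketch of the standard relation is given below (the paper does not state its exact formula, so this is the assumed convention).

```latex
% Removal efficiency from initial (C_i) and final (C_f) Mn concentrations:
R\,(\%) = \frac{C_i - C_f}{C_i} \times 100
% A negative value (as for P. viridis in actual mine water) means the final
% concentration exceeded the initial one, i.e. apparent release rather than uptake.
```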

Keywords: biosorption, chitin, FT-IR, mine effluents, SEM

Procedia PDF Downloads 178
1754 Technology in the Calculation of People Health Level: Design of a Computational Tool

Authors: Sara Herrero Jaén, José María Santamaría García, María Lourdes Jiménez Rodríguez, Jorge Luis Gómez González, Adriana Cercas Duque, Alexandra González Aguna

Abstract:

Background: The concept of health has evolved throughout history. The health level is determined by the individual's own perception and is a dynamic process over time, so variations can be seen from one moment to the next. Knowing the health of the patients one cares for thus facilitates decision-making about care. Objective: To design a technological tool that calculates a person's health level sequentially over time. Material and Methods: Deductive methodology through text analysis, extraction and logical formalization of knowledge, and education with an expert group. Study period: September 2015 to the present. Results: A computational tool for use by health personnel has been designed. It has 11 variables, and each variable can be given a value from 1 to 5, with 1 being the minimum value and 5 the maximum. By adding the results of the 11 variables, we obtain a magnitude at a certain time: the person's health level. The health calculator represents a person's health level at a given time, and establishing temporal cuts is useful for determining the evolution of the individual over time. Conclusion: Information and communication technologies (ICT) support training and assistance in various disciplinary areas, and their relevance in the field of health is worth highlighting. Based on the formalization of health, care acts can be directed towards some of the propositional elements of the concept above; these care acts will modify the person's health level. The health calculator allows the prioritization and prediction of different health care strategies in hospital units.
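
A minimal sketch of the scoring scheme described above: 11 variables, each scored 1-5, summed to a health level between 11 and 55. The variable names are hypothetical; the abstract does not list them.

```python
# Sketch: sum eleven 1-5 scores into a single health-level magnitude.
def health_level(scores: dict) -> int:
    """Return the health level as the sum of eleven 1-5 variable scores."""
    if len(scores) != 11:
        raise ValueError("exactly 11 variables are expected")
    if not all(1 <= v <= 5 for v in scores.values()):
        raise ValueError("each variable must score between 1 and 5")
    return sum(scores.values())

# Hypothetical assessment at one temporal cut
example = {f"variable_{i}": 4 for i in range(1, 12)}
print(health_level(example))   # 44, within the 11-55 range
```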

Keywords: calculator, care, eHealth, health

Procedia PDF Downloads 246
1753 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species

Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel

Abstract:

Populus spp. (poplar) are the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, preservation of robust forest ecosystems, and economic prospects for rural communities. To gain a better understanding of the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA). This assessment is an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price of one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
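
An illustrative sketch of the "minimum delivered selling price" idea described above: find the per-tonne price at which the net present value of the cropping-period cash flow reaches zero at the target rate of return. All numbers below are hypothetical assumptions, not the paper's results.

```python
# Sketch: minimum selling price as the root of NPV(price) = 0.
import numpy as np
from scipy.optimize import brentq

years = np.arange(0, 11)                         # 10-year cropping period (assumed)
costs = np.array([1200.0] + [250.0] * 10)        # $/ha: establishment, then annual care (assumed)
yields = np.array([0.0, 0, 0, 9, 9, 9, 9, 9, 9, 9, 9])  # t/ha delivered per year (assumed)
rate = 0.06                                      # target rate of return (assumed)

def npv(price):
    cash = yields * price - costs
    return np.sum(cash / (1 + rate) ** years)

min_price = brentq(npv, 1.0, 500.0)
print(f"minimum delivered selling price ~ ${min_price:.2f}/t")
```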

Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis

Procedia PDF Downloads 63
1752 Dataset Quality Index: Development of a Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes considerable time to data quality processes, while a project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and therefore take a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset, for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were taken to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can be measured directly from the data within the datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required in the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) ten useful indicators and measurements were developed; (2) based on statistical characteristics, the ten indicators could be reduced to four dimensions; and (3) the developed composite indicator, the SDQI, can describe the overall quality of each dataset and can separate datasets into three levels: good quality, acceptable quality, and poor quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. It also works well with agile methods, for example by using the SDQI for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
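
An illustrative sketch of the aggregation pipeline described above (indicator scores, factor-analysis dimensions, weighted composite, three quality levels) on synthetic data; the ten indicators, the weights, and the level cut-offs are assumptions, since the paper does not publish them.

```python
# Sketch: indicators -> dimensions (factor analysis) -> weighted composite -> levels.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
scores = rng.uniform(0, 1, size=(500, 10))      # 500 datasets x 10 indicator scores (synthetic)

z = StandardScaler().fit_transform(scores)
dims = FactorAnalysis(n_components=4, random_state=0).fit_transform(z)  # 4 dimensions

weights = np.array([0.4, 0.3, 0.2, 0.1])        # assumed effort/usability weights
sdqi = dims @ weights                           # composite indicator per dataset

# Assumed tertile cut-offs into Good / Acceptable / Poor
good, poor = np.quantile(sdqi, [2/3, 1/3])
levels = np.where(sdqi >= good, "Good", np.where(sdqi <= poor, "Poor", "Acceptable"))
print(dict(zip(*np.unique(levels, return_counts=True))))
```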

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 121