Search results for: ordered statistical
3386 Diffusion Magnetic Resonance Imaging and Magnetic Resonance Spectroscopy in Detecting Malignancy in Maxillofacial Lesions
Authors: Mohamed Khalifa Zayet, Salma Belal Eiid, Mushira Mohamed Dahaba
Abstract:
Introduction: Malignant tumors may not be easily detected by traditional radiographic techniques, especially in an anatomically complex area like the maxillofacial region. At the same time, the advent of biological functional MRI was a significant footstep in the diagnostic imaging field. Objective: The purpose of this study was to define the malignant metabolic profile of maxillofacial lesions using diffusion MRI and magnetic resonance spectroscopy as adjunctive aids for diagnosing such lesions. Subjects and Methods: Twenty-one patients with twenty-two lesions were enrolled in this study. Both morphological and functional MRI scans were performed: T1- and T2-weighted images were acquired, diffusion-weighted MRI with four apparent diffusion coefficient (ADC) maps was constructed for analysis, and magnetic resonance spectroscopy with qualitative and semi-quantitative analyses of choline and lactate peaks was applied. All patients then underwent incisional or excisional biopsies within two weeks of the MR scans. Results: Statistical analysis revealed that not all the parameters had the same diagnostic performance: lactate had the highest area under the curve (AUC) of 0.9, while choline was the lowest, with insignificant diagnostic value. The best cut-off value suggested for lactate was 0.125, where any lesion above this value is presumed malignant, with 90% sensitivity and 83.3% specificity. Although the ADC maps had comparable AUCs, the statistical measure that had the final say was the interpretation of the likelihood ratios. As expected, lactate again showed the best combination of positive and negative likelihood ratios, whereas among the maps, the ADC map with 500 and 1000 b-values showed the best realistic combination of likelihood ratios, albeit with lower sensitivity and specificity than lactate. Conclusion: Diffusion-weighted imaging and magnetic resonance spectroscopy are state of the art in the diagnostic arena, and they have manifested themselves as key players in the differentiation of orofacial tumors. The complete biological profile of malignancy can be decoded as low ADC values, high choline and/or high lactate, whereas that of benign entities can be translated as high ADC values, low choline and no lactate.
Keywords: diffusion magnetic resonance imaging, magnetic resonance spectroscopy, malignant tumors, maxillofacial
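As an illustration of how the reported cut-off translates into the diagnostic measures above, the following Python sketch derives sensitivity, specificity and both likelihood ratios from a threshold on a marker; the marker values and labels are hypothetical, not the study's data.

import numpy as np

# Hypothetical lactate values and biopsy-confirmed labels (1 = malignant)
lactate = np.array([0.05, 0.30, 0.14, 0.45, 0.02, 0.20, 0.11, 0.08])
malignant = np.array([0, 1, 0, 1, 0, 1, 1, 0])

cutoff = 0.125  # the best cut-off suggested in the abstract
predicted = lactate > cutoff  # lesions above the cut-off are called malignant

tp = np.sum(predicted & (malignant == 1))   # true positives
fn = np.sum(~predicted & (malignant == 1))  # false negatives
tn = np.sum(~predicted & (malignant == 0))  # true negatives
fp = np.sum(predicted & (malignant == 0))   # false positives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_positive = sensitivity / (1 - specificity)  # positive likelihood ratio
lr_negative = (1 - sensitivity) / specificity  # negative likelihood ratio
print(sensitivity, specificity, lr_positive, lr_negative)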
Procedia PDF Downloads 171
3385 The Spatial Pattern of Economic Rents of an Airport Development Area: Lessons Learned from the Suvarnabhumi International Airport, Thailand
Authors: C. Bejrananda, Y. Lee, T. Khamkaew
Abstract:
With the rise of the importance of air transportation in the 21st century, the role of economics in airport planning and decision-making has become more important to the urban structure and land values around airports. This research therefore examines the relationship between an airport and its impacts on the distribution of urban land uses and land values by applying Alonso's bid rent model. The New Bangkok International Airport (Suvarnabhumi International Airport) was taken as a case study. The analysis covered three different periods of airport development (after the airport site was proposed, during airport construction, and after the opening of the airport). The statistical results confirm that Alonso's model can explain the impacts of the new airport only for the northeast quadrant of the airport, while proximity to the airport showed an inverse relationship with the land value of all six types of land use activities across the three periods. This indicates that land value for commercial land use is the most sensitive to the location of the airport, that is, it has the strongest requirement for accessibility to the airport compared to residential and manufacturing land uses. Also, the bid-rent gradients of the six types of land use activities declined dramatically over the three periods because of the Asian Financial Crisis in 1997. The lessons learned from this research concern the reliability of the data used. The major concern involves the use of different areal units for assessing land value in different periods, zone blocks (1995) versus grid blocks (2002, 2009); as a result, the overall trends in land value assessment are not readily apparent. A further concern is the availability of historical data: because the government did not systematically collect historical land value assessments, some land value data and aerial photos do not cover the entire study area. Finally, the different formats of the aerial photos, hard copy (1995) versus digital (2002, 2009), made it difficult to measure distances. These problems affect the accuracy of the results of the statistical analyses.
Keywords: airport development area, economic rents, spatial pattern, suvarnabhumi international airport
Procedia PDF Downloads 274
3384 Can Career Advancement and Job Security Act as Collaterals for Commitment? Evidence from the Hotel Industry of Malaysia
Authors: Aizzat Md. Nasurdin, Noor Hazlina Ahmad, Cheng Ling Tan
Abstract:
This study aims to examine the role of career advancement and job security as predictors of employee commitment to their organization. Data were collected from 580 frontline employees attached to two departments of 29 luxury hotels in Peninsular Malaysia. Statistical results using the Partial Least Squares technique provided support for the proposed hypotheses. In view of the findings, theoretical and practical implications are discussed.
Keywords: organizational commitment, career advancement, job security, frontline employees, luxury hotels, Malaysia
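The abstract names Partial Least Squares as the estimation technique. As a rough illustration only (the abstract does not specify the software or the measurement model), the sketch below fits a PLS regression with scikit-learn on simulated data; all variable names and effect sizes are hypothetical.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 580  # sample size reported in the abstract
career = rng.normal(size=(n, 3))    # hypothetical career-advancement items
security = rng.normal(size=(n, 3))  # hypothetical job-security items
X = np.hstack([career, security])
# hypothetical commitment score driven by both predictors plus noise
y = X @ np.array([0.4, 0.3, 0.2, 0.5, 0.3, 0.1]) + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=2).fit(X, y)
print(pls.score(X, y))  # R^2 of commitment explained by the two predictors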
Procedia PDF Downloads 391
3383 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia
Authors: Zeinu Ahmed Rabba, Derek D Stretch
Abstract:
Remote sensing contributes valuable information to streamflow estimates. Usually, streamflow is directly measured through ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits the management of water resources and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means to acquire such information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical tools (Bias, R², NS and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to bias-unadjusted ones. The SBPPs with no bias adjustment tend to overestimate (high Bias and high RMSE) the extreme precipitation events and the corresponding simulated streamflow outputs, particularly during wet months (June-September), and underestimate the streamflow prediction over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of the SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the simulated streamflow using the gauged rainfall is superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), waterBase
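For reference, the four evaluation statistics named above (Bias, R², NS and RMSE) can be computed as in this small Python sketch; the definitions below are common conventions (NS taken as the Nash-Sutcliffe efficiency), assumed rather than taken from the paper, and the flow values are toy numbers.

import numpy as np

def evaluate(obs, sim):
    """Bias, RMSE, Nash-Sutcliffe efficiency (NS) and R² for daily flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = np.mean(sim - obs)                       # mean error
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    ns = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return bias, rmse, ns, r2

# toy daily flows (m³/s): observed vs. simulated
print(evaluate([10, 12, 30, 55, 40], [11, 10, 28, 60, 38]))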
Procedia PDF Downloads 285
3382 TiO₂ Nanotube Array Based Selective Vapor Sensors for Breath Analysis
Authors: Arnab Hazra
Abstract:
Breath analysis is a quick, noninvasive and inexpensive technique for disease diagnosis that can be used on people of all ages without any risk. Only a limited number of volatile organic compounds (VOCs) can be associated with the occurrence of specific diseases. These VOCs can be considered disease markers or breath markers. Selective detection of a specific concentration of a breath marker in exhaled human breath is required to detect a particular disease. For example, acetone (C₃H₆O), ethanol (C₂H₅OH) and ethane (C₂H₆) are breath markers, and abnormal concentrations of these VOCs in exhaled human breath indicate diabetes mellitus, renal failure and breast cancer, respectively. Nanomaterial-based vapor sensors are inexpensive, small and potential candidates for the detection of breath markers. In practical measurement, selectivity is the most crucial issue, as the breath marker must be identified accurately at trace levels in the presence of several interfering vapors and gases. The current article concerns a novel technique for selective, low-ppb-level detection of breath markers at very low temperature based on TiO₂ nanotube array vapor sensor devices. A highly ordered and oriented TiO₂ nanotube array was synthesized by electrochemical anodization of high-purity titanium (Ti) foil. 0.5 wt% NH₄F, ethylene glycol and 10 vol% H₂O were used as the electrolyte, and anodization was carried out for 90 min at a 40 V DC potential. An Au/TiO₂-nanotube/Ti sandwich-type sensor device was fabricated for the selective detection of VOCs in the low concentration range. Initially, the sensor was characterized: the resistive and capacitive changes of the sensor were recorded within the valid concentration range for each breath marker (or organic vapor). Sensor resistance decreased and sensor capacitance increased with increasing vapor concentration. The ratio of the resistive slope (mR) to the capacitive slope (mC) then provides a concentration-independent constant term (M) for a particular vapor. For the detection of an unknown vapor, the ratio of resistive change to capacitive change at any concentration equals the previously calculated constant term (M). After successful identification of the target vapor, the concentration was calculated from the straight-line behavior of resistance as a function of concentration. The technique is suitable for the detection of a particular vapor in a mixture of other interfering vapors.
Keywords: breath marker, vapor sensors, selective detection, TiO₂ nanotube array
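The identification logic described above lends itself to a short sketch. The Python code below is a minimal illustration of that two-step scheme; the calibration slopes, tolerance and vapor entries are hypothetical values, not measurements from the article.

# Calibration library: vapor -> (resistive slope mR, capacitive slope mC)
library = {
    "acetone": (-2.0, 0.50),
    "ethanol": (-1.2, 0.60),
}

def identify(delta_r, delta_c, tol=0.05):
    """Match the measured dR/dC ratio against each vapor's constant M = mR/mC."""
    m_measured = delta_r / delta_c
    for vapor, (m_r, m_c) in library.items():
        if abs(m_measured - m_r / m_c) < tol:
            return vapor
    return None  # no known vapor matches

def concentration(vapor, delta_r):
    """Concentration from the linear resistance-versus-concentration behavior."""
    m_r, _ = library[vapor]
    return delta_r / m_r

vapor = identify(-1.0, 0.25)              # measured changes at unknown concentration
print(vapor, concentration(vapor, -1.0))  # -> acetone, 0.5 (arbitrary units)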
Procedia PDF Downloads 155
3381 Efficacy of Sparganium stoloniferum–Derived Compound in the Treatment of Acne Vulgaris: A Pilot Study
Authors: Wanvipa Thongborisute, Punyaphat Sirithanabadeekul, Pichit Suvanprakorn, Anan Jiraviroon
Abstract:
Background: Acne vulgaris is one of the most common dermatologic problems and can have significant psychological and physical effects on patients. The roles of Propionibacterium acnes in acne vulgaris involve the activation of the toll-like receptor 4 (TLR4) and toll-like receptor 2 (TLR2) pathways. Activation of these pathways drives the inflammatory events of acne lesions, comedogenesis and sebaceous lipogenesis. Currently, several topical agents commonly used in treating acne vulgaris are known to have an effect on TLRs, such as retinoic acid and adapalene, but these drugs still have some irritating effects. At present, there is an alarming increase in the rate of bacterial resistance due to the irrational use of antibiotics, both orally and topically. For this reason, acne treatments should contain bioactive molecules targeting the site of action for the most effective therapeutic effect with the fewest side effects. Sparganium stoloniferum is a Chinese aquatic herb containing a compound called Sparstolonin B (SsnB), which has been reported to selectively block toll-like receptor 2 (TLR2)- and toll-like receptor 4 (TLR4)-mediated inflammatory signals. Therefore, this topical TLR2 and TLR4 antagonist, in the form of a Sparganium stoloniferum-derived compound containing SsnB, should be of benefit in reducing the inflammation of acne vulgaris lesions and provide an alternative treatment for patients with this condition. Materials and Methods: The objective of this randomized, double-blinded, split-face, placebo-controlled trial was to study the safety and efficacy of the Sparganium stoloniferum-derived compound. 32 volunteer patients with mild to moderate acne vulgaris according to the global acne grading system were included in the study. After giving informed consent, the subjects were given two topical treatments for acne vulgaris, one being a topical 2.40% Sparganium stoloniferum extract (containing Sparstolonin B) and the other a placebo. The subjects were asked to apply each treatment, assigned by randomization to one half of the face, every morning and night for 8 weeks, and to come in for weekly follow-up. At each visit, the patients underwent lesion counting, including comedones, papules, nodules, pustules and cystic lesions. Results: Over the 8 weeks of the study, the difference in total lesion numbers between the placebo and treatment sides reached statistical significance starting at week 4, where the 95% confidence intervals ceased to overlap and continued to diverge. The decrease in the number of total lesions between week 0 and week 8 on the placebo side was not statistically significant (P > 0.05), while the decrease on the treatment side between week 0 and week 8 was statistically significant (P < 0.001). Conclusion: The data demonstrate that the 2.40% Sparganium stoloniferum extract (containing Sparstolonin B) is more effective than the topical placebo in treating acne vulgaris, showing a significant reduction in the total number of acne lesions. This topical Sparganium stoloniferum extract could therefore become a potential alternative treatment for acne vulgaris.
Keywords: acne vulgaris, sparganium stoloniferum, sparstolonin B, toll-like receptor 2, toll-like receptor 4
Procedia PDF Downloads 187
3380 Nanoparticles Modification by Grafting Strategies for the Development of Hybrid Nanocomposites
Authors: Irati Barandiaran, Xabier Velasco-Iza, Galder Kortaberria
Abstract:
Hybrid inorganic/organic nanostructured materials based on block copolymers are of considerable interest in the field of nanotechnology, as these nanocomposites combine the properties of the polymer matrix with the unique properties of the added nanoparticles. The use of block copolymers as templates offers the opportunity to control the size and distribution of the inorganic nanoparticles. This research is focused on the surface modification of inorganic nanoparticles to reach a good interface between nanoparticles and polymer matrices, which hinders nanoparticle aggregation. The aim of this work is to obtain a good and selective dispersion of Fe3O4 magnetic nanoparticles in different types of block copolymers, such as poly(styrene-b-methyl methacrylate) (PS-b-PMMA), poly(styrene-b-ε-caprolactone) (PS-b-PCL), poly(isoprene-b-methyl methacrylate) (PI-b-PMMA) or poly(styrene-b-butadiene-b-methyl methacrylate) (SBM), by using different grafting strategies. The Fe3O4 magnetic nanoparticles were surface-modified with polymer or block copolymer brushes following different grafting methods (grafting to, grafting from and grafting through) to achieve a selective location of the nanoparticles in the desired domains of the block copolymers. The morphology of the fabricated hybrid nanocomposites was studied by means of atomic force microscopy (AFM), and different annealing methods were used with the aim of reaching well-ordered nanostructured composites. Additionally, the nanoparticle amount was varied in order to investigate the effect of nanoparticle content on the morphology of the block copolymer. Two different techniques were then used to characterize the magnetic properties of the synthesized nanocomposites. First, magnetic force microscopy (MFM) was used to investigate the magnetic properties qualitatively, as this technique allows magnetic domains on the sample surface to be distinguished. Second, magnetic characterization was performed by vibrating sample magnetometry and a superconducting quantum interference device. These measurements demonstrated that the magnetic properties of the nanoparticles were transferred to the nanocomposites, which exhibit superparamagnetic behavior similar to that of the maghemite nanoparticles at room temperature. The obtained advanced nanostructured materials could find applications in the field of dye-sensitized solar cells and electronic nanodevices.
Keywords: atomic force microscopy, block copolymers, grafting techniques, iron oxide nanoparticles
Procedia PDF Downloads 262
3379 Investigation of Effective Parameters on Water Quality of Iranian Rivers Using Hydrochemical and Statistical Methods
Authors: Maryam Sayadi, Rana Sedighpour, Hossein Rezaie
Abstract:
In this study, the water quality of the Gamasiab and Gharehsoo rivers, located in Kermanshah province, was evaluated using data from a five-year period (2014-2018). To evaluate the hydrochemistry of the water, the type and hydrogeochemical facies of the river water were first determined using Stiff and Piper diagrams. Then, based on the Gibbs diagram and combination diagrams, the factors controlling the chemical parameters of the two rivers were identified. Saturation indices were used to predict the possibility of dissolution and deposition of some minerals. Then, in order to classify the water at different sections, fourteen water quality indicators for different uses, along with the WHO standard, were used. Finally, factor analysis was used to determine the processes affecting the hydrochemistry of the two rivers. The results showed that in both rivers the predominant water type and facies are calcium bicarbonate. Also, the main factor changing the chemical quality of the water in both the Gamasiab and Gharehsoo rivers is water-rock interaction. According to the results of the factor analysis, two factors have the greatest impact on water quality in the region in both rivers. Among the parameters of the Gamasiab river, HCO3-, Na+ and Cl- had the highest loadings on the first factor, and SO42- and Mg2+ were selected as the main parameters of the second factor. In the Gharehsoo river, Ca2+, Cl- and Na+ have the highest loadings on the first factor, and Mg2+ and SO42- on the second. The dissolution of carbonate formations, owing to their abundance and extent in the two basins, has the more significant effect on the water chemistry; it has saturated the river water with respect to aragonite, calcite and dolomite. Due to the low contribution of the second factor to changing the chemical parameters, the water of both rivers remains unsaturated with respect to evaporative minerals such as gypsum, halite and anhydrite at all stations. Based on the Schoeller and Wilcox diagrams and other quality indicators at these two sections, the main physicochemical parameters are within the desired range for drinking and agriculture. The results of the Langelier, Ryznar, Larson-Skold and Puckorius indices showed that the water is corrosive for industrial use.
Keywords: factor analysis, hydrochemical, saturation index, surface water quality
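As a generic illustration of the factor-analysis step (not the authors' actual computation; the parameter set, loadings and sample size below are invented), a two-factor decomposition of standardized hydrochemical data can be obtained as follows in Python.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
f1 = rng.normal(size=200)  # latent factor 1: water-rock interaction
f2 = rng.normal(size=200)  # latent factor 2: evaporite dissolution
# columns: HCO3-, Na+, Cl-, SO42-, Mg2+, Ca2+ (standardized, synthetic)
data = np.column_stack([
    0.9 * f1, 0.8 * f1, 0.7 * f1,             # load mainly on factor 1
    0.8 * f2, 0.7 * f2, 0.6 * f1 + 0.3 * f2,  # SO4/Mg on factor 2, Ca mixed
]) + rng.normal(scale=0.3, size=(200, 6))

fa = FactorAnalysis(n_components=2).fit(data)
print(fa.components_)  # loadings: rows are factors, columns are parameters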
Procedia PDF Downloads 126
3378 Atomic Layer Deposition of Metal Oxide Inverse Opals: A Tailorable Platform for Unprecedented Photocatalytic Performance
Authors: Hamsasew Hankebo Lemago, Dóra Hessz, Zoltán Erdélyi, Imre Miklós Szilágyi
Abstract:
Metal oxide inverse opals are a unique class of photocatalysts with a hierarchical structure that mimics the natural opal gemstone. They are composed of a network of interconnected pores, which provides a large surface area and efficient pathways for the transport of light and reactants. Atomic layer deposition (ALD) is a versatile technique for the synthesis of high-precision metal oxide thin films, including inverse opals. ALD allows precise control over the thickness, composition and morphology of the synthesized films, making it an ideal technique for the fabrication of photocatalysts with tailored properties. In this study, we report the synthesis of TiO2, ZnO and Al2O3 inverse opal photocatalysts using thermal or plasma-enhanced ALD. The synthesized photocatalysts were characterized using a variety of techniques, including scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDX), X-ray diffraction (XRD), Raman spectroscopy, photoluminescence (PL), ellipsometry and UV-visible spectroscopy. The results showed that the ALD-synthesized metal oxide inverse opals had a highly ordered structure and a tunable pore size. The PL results showed low recombination rates of photogenerated electron-hole pairs, while the ellipsometry and UV-visible spectroscopy results showed tunable optical properties and band gap energies. The photocatalytic activity of the samples was evaluated by the degradation of methylene blue under visible light irradiation. The results showed that the ALD-synthesized metal oxide inverse opals exhibited high photocatalytic activity, even under visible light irradiation. The composite photocatalysts showed even higher activity than the individual metal oxide inverse opals. The enhanced photocatalytic activity of the composites can be attributed to the synergistic effect between the different metal oxides; for example, Al2O3 can act as a charge carrier scavenger, reducing the recombination of photogenerated electron-hole pairs. The ALD-synthesized metal oxide inverse opals and their composites are promising photocatalysts for a variety of applications, such as wastewater treatment, air purification and energy production. For example, they can be used to remove organic pollutants from wastewater, decompose harmful gases in the air, and produce hydrogen fuel from water.
Keywords: ALD, metal oxide inverse opals, composites, photocatalysis
Procedia PDF Downloads 84
3377 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
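To make the k-mer representation concrete, here is a small Python sketch of counting overlapping k-mers and turning a sequence into a fixed-length feature vector. This is a generic illustration, not the authors' pipeline, and enumerating all 4^k k-mers is only feasible for small k (4^10 is about a million features at the k = 10 used in the study).

from collections import Counter
from itertools import product

def kmer_profile(sequence, k):
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

def feature_vector(sequence, k):
    """Fixed-length count vector over all 4**k possible k-mers."""
    counts = kmer_profile(sequence, k)
    alphabet = ("".join(p) for p in product("ACGT", repeat=k))
    return [counts.get(kmer, 0) for kmer in alphabet]

print(kmer_profile("ATGCGATGA", 3))         # ATG occurs twice
print(len(feature_vector("ATGCGATGA", 3)))  # 64 features for k = 3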
Procedia PDF Downloads 167
3375 The Effect of Impinging WC-12Co Particles Temperature on Thickness of HVOF Thermally Sprayed Coatings
Authors: M. Jalali Azizpour
Abstract:
In this paper, the effect of WC-12Co particle temperature on coating thickness in the HVOF thermal spraying process has been studied. The statistical results show that the spray distance and the oxygen-to-fuel ratio are the most influential factors on particle characteristics and on the thickness of HVOF thermally sprayed coatings. A SprayWatch diagnostic system, scanning electron microscopy (SEM), X-ray diffraction and a thickness measuring system were used for this purpose.
Keywords: HVOF, temperature, thickness, velocity, WC-12Co
Procedia PDF Downloads 241
3374 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems
Authors: Bronwen Wade
Abstract:
Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by performing statistical analysis of how attributes captured in administrative data are related to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more "objective" and "scientific" guides for decision-making instead of subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsycINFO, the ProQuest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed. Academic and grey literature were included. The review includes studies that use quasi-experimental methods and development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk of Bias Assessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for risk development, validation, or re-validation studies. ROBINS-I (Risk of Bias in Non-Randomized Studies of Interventions) was used to assess for bias and guide data extraction for the quasi-experimental studies identified. Due to heterogeneity among papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. 11 papers met the eligibility criteria, and each has an overall high risk of bias based on the PROBAST and ROBINS-I assessments. This is deeply concerning, as major policy decisions have been made based on a limited number of studies with a high risk of bias. The findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions for racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reason for racial disproportionality in child welfare or to social definitions of fairness that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approach to defining and addressing racial disproportionality within studies, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice based on their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.
Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality
Procedia PDF Downloads 54
3373 Effect of Enzymatic Hydrolysis and Ultrasounds Pretreatments on Biogas Production from Corn Cob
Authors: N. Pérez-Rodríguez, D. García-Bernet, A. Torrado-Agrasar, J. M. Cruz, A. B. Moldes, J. M. Domínguez
Abstract:
The world economy is based on non-renewable fossil fuels such as petroleum and natural gas, which entails their rapid depletion and environmental problems. In EU countries, the objective is that at least 20% of the total energy supply in 2020 should be derived from renewable resources. Biogas, a product of the anaerobic degradation of organic substrates, represents an attractive green alternative for meeting partial energy needs. Nowadays, the trend toward a circular economy model involves the efficient use of residues by transforming them from waste into a new resource. In this sense, the characteristics of agricultural residues (which are available in plenty, renewable, as well as eco-friendly) favour their valorisation as substrates for biogas production. Corn cob is a by-product obtained from maize processing, representing 18% of total maize mass. The importance of corn cob lies in the high production of this cereal (more than 1 x 10⁹ tons in 2014). Due to its lignocellulosic nature, corn cob contains three main polymers: cellulose, hemicellulose and lignin. The crystalline, highly ordered structures of cellulose and lignin hinder microbial attack and the subsequent biogas production. For optimal lignocellulose utilization, and to enhance gas production in anaerobic digestion, materials are usually submitted to different pretreatment technologies. In the present work, enzymatic hydrolysis, ultrasound and the combination of both technologies were assayed as pretreatments of corn cob for biogas production. Enzymatic hydrolysis pretreatment was started by adding 0.044 U of Ultraflo® L feruloyl esterase per gram of dry corn cob. Hydrolyses were carried out in 50 mM sodium phosphate buffer, pH 6.0, with a solid:liquid proportion of 1:10 (w/v), at 150 rpm, 40 ºC and darkness for 3 hours. Ultrasound pretreatment was performed by subjecting corn cob, in 50 mM sodium phosphate buffer, pH 6.0, with a solid:liquid proportion of 1:10 (w/v), to a power of 750 W for 1 minute. In order to observe the effect of the combination of both pretreatments, some samples were initially sonicated and then enzymatically hydrolysed. In terms of methane production, anaerobic digestion of the corn cob pretreated by enzymatic hydrolysis was positive, achieving 290 L CH₄ kg MV⁻¹ (compared with 267 L CH₄ kg MV⁻¹ obtained with untreated corn cob). Although the use of ultrasound as the only pretreatment proved detrimental (gas production decreased to 244 L CH₄ kg MV⁻¹ after 44 days of anaerobic digestion), its combination with enzymatic hydrolysis was beneficial, reaching the highest value (300.9 L CH₄ kg MV⁻¹). Consequently, the combination of both pretreatments improved biogas production from corn cob.
Keywords: biogas, corn cob, enzymatic hydrolysis, ultrasound
Procedia PDF Downloads 267
3372 Genome Sequencing, Assembly and Annotation of Gelidium Pristoides from Kenton-on-Sea, South Africa
Authors: Sandisiwe Mangali, Graeme Bradley
Abstract:
A genome is the complete set of an organism's hereditary information, encoded as deoxyribonucleic acid or, in most viruses, ribonucleic acid. The three different types of genomes are the nuclear, mitochondrial and plastid genomes, and their sequences, which are uncovered by genome sequencing, serve as an archive of all genetic information: they enable researchers to understand the composition of a genome, the regulation of gene expression, and how the whole genome works. These sequences enable researchers to explore the population structure, genetic variation and recent demographic events in threatened species. In particular, genome sequencing refers to the process of determining the exact arrangement of the nucleotide bases of a genome, and the process through which all the afore-mentioned genomes are sequenced is referred to as whole or complete genome sequencing. Gelidium pristoides is a South African endemic Rhodophyta species which has been harvested in the Eastern Cape since the 1950s for its high economic value, which is one motivation for its sequencing. Its endemism further motivates its sequencing for conservation biology, as endemic species are more vulnerable to the anthropogenic activities endangering a species. Sequencing, mapping and annotating the Gelidium pristoides genome is thus the aim of this study. To accomplish this aim, the genomic DNA was extracted and quantified using the NucleoSpin Plant Kit, Qubit 2.0 and Nanodrop. Thereafter, the Ion Plus Fragment Library Kit was used for the preparation of a 600 bp library, which was then sequenced on the Ion S5 sequencing platform for two runs. The produced reads were quality-controlled and assembled with the SPAdes assembler using default parameters, and the genome assembly was quality-assessed through the QUAST software. From this assembly, the plastid and mitochondrial genomes were sampled out using Gelidiales organellar genomes as search queries and ordered according to them using the Geneious software. The Qubit and Nanodrop instruments revealed A260/A280 and A230/A260 values of 1.81 and 1.52, respectively. A total of 30792074 reads was obtained, producing 94140 contigs with a total sequence length of 217.06 Mbp, an N50 value of 3072 bp and a GC content of 41.72%. Total lengths of 179281 bp and 25734 bp were obtained for the plastid and mitochondrial genomes, respectively. Genomic data allow a clear understanding of the genomic constituents of an organism and are valuable as foundational information for studies of individual genes and for resolving the evolutionary relationships between organisms, including Rhodophytes and other seaweeds.
Keywords: Gelidium pristoides, genome, genome sequencing and assembly, Ion S5 sequencing platform
Procedia PDF Downloads 150
3371 The Incidence of Concussion across Popular American Youth Sports: A Retrospective Review
Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin H. McCleery
Abstract:
Introduction: A leading cause of emergency room visits among youth in the United States is sports-related traumatic brain injury. Mild traumatic brain injuries (mTBIs), also called concussions, are caused by linear and/or angular acceleration experienced at the head and represent an increasing societal burden. Because the brain is still developing in youth, there is a great risk of long-term neuropsychological deficiencies following a concussion. Accordingly, the purpose of this paper is to investigate incidence rates of concussion across gender for the five most common youth sports in the United States. These include basketball, track and field, soccer, baseball (boys), softball (girls), football (boys), and volleyball (girls). Methods: A PubMed search was performed for four search themes combined. The first theme identified the outcomes (concussion, brain injuries, mild traumatic brain injury, etc.). The second theme identified the sport (American football, soccer, basketball, softball, volleyball, track and field, etc.). The third theme identified the population (adolescence, children, youth, boys, girls). The last theme identified the study design (prevalence, frequency, incidence, prospective). Ultimately, 473 studies were surveyed, with 15 fulfilling the criteria: prospective studies presenting original data and the incidence of concussion in the relevant youth sport. The following data were extracted from the selected studies: population age, total study population, total athletic exposures (AEs) and incidence rate per 1000 athletic exposures (IR/1000). Two one-way ANOVAs and a Tukey's post hoc test were conducted using SPSS. Results: From the 15 selected studies, statistical analysis revealed that the incidence of concussion per 1000 AEs across the considered sports ranged from 0.014 (girls' track and field) to 0.780 (boys' football). The average IR/1000 across all sports was 0.483 and 0.268 for boys and girls, respectively; this difference in IR was found to be statistically significant (p=0.013). Tukey's post hoc test showed that football had a significantly higher IR/1000 than boys' basketball (p=0.022), soccer (p=0.033) and track and field (p=0.026). No statistical difference was found for concussion incidence between girls' sports. Removal of football lowered the IR/1000 for boys, with no statistical difference (p=0.101) compared to girls. Discussion: Football was the only sport showing a statistically significant difference in concussion incidence rate relative to other sports (within gender). Males were overall 1.8 times more likely to be concussed than females when football was included, whereas concussion was more likely for females when football was excluded. While the significantly higher rate of concussion in football is not surprising given the nature and rules of the sport, it is concerning that research has shown a higher incidence of concussion in practices than in games. Interestingly, the findings indicate that girls' sports are more concussive overall when football is removed. This appears to counter the common notion that boys' sports are more physically taxing and dangerous. Future research should focus on understanding the concussive mechanisms of injury in each sport to enable effective rule changes.
Keywords: gender, football, soccer, traumatic brain injury
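The incidence-rate measure used above is straightforward to compute; the following Python sketch shows the IR/1000 calculation and the kind of one-way ANOVA the abstract reports. The counts and per-sport rates in the example are illustrative only, not values from the reviewed studies.

from scipy import stats

def incidence_rate(concussions, athletic_exposures):
    """Concussions per 1000 athlete-exposures (IR/1000)."""
    return 1000 * concussions / athletic_exposures

print(incidence_rate(78, 100000))  # -> 0.78, the order reported for football

# hypothetical per-sport IR/1000 values for each gender
boys = [0.780, 0.150, 0.170, 0.130]
girls = [0.350, 0.200, 0.250, 0.270]
print(stats.f_oneway(boys, girls))  # one-way ANOVA across the two groups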
Procedia PDF Downloads 141
3370 Disentangling the Sources and Context of Daily Work Stress: Study Protocol of a Comprehensive Real-Time Modelling Study Using Portable Devices
Authors: Larissa Bolliger, Junoš Lukan, Mitja Lustrek, Dirk De Bacquer, Els Clays
Abstract:
Introduction and Aim: Chronic workplace stress and its health-related consequences like mental and cardiovascular diseases have been widely investigated. This project focuses on the sources and context of psychosocial daily workplace stress in a real-world setting. The main objective is to analyze and model real-time relationships between (1) psychosocial stress experiences within the natural work environment, (2) micro-level work activities and events, and (3) physiological signals and behaviors in office workers. Methods: An Ecological Momentary Assessment (EMA) protocol has been developed, partly building on machine learning techniques. Empatica® wristbands will be used for real-life detection of stress from physiological signals; micro-level activities and events at work will be based on smartphone registrations, further processed according to an automated computer algorithm. A field study including 100 office-based workers with high-level problem-solving tasks like managers and researchers will be implemented in Slovenia and Belgium (50 in each country). Data mining and state-of-the-art statistical methods – mainly multilevel statistical modelling for repeated data – will be used. Expected Results and Impact: The project findings will provide novel contributions to the field of occupational health research. While traditional assessments provide information about global perceived state of chronic stress exposure, the EMA approach is expected to bring new insights about daily fluctuating work stress experiences, especially micro-level events and activities at work that induce acute physiological stress responses. The project is therefore likely to generate further evidence on relevant stressors in a real-time working environment and hence make it possible to advise on workplace procedures and policies for reducing stress.
Keywords: ecological momentary assessment, real-time, stress, work
Procedia PDF Downloads 161
3369 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and patient deaths per day due to Coronavirus is collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests and linear regression algorithms are chosen to study the model performance in the prediction of new COVID-19 cases. From evaluation metrics such as the r-squared value and the mean squared error, the statistical performance of the models in predicting new COVID cases is evaluated. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm can perform more effectively and efficiently in predicting new COVID cases, which could help the health sector to take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
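A minimal version of the Random Forest workflow described above might look like the Python sketch below; the synthetic case series, the 30-tree setting (one reading of n=30) and the random split are assumptions for illustration, not the authors' exact pipeline.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
days = np.arange(425).reshape(-1, 1)  # ~31 Jan 2020 to 31 Mar 2021
cases = 50 * days.ravel() + rng.normal(scale=500, size=425)  # synthetic series

# 8:2 train/test split as stated in the abstract (random split assumed here)
X_train, X_test, y_train, y_test = train_test_split(
    days, cases, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=30, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(r2_score(y_test, pred), mean_squared_error(y_test, pred))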
Procedia PDF Downloads 121
3368 The Methods of Customer Satisfaction Measurement and Its Statistical Analysis towards Sales and Logistic Activities in Food Sector
Authors: Seher Arslankaya, Bahar Uludağ
Abstract:
Meeting the needs and demands of customers and pleasing them are important requirements for companies in the food sector, where the growth of competition is significantly unpredictable. Customer satisfaction is one of the key concepts, driven mainly by the wide range of customer preferences and expectations of the products and services introduced and delivered to them. In order to meet customer demands, companies in the food sector are expected to have a well-managed Total Quality Management (TQM) system, which sets out to improve the quality of products and services, to reduce costs and to increase customer satisfaction by restructuring traditional management practices. It aims to increase customer satisfaction by meeting customer expectations and requirements. The achievement is determined with the help of customer satisfaction surveys, which are conducted to obtain immediate feedback and to provide quick responses. In addition, the surveys assist strategic planning, which helps to anticipate customers' future needs and expectations. Meanwhile, periodic measurement of customer satisfaction is a must, because a better understanding of customer perceptions from the surveys (collected by questionnaires) gives companies a clear idea of their own strengths and weaknesses, which helps them keep their loyal customers, benchmark themselves against their competitors, and map out their future progress and improvement. In this study, we propose a survey-based customer satisfaction measurement method and its statistical analysis for the sales and logistics activities of food firms. Customer satisfaction is discussed in detail. Furthermore, after analysing the data derived from the questionnaire applied to customers using the SPSS software, the various results obtained from the application are presented. By also applying an ANOVA test, the study analyses whether meaningful differences exist between customer demographic groups and their perceptions. The purpose of this study is also to identify requirements that help to remove the effects that decrease customer satisfaction and to produce loyal customers in the food industry. For this purpose, customer complaints were collected. Additionally, comments and suggestions are made according to the obtained survey results, which would be useful for strategic planning in the food industry.
Keywords: customer satisfaction measurement and analysis, food industry, SPSS, TQM
Procedia PDF Downloads 250
3367 Ranking Theory-The Paradigm Shift in Statistical Approach to the Issue of Ranking in a Sports League
Authors: E. Gouya Bozorg
Abstract:
The issue of ranking sports teams, in particular soccer teams, is of primary importance in professional sports. However, it is still based on classical statistics and on models outside the area of mathematics. Rigorous mathematics, and then statistics, despite the expectations held of them, have not been able to engage effectively with the issue of ranking, a shortcoming that calls for serious examination. The purpose of this study is to change the approach so as to come closer to mathematics proper for use in ranking. We recommend theoretical mathematics as a good option because it can hermeneutically obtain the theoretical concepts and criteria needed for ranking from the everyday language of a league. We have proposed a framework that puts the issue of ranking into a new space, and we have applied it to soccer as a case study. This is an experimental and theoretical study of the issue of ranking in a professional soccer league based on theoretical mathematics, followed by theoretical statistics. First, we give the theoretical definition of the constant number Є = 1.33, the 'golden number' of a soccer league. We then define the 'efficiency of a team' by this number through the formula μ = Pts/(k·Є) - 1, in which Pts is the points obtained by a team in k games played. Moreover, the k·Є index has been used to draw the theoretical median line in the league table and to compare top teams and bottom teams. A theoretical coefficient σ = 1/(1 + Ptx/Ptxn) has also been defined for every match between teams x and xn, with respect to the ability of a team and the points Ptx and Ptxn of both; it gives a performance point resulting in a special ranking for the league, and it has been particularly useful in evaluating the performance of weaker teams. The current theory has been examined against the statistical data of 4 major European leagues during the period 1998-2014. The results of this study showed that the issue of ranking depends on appropriate theoretical indicators of a league. These indicators allowed us to find different forms of ranking of the teams in a league, including the 'special table' of a league. Furthermore, on this basis the issue of a team's record has been revised and amended. In addition, the theory of ranking can be used to compare and classify different leagues and tournaments. Experimental results obtained from the archival statistics of major professional leagues in the world over the past two decades have confirmed the theory. This topic introduces a new theory for the ranking of a soccer league; moreover, this theory can be used to compare different leagues and tournaments.
Keywords: efficiency of a team, ranking, special table, theoretical mathematic
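The two formulas above are simple enough to compute directly; the Python sketch below evaluates them for illustrative point totals (the team values are invented, and the interpretation of μ relative to the k·Є line follows the abstract).

EPSILON = 1.33  # the league's theoretical 'golden number' from the abstract

def efficiency(points, games):
    """Team efficiency mu = Pts / (k * eps) - 1 for Pts points in k games."""
    return points / (games * EPSILON) - 1

def performance(pt_x, pt_xn):
    """Match coefficient sigma = 1 / (1 + Ptx / Ptxn) for teams x and xn."""
    return 1 / (1 + pt_x / pt_xn)

print(efficiency(80, 38))   # a strong team: mu > 0, above the k*eps median line
print(efficiency(40, 38))   # a weak team: mu < 0, below the line
print(performance(30, 60))  # the weaker side earns the larger performance point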
Procedia PDF Downloads 418
3366 Fahr Disease vs Fahr Syndrome in the Field of a Case Report
Authors: Angelis P. Barlampas
Abstract:
Objective: Confusion of terms is common in many situations of everyday life, but in some circumstances, such as in medicine, the precise meaning of a word carries a critical role for the health of the patient. Fahr disease and Fahr syndrome are often falsely used interchangeably, but they are two different conditions with different natural histories, different etiologies and different medical management. A case of the rare Fahr disease is presented, and a comparison with the more common Fahr syndrome follows. Materials and method: A 72-year-old patient came to the emergency department complaining of nonspecific mental disturbances, like anxiety, difficulty concentrating, and tremor. The problems had a long course, but he had the impression that they were getting worse lately, so he decided to have them checked. Past history and laboratory tests were unremarkable. A computed tomography examination was then ordered. Results: The CT exam showed bilateral, hyperattenuating areas of heavy, dense calcium-type deposits in the basal ganglia, striatum, pallidum, thalami, the dentate nucleus, and the cerebral white matter of the frontal, parietal and iniac lobes, as well as small areas of the pons. Taking into account the absence of any known preexisting illness and the fact that the emergency laboratory tests were without findings, the rare Fahr disease was hypothesized. The suspicion was confirmed by further, more specific tests, which showed the absence of any other condition that could share the same radiological image. Differentiating between Fahr disease and Fahr syndrome: Fahr disease is primarily autosomal dominant, with symmetrical and bilateral intracranial calcifications; the patient is healthy until middle age; biochemical abnormalities are absent; and the family history is consistent with autosomal dominant inheritance. Fahr syndrome presents earlier, between 30 and 40 years old, with symmetrical and bilateral intracranial calcifications, and is accompanied by endocrinopathies (idiopathic hypoparathyroidism, secondary hypoparathyroidism, hyperparathyroidism, pseudohypoparathyroidism, pseudopseudohypoparathyroidism, etc.); the disease can appear at any age, and there are abnormal laboratory or imaging findings. Conclusion: Fahr disease and Fahr syndrome are not the same illness, although this is not well known to inexperienced doctors. As clinical radiologists, we have to inform our colleagues when a radiological image, along with the patient's history, probably implies a rare condition rather than something more usual, and to prompt the investigation along the right route. In our case, a genetic test could have been done earlier and revealed the problem, thus avoiding unnecessary specific tests that cost time and are uncomfortable for the patient.
Keywords: fahr disease, fahr syndrome, CT, brain calcifications
Procedia PDF Downloads 62
3365 Feigenbaum Universality, Chaos and Fractal Dimensions in Discrete Dynamical Systems
Authors: T. K. Dutta, K. K. Das, N. Dutta
Abstract:
This paper is primarily concerned with Ricker's population model, f(x) = x e^(r(1-x/k)), where r is the control parameter and k is the carrying capacity, and some fruitful results are obtained with the following objectives: 1) determination of the bifurcation values leading to a chaotic region, 2) development of the statistical methods and analysis required for the measurement of fractal dimensions, 3) calculation of various fractal dimensions. These results also show that the invariant probability distribution on the attractor, when it exists, provides detailed information about the long-term behavior of a dynamical system. At the end, some open problems are posed for further research.
Keywords: Feigenbaum universality, chaos, Lyapunov exponent, fractal dimensions
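A quick way to see the transition to chaos in the Ricker map is to estimate the Lyapunov exponent numerically. The Python sketch below does this by averaging log|f'(x)| along an orbit; the parameter values shown are illustrative choices, not taken from the paper.

import numpy as np

def ricker(x, r, k=1.0):
    """Ricker map f(x) = x * exp(r * (1 - x / k))."""
    return x * np.exp(r * (1 - x / k))

def lyapunov(r, k=1.0, n=2000, burn=200, x0=0.5):
    """Average log|f'(x)| along the orbit; a positive value signals chaos."""
    x, total = x0, 0.0
    for i in range(n):
        if i >= burn:
            # f'(x) = exp(r(1 - x/k)) * (1 - r*x/k) for the Ricker map
            total += np.log(abs(np.exp(r * (1 - x / k)) * (1 - r * x / k)))
        x = ricker(x, r, k)
    return total / (n - burn)

print(lyapunov(1.5))  # negative: stable regime
print(lyapunov(3.0))  # positive: chaotic regime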
Procedia PDF Downloads 302
3364 The Benefits of Regional Brand for Companies
Authors: H. Starzyczna, M. Stoklasa, K. Matusinska
Abstract:
This article deals with the benefits of regional brands for companies in the Czech Republic. The research was focused on finding out the expected and actual benefits of regional brands for companies. The data were obtained by a questionnaire survey and analysed with IBM SPSS. A representative sample of 204 companies was created. The research analysis disclosed the benefits that companies expect a regional brand to bring them; the actual benefits, however, fall well short of these expectations. The statistical testing of hypotheses revealed that the benefits depend on the region of origin, which surprised both us and the regional coordinators.
Keywords: brand, regional brands, product protective branding programs, brand benefits
Procedia PDF Downloads 346
3363 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis
Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc
Abstract:
The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies; mainly qualitative observations have been performed, and correlations between the volume change of the chamber and the maximum pressure are limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon's life phases. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, is carried out. Two test barrels, "5.56x45mm NATO" and "7.62x51mm NATO," are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a WEIBEL radar is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistics model is developed using the dynamic finite element program LS-DYNA. In this work, two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated and checked against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. The results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted: the projectile motions, the dynamic engraving resistances and the maximum pressures are compared and analyzed. Finally, using the obtained database, a statistical correlation between the muzzle velocity, the maximum pressure and the chamber volume is established.
Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation
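The final correlation step is, in essence, a regression across the measured life-cycle data; a minimal Python sketch of that step is shown below, with entirely invented chamber-volume and pressure values standing in for the study's database.

import numpy as np

# hypothetical life-cycle measurements: chamber volume grows with erosion
# while the maximum pressure drops (illustrative values only)
volume = np.array([1.00, 1.02, 1.05, 1.09, 1.14])      # normalized chamber volume
p_max = np.array([380.0, 371.0, 362.0, 349.0, 333.0])  # maximum pressure, MPa

slope, intercept = np.polyfit(volume, p_max, 1)  # linear fit p_max ~ volume
r = np.corrcoef(volume, p_max)[0, 1]             # correlation coefficient
print(slope, intercept, r)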
Procedia PDF Downloads 215
3362 Opportunities of an Industrial City in Leisure Tourism
Authors: E. Happ, A. Albert Tóth
Abstract:
The aim of the research is to investigate the forms of leisure tourism demand in a West Hungarian industrial city, Győr. Today, Győr is still a traditional industrial city whose industry is mainly based on the vehicle industry, but the role of tourism in the life of the city is increasing as well. Because of the industrial character and strong economy of the city, the ratio of business tourists is high; it can be stated that MICE tourism dominates in Győr. The developments of the last decade, however, can provide the city with new tourism products that increase leisure tourism. These new types of tourism, besides business tourism, can help providers increase occupancy rates and demand at weekends. The paper presents the theoretical background of the topic and shows the present situation of tourism in Győr with secondary data. The secondary research draws on statistical data from the Hungarian Statistical Office and the city council and is based on the providers' data. The next part of the paper identifies the potential types of leisure tourism with the help of primary research, which comprises an online questionnaire with a sample of 1000 potential customers, complemented by 10 in-depth interviews with tourism experts who explained their views on the opportunities for leisure tourism in Győr from the providers' side. The online questionnaire was filled out in spring 2017 by customers who had already stayed in Győr or planned to visit the city; at the same time, in-depth interviews were conducted with hotel managers, heads of tourism institutions, and employees of the council. Based on the research, it can be stated that the tourism supply of Győr allows the share of leisure tourism in the city to increase. Cultural and health tourism show the greatest development potential, but the supply of tourism services can also be developed in order to increase the number of guest nights. Tourism marketing needs to be strengthened in the city, and marketing activity that distinguishes Győr from other cities is needed as well. To conclude, although Győr is an industrial city, its industrial character is transforming, and tourism is also strongly present in its economy. Besides the leading role of business tourism, different types of leisure tourism have the opportunity to develop in the city. Keywords: business tourism, Győr, industrial city, leisure tourism, touristic demand
Procedia PDF Downloads 279
3361 Short Life Cycle Time Series Forecasting
Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar
Abstract:
The life cycles of products are becoming shorter and shorter due to increased competition in the market, shorter product development times, and increased product diversity. Short life cycles are normal in the retail industry, the style business, entertainment media, and the telecom and semiconductor industries. Accurate demand forecasting for short life cycle products is therefore of special interest to many researchers and organizations. Because product life cycles are short, the amount of historical data available for forecasting is minimal or even absent when new or modified products are launched in the market. Companies dealing with such products want to increase forecasting accuracy so that they can utilize the full potential of the market while avoiding oversupply. The challenge is to develop a forecasting model that forecasts accurately while handling large variations in the data and considering the complex relationships between its various parameters. Many statistical models for forecasting time series data have been proposed in the literature, but traditional time series forecasting models do not work well for short life cycles due to the lack of historical data, and artificial neural network (ANN) models are very time-consuming for forecasting. We have studied the existing forecasting models and their limitations. This work proposes an effective and powerful approach for short life cycle time series forecasting that takes into consideration the different scenarios of data availability for short life cycle products. We then suggest a methodology that combines statistical analysis with structured judgement; the approach can also be applied across domains. We then describe the method of creating a profile from analogous products, which can be used to forecast products with the historical data of analogous products. We have designed an application that combines data, analytics, and domain knowledge using point-and-click technology. The forecasting results generated are compared using the MAPE, MSE, and RMSE error scores. Conclusion: based on the results, it is observed that no single approach is sufficient for short life cycle forecasting, and two or more approaches must be combined to achieve the desired accuracy. Keywords: forecast, short life cycle product, structured judgement, time series
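The MAPE, MSE, and RMSE scores used above to compare forecasts are simple to reproduce. The sketch below defines them and compares two forecasts for an invented short-life-cycle demand series: one scaled from a hypothetical analogous-product profile, and one naive one-step-behind baseline. All series are made up for illustration.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error (%); assumes no zero actuals."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mse(actual, forecast):
    """Mean squared error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean((actual - forecast) ** 2)

def rmse(actual, forecast):
    """Root mean squared error."""
    return np.sqrt(mse(actual, forecast))

# Illustrative life-cycle demand curve (launch, peak, decline) and two forecasts.
actual     = [120, 340, 560, 610, 480, 300, 150]
profile    = [100, 360, 540, 630, 450, 320, 170]   # scaled from an analogous product
naive_last = [120, 120, 340, 560, 610, 480, 300]   # one-step-behind baseline

for name, fc in [("analogous-profile", profile), ("naive", naive_last)]:
    print(f"{name:18s} MAPE={mape(actual, fc):6.2f}%  "
          f"MSE={mse(actual, fc):8.1f}  RMSE={rmse(actual, fc):6.1f}")
```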
Procedia PDF Downloads 358
3360 Study and Simulation of a Severe Dust Storm over West and Southwest of Iran
Authors: Saeed Farhadypour, Majid Azadi, Habibolla Sayyari, Mahmood Mosavi, Shahram Irani, Aliakbar Bidokhti, Omid Alizadeh Choobari, Ziba Hamidi
Abstract:
In recent decades, the frequency of dust events has increased significantly in the west and southwest of Iran. First, dust events during the period 1990-2013 are surveyed using historical dust data collected at 6 weather stations scattered over the west and southwest of Iran. After statistical analysis of the observational data, one of the most severe dust storm events, which occurred in the region from 3rd to 6th July 2009, is selected and analyzed. The WRF-Chem model is used to simulate the PM10 load and its transport into the affected areas. The initial and lateral boundary conditions for the model are obtained from GFS data with 0.5°×0.5° spatial resolution. In the simulation, two aerosol schemes (GOCART and MADE/SORGAM) with 3 options (chem_opt=106, 300 and 303) are evaluated. The statistical analysis of the historical data shows that the southwest of Iran has a high frequency of dust events: Bushehr station has the highest frequency among the stations and Urmia station the lowest. Over the period 1990 to 2013, the years 2009 and 1998, with 3221 and 100 recorded events respectively, had the highest and lowest numbers of dust events; by month, June and July had the highest frequency of dust events and December the lowest. The model results show that the MADE/SORGAM scheme predicts the values and trends of PM10 better than the other schemes and shows the best performance in comparison with the observations. Finally, the PM10 distributions and surface wind maps obtained from the numerical modeling show dust plumes forming in Iraq and Syria and being transported to the west and southwest of Iran. In addition, comparing the MODIS satellite image acquired on 4th July 2009 with the model output at the same time demonstrates the good ability of WRF-Chem to simulate the spatial distribution of dust. Keywords: dust storm, MADE/SORGAM scheme, PM10, WRF-Chem
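A scheme comparison like the one above is usually summarized with simple verification scores (bias, RMSE, correlation) between simulated and observed PM10. The sketch below illustrates such scoring; all PM10 values and the relative performance of the two schemes are invented for illustration, not results from the study.

```python
import numpy as np

def verification_stats(obs, model):
    """Standard verification scores for simulated vs. observed PM10."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    bias = np.mean(model - obs)                  # mean bias
    rmse = np.sqrt(np.mean((model - obs) ** 2))  # root mean square error
    corr = np.corrcoef(obs, model)[0, 1]         # Pearson correlation
    return bias, rmse, corr

# Illustrative hourly PM10 (ug/m3) at one station during a storm (values invented).
obs_pm10         = [180, 420, 950, 1400, 1100, 600, 300]
made_sorgam_pm10 = [150, 480, 900, 1300, 1200, 550, 280]
gocart_pm10      = [100, 300, 600,  900,  800, 400, 200]

for name, sim in [("MADE/SORGAM", made_sorgam_pm10), ("GOCART", gocart_pm10)]:
    bias, rmse, corr = verification_stats(obs_pm10, sim)
    print(f"{name:12s} bias={bias:+7.1f}  RMSE={rmse:6.1f}  r={corr:.3f}")
```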
Procedia PDF Downloads 271
3359 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness spends a substantial share of its time on data quality processes, while a data project without such awareness suffers in financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ with the purpose of each data project. In particular, a big data project may involve many datasets and stakeholders, so discussing and defining quality expectations and measurements takes a long time. This study therefore aimed at developing meaningful indicators that describe the overall data quality of each dataset, enabling quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that describes the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we define the standard data quality expectations. Second, we identify indicators that can be measured directly from the data within the datasets. Third, the indicators are aggregated into dimensions using factor analysis. Next, the indicators and dimensions are weighted by the effort required in the data preparation process and by usability. Finally, the dimensions are aggregated into the composite indicator. The results of these analyses showed that: (1) ten useful indicators and measurements were developed; (2) based on statistical characteristics, the ten indicators can be reduced to 4 dimensions; and (3) the resulting composite indicator, the SDQI, describes the overall quality of each dataset and separates datasets into 3 levels: good quality, acceptable quality, and poor quality. In conclusion, the SDQI provides an overall, meaningfully composed description of data quality within datasets. The SDQI can be used to assess all the data in a data project, for effort estimation, and for prioritization. It also works well with agile methods, by using the SDQI for assessment in the first sprint; after a dataset passes this initial evaluation, more specific data quality indicators can be added in the next sprint. Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
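The aggregation pipeline described above (indicators, then dimensions, then composite) can be sketched as follows. This sketch substitutes PCA with explained-variance weights for the paper's factor analysis and effort/usability weights, and uses synthetic indicator scores and tercile cut-offs; all of these are stated assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical indicator matrix: rows = datasets, columns = ten quality
# indicators (e.g. completeness, uniqueness, validity, ...). Values are synthetic.
X = rng.uniform(0, 1, size=(500, 10))

# Standardize, then reduce the ten indicators to four dimensions, as in the paper.
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=4)
dims = pca.fit_transform(Z)

# Weight the dimensions (here by explained variance, an assumption; the paper
# weights by preparation effort and usability) and aggregate into one score.
weights = pca.explained_variance_ratio_ / pca.explained_variance_ratio_.sum()
sdqi = dims @ weights

# Split into three quality levels using tercile cut-offs (also an assumption).
good, acceptable = np.quantile(sdqi, [2 / 3, 1 / 3])
levels = np.where(sdqi >= good, "Good",
                  np.where(sdqi >= acceptable, "Acceptable", "Poor"))
print(dict(zip(*np.unique(levels, return_counts=True))))
```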
Procedia PDF Downloads 139
3358 Fuzzy Gauge Capability (Cg and Cgk) through Buckley Approach
Authors: Seyed Habib A. Rahmati, Mohsen Sadegh Amalnick
Abstract:
Different aspects of statistical process control (SPC) have been sketched in the fuzzy environment; however, measurement system analysis (MSA), a main branch of SPC, has rarely been investigated in the fuzzy domain. This procedure assesses the suitability of the data to be used in the later stages or decisions of SPC. This research therefore focuses on some important measures of MSA and introduces them in a fuzzy environment through a new method. In this method, which is based on Buckley's approach, the imprecision and vagueness inherent in real-world measurement are considered simultaneously. To do so, fuzzy versions of the gauge capability indices (Cg and Cgk) are introduced. The method is also clearly explained through an example. Keywords: measurement, SPC, MSA, gauge capability (Cg and Cgk)
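The abstract does not give the authors' formulation, but a Buckley-style computation typically evaluates the crisp index on each alpha-cut of the fuzzy inputs. The sketch below does this for Cg using one common crisp definition, Cg = 0.2 T / (6 s_g), with a triangular fuzzy gauge standard deviation; the formula choice and all numeric values are assumptions for illustration.

```python
def tri_alpha_cut(a, b, c, alpha):
    """Alpha-cut [lower, upper] of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def fuzzy_cg(tolerance, sigma_tri, alphas):
    """
    Buckley-style fuzzy gauge capability: evaluate the crisp formula
    Cg = 0.2 * T / (6 * sigma) on each alpha-cut of the fuzzy sigma.
    Cg decreases in sigma, so the interval endpoints swap.
    """
    cuts = {}
    for alpha in alphas:
        s_lo, s_hi = tri_alpha_cut(*sigma_tri, alpha)
        cuts[alpha] = (0.2 * tolerance / (6 * s_hi),   # lower Cg from upper sigma
                       0.2 * tolerance / (6 * s_lo))   # upper Cg from lower sigma
    return cuts

# Illustrative values: tolerance T = 0.6 mm, fuzzy sigma ~ (0.008, 0.010, 0.013) mm.
for alpha, (lo, hi) in fuzzy_cg(0.6, (0.008, 0.010, 0.013), [0.0, 0.5, 1.0]).items():
    print(f"alpha = {alpha:.1f}: Cg in [{lo:.2f}, {hi:.2f}]")
```

At alpha = 1 the cut collapses to the crisp value (here Cg = 2.0); wider cuts at lower alpha express the measurement vagueness as intervals.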
Procedia PDF Downloads 650
3357 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake
Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou
Abstract:
Lake water quality monitoring, in combination with the use of earth observation products, constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used to explore the potential of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths and chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, which is characterized by virtually nonexistent quantitative, temporal, and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands, in order to develop the algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were then applied to the image of late October 2013, and the results were validated by comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. In the validation, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). The in-situ nitrate, nitrite, phosphate, and total nitrogen concentrations of 2014 were below the detection limit of the instrument used, so no statistical elaboration was conducted for them; multiple linear regression between the reflectance measures and the total phosphorus concentrations, on the other hand, yielded low and statistically insignificant correlations. Our results concur with other studies in the international literature, which indicate that estimates for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration. Keywords: landsat 8, oligotrophic lake, remote sensing, water quality
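The band-combination correlation step might look like the following sketch: regressing in-situ chlorophyll-a on a Landsat 8 green/blue band ratio. The choice of ratio, the reflectances, and the concentrations are all illustrative assumptions, not the study's data or its selected model.

```python
import numpy as np
from scipy import stats

# Hypothetical in-situ chlorophyll-a (ug/L) at sampling stations and matching
# Landsat 8 surface reflectances; a band ratio such as B3/B2 (green/blue)
# is a typical candidate predictor. All values are invented.
chl_a = np.array([0.8, 1.1, 0.9, 1.4, 1.2, 0.7, 1.0, 1.3])
b2 = np.array([0.041, 0.038, 0.040, 0.035, 0.037, 0.043, 0.039, 0.036])  # blue
b3 = np.array([0.052, 0.055, 0.053, 0.058, 0.056, 0.050, 0.054, 0.057])  # green

# Fit a simple linear model of chl-a against the band ratio.
ratio = b3 / b2
res = stats.linregress(ratio, chl_a)
print(f"chl-a ~ {res.slope:.2f} * (B3/B2) + {res.intercept:.2f}, R = {res.rvalue:.2f}")

# The fitted model would then be applied pixel-wise to an earlier image and
# validated against the in-situ data of that date, as described in the abstract.
```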
Procedia PDF Downloads 396