Search results for: interval forecasts
654 Linking Temporal Changes of Climate Factors with Staple Cereal Yields in Southern Burkina Faso
Authors: Pius Borona, Cheikh Mbow, Issa Ouedraogo
Abstract:
In the Sahel, climate variability has been associated with a complex web of direct and indirect impacts. This natural phenomenon has been an impediment to agro-pastoral communities, who face uncertainty while engaging in farming activities, which are also their key source of livelihood. In this context, the role of climate variability in influencing the performance, quantity and quality of staple cereal yields, vital for food and nutrition security, has been a topic of importance. The response of crops and the resulting yield variability is also a subject of immense debate due to the complexity of crop development at different stages. This complexity is further compounded by the influence of slowly changing non-climatic factors. With these challenges in mind, the present paper first explores the occurrence of climate variability at inter-annual and inter-decadal levels in southern Burkina Faso, as evidenced by variation in the total annual rainfall and the number of rainy days, among other climatic descriptors. Further, it is shown how district-scale cereal yields in the study area, including maize, sorghum and millet, associate variably with the inter-annual variation of selected climate variables. Statistical models show that the three cereals are broadly sensitive to the length of the growing period and the total dry days in the growing season. Maize yields, on the other hand, relate strongly to variation in rainfall amount (R²=51.8%), showing high moisture dependence during critical growth stages. Our conclusions emphasize the adoption of efficient water utilization platforms, especially those that have evidently increased yields, and the strengthening of forecast dissemination.
Keywords: climate variability, cereal yields, seasonality, rain fed farming, Burkina Faso, rainfall
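For readers who want to reproduce the kind of yield-rainfall regression behind the R²=51.8% figure, a minimal sketch follows; the rainfall and yield numbers are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical district-level data: total seasonal rainfall (mm) and maize yield (kg/ha)
rainfall = np.array([620.0, 710.0, 540.0, 805.0, 660.0, 590.0, 750.0, 700.0])
yield_kg = np.array([1350.0, 1600.0, 1100.0, 1750.0, 1450.0, 1200.0, 1700.0, 1500.0])

# Ordinary least squares fit: yield = a * rainfall + b
a, b = np.polyfit(rainfall, yield_kg, 1)
predicted = a * rainfall + b

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((yield_kg - predicted) ** 2)
ss_tot = np.sum((yield_kg - yield_kg.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"slope = {a:.2f} kg/ha per mm, intercept = {b:.1f} kg/ha, R^2 = {r2:.3f}")
```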
Procedia PDF Downloads 202
653 Indoor Temperature, Relative Humidity and CO₂ Level Assessment in a Publicly Managed Hospital Building
Authors: Ayesha Asif, Muhammad Zeeshan
Abstract:
The sensitivity of hospital microenvironments to all types of pollutants, due to the presence of patients with immune deficiencies, makes them complex indoor spaces. With this in view, this study investigated the indoor air quality (IAQ) of the two most sensitive places, i.e., the operation theater (OT) and the intensive care unit (ICU), of a publicly managed hospital. Taking CO₂ concentration as the air quality indicator and temperature (T) and relative humidity (RH) as thermal comfort parameters, continuous monitoring of the three variables was carried out. Measurements were recorded at 1-min intervals on weekdays and weekends, including occupational and non-occupational hours. Outdoor T and RH measurements were also used in the analysis. Results show significant variation (p < 0.05) in CO₂, T and RH values over the day during weekdays, while no significant variation (p > 0.05) was observed during weekends at either of the monitored sites. The maximum observed values of CO₂ in the OT and ICU were 2430 and 624 ppm, of T 24.7°C and 28.9°C, and of RH 29.6% and 32.2%, respectively.
Keywords: indoor air quality, CO₂ concentration, hospital building, comfort assessment
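A minimal sketch of the weekday-versus-weekend significance test implied above (Welch's t-test on 1-min CO₂ readings); the readings are synthetic placeholders, not the hospital measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-min CO2 readings (ppm): occupied weekday hours vs. weekend hours
weekday_co2 = rng.normal(loc=900.0, scale=250.0, size=480)  # higher, more variable
weekend_co2 = rng.normal(loc=520.0, scale=60.0, size=480)   # near-baseline

# Welch's t-test (unequal variances) for a difference in mean CO2 level
t_stat, p_value = stats.ttest_ind(weekday_co2, weekend_co2, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
print("significant variation" if p_value < 0.05 else "no significant variation")
```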
Procedia PDF Downloads 132
652 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test
Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston
Abstract:
The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) and a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
Keywords: biomarker, diagnostic, neurology, TBI
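The headline accuracy figures follow directly from the counts reported above; the sketch below recomputes sensitivity, specificity, and NPV with simple normal-approximation confidence intervals (the published CIs may have been computed with a different method, so small differences are expected).

```python
import math

def proportion_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / total
    half = z * math.sqrt(p * (1.0 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Counts reported in the abstract
sens = proportion_ci(116, 120)    # positive TBI interpretation among positive CT scans
spec = proportion_ci(713, 1779)   # negative TBI interpretation among negative CT scans
npv = proportion_ci(713, 717)     # true negatives among all negative test results

for name, (p, lo, hi) in [("sensitivity", sens), ("specificity", spec), ("NPV", npv)]:
    print(f"{name}: {100*p:.1f}% (95% CI {100*lo:.1f}%, {100*hi:.1f}%)")
```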
Procedia PDF Downloads 64
651 Estimation of Stress-Strength Parameter for Burr Type XII Distribution Based on Progressive Type-II Censoring
Authors: A. M. Abd-Elfattah, M. H. Abu-Moussa
Abstract:
In this paper, the estimation of the stress-strength parameter R = P(Y < X) is considered when X and Y, the strength and stress respectively, are two independent random variables following the Burr Type XII distribution. The samples taken for X and Y are progressively Type-II censored. The maximum likelihood estimator (MLE) of R is obtained when the common parameter is unknown. When the common parameter is known, the MLE, the uniformly minimum variance unbiased estimator (UMVUE) and the Bayes estimator of R = P(Y < X) are obtained. The exact confidence interval of R based on the MLE is obtained. The performance of the proposed estimators is compared using computer simulation.
Keywords: Burr Type XII distribution, progressive type-II censoring, stress-strength model, unbiased estimator, maximum-likelihood estimator, uniformly minimum variance unbiased estimator, confidence intervals, Bayes estimator
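For the special case of a known common inner shape parameter c, the stress-strength reliability of the Burr Type XII model has the closed form R = k₂/(k₁+k₂), with X ~ Burr XII(c, k₁) the strength and Y ~ Burr XII(c, k₂) the stress. The sketch below checks this by Monte Carlo with complete samples; it does not reproduce the paper's progressively Type-II censored scheme.

```python
import numpy as np
from scipy.stats import burr12

rng = np.random.default_rng(42)

c = 2.0    # common shape parameter (assumed known)
k1 = 1.5   # strength shape parameter: X ~ Burr XII(c, k1)
k2 = 3.0   # stress shape parameter:   Y ~ Burr XII(c, k2)

# Closed form for a common c: R = P(Y < X) = k2 / (k1 + k2)
r_exact = k2 / (k1 + k2)

# Monte Carlo check with complete (uncensored) samples
n = 200_000
x = burr12.rvs(c, k1, size=n, random_state=rng)  # strength
y = burr12.rvs(c, k2, size=n, random_state=rng)  # stress
r_mc = np.mean(y < x)

print(f"exact R = {r_exact:.4f}, Monte Carlo R = {r_mc:.4f}")
```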
Procedia PDF Downloads 455
650 Optimum Turbomachine Preliminary Selection for Power Regeneration in Vapor Compression Cool Production Plants
Authors: Sayyed Benyamin Alavi, Giovanni Cerri, Leila Chennaoui, Ambra Giovannelli, Stefano Mazzoni
Abstract:
Sustainability concerns about primary energy consumption and emissions of pollutants (including CO₂) call for methodologies that lower the power absorbed per unit of a given product. Cool production plants based on vapour compression are widely used for many applications: air conditioning, food conservation, domestic refrigerators and freezers, special industrial processes, etc. In the field of cool production, the yearly consumed primary energy is enormous; thus, saving even some percentage of it leads to a big worldwide impact on energy consumption and the related energy sustainability. Among the various techniques to reduce the power required by a Vapour Compression Cool Production Plant (VCCPP), the technique based on Power Regeneration by means of an Internal Direct Cycle (IDC) will be considered in this paper. The power produced by the IDC reduces the power needed per unit of cool power produced by the VCCPP. The paper contains the basic concepts that lead to developing IDCs and the proposed options for using the IDC power. Among various turbomachine selections, Best Economically Available Technologies (BEATs) have been explored; based on vehicle engine turbochargers, they have been taken into consideration for this application. According to the BEAT database and similarity rules, the best turbomachine selection leads to the minimum nominal power required by the VCCPP main compressor. Results obtained by installing the prototype in an ad hoc designed test bench will be discussed and compared with the expected performance. Forecasts for upgrading VCCPPs in various applications will be given and discussed: 4-6% savings are expected for air conditioning cooling plants and 15-22% for cryogenic plants.
Keywords: refrigeration plant, vapour pressure amplifier, compressor, expander, turbine, turbomachinery selection, power saving
Procedia PDF Downloads 425
649 Technology Futures in Global Militaries: A Forecasting Method Using Abstraction Hierarchies
Authors: Mark Andrew
Abstract:
Geopolitical tensions are at a thirty-year high, and the pace of technological innovation is driving asymmetry in force capabilities between nation states and between non-state actors. Technology futures are a vital component of defence capability growth, and investments in technology futures need to be informed by accurate and reliable forecasts of the options for ‘systems of systems’ innovation, development, and deployment. This paper describes a method for forecasting technology futures developed through an analysis of four key systems’ development stages, namely: technology domain categorisation, scanning results examining novel systems’ signals and signs, potential system-of-systems implications in warfare theatres, and political ramifications in terms of funding and development priorities. The method has been applied to several technology domains, including physical systems (e.g., nano weapons, loitering munitions, inflight charging, and hypersonic missiles), biological systems (e.g., molecular virus weaponry, genetic engineering, brain-computer interfaces, and trans-human augmentation), and information systems (e.g., sensor technologies supporting situation awareness, cyber-driven social attacks, and goal-specification challenges to proliferation and alliance testing). Although the current application of the method has been team-centred, using paper-based rapid prototyping and iteration, the application of autonomous language models (such as GPT-3) is anticipated as a next-stage operating platform. Forecasting accuracy and reliability are considered vital in guiding technology development to afford stronger contingencies, as ideological changes are forecast to expand threats to ecology and earth systems, possibly eclipsing the traditional vulnerabilities of nation states. The early results from the method will be subjected to ground truthing using longitudinal investigation.
Keywords: forecasting, technology futures, uncertainty, complexity
Procedia PDF Downloads 113
648 The Use of Biofeedback to Increase Resilience and Mental Health of Supersonic Pilots
Authors: G. Kloudova, S. Kozlova, M. Stehlik
Abstract:
Pilots operate in a high-risk environment rich in potential stressors, which negatively affect aviation safety and the mental health of pilots. In the research conducted, the pilots were offered mental training biofeedback therapy. Biofeedback is an objective tool to measure physiological responses to stress. After only six sessions, all of the pilots tested showed significant differences between their initial condition and their condition after therapy. The biggest improvement was found in decreased heart rate (in 83.3% of the tested pilots) and respiration rate (66.7%), which are the best indicators of anxiety states and panic attacks. To incorporate all of the variables, we correlated the measured physiological state of the pilots with their personality traits. Surprisingly, we found a high correlation between peripheral temperature and confidence (0.98) and between heart rate and aggressiveness (0.97). A retest made after a one-year interval showed that in the majority of the subjects tested, the acquired self-regulation ability had been internalized.
Keywords: aviation, biofeedback, mental workload, performance psychology
Procedia PDF Downloads 247
647 Interval Functional Electrical Stimulation Cycling and Nutritional Counseling Improves Lean Mass to Fat Mass Ratio and Decreases Cardiometabolic Disease Risk in Individuals with Spinal Cord Injury
Authors: David Dolbow, Daniel Credeur, Mujtaba Rahimi, Dobrivoje Stokic, Jennifer Lemacks, Andrew Courtner
Abstract:
Introduction: Obesity is at epidemic proportions in the spinal cord injury (SCI) population (66-75%), as individuals who suffer from paralysis undergo a dramatic decrease in muscle mass and a dramatic increase in adipose deposition. Obesity is a major public health concern and includes a doubling of the risk of heart disease, stroke and type II diabetes mellitus. It has been demonstrated that physical activity, and especially HIIT, can promote a healthy body composition and decrease the risk of cardiometabolic disease in the able-bodied population. However, SCI typically limits voluntary exercise to the arms, and the high prevalence of shoulder pain in persons with chronic SCI (60-90%) can make increased arm exercise problematic. Functional electrical stimulation (FES) cycling has proven to be a safe and effective way to exercise paralyzed leg muscles in clinical and home settings, sparing the often overworked arms. Yet, HIIT-FES cycling had not been investigated prior to the current study. The purpose of this study was to investigate the body composition changes with combined HIIT-FES cycling and nutritional counseling in individuals with SCI. Design: A matched (level of injury, time since injury, body mass index) and controlled trial. Setting: University exercise performance laboratory. Subjects: Ten individuals with chronic SCI (C5-T9), ASIA impairment classification (A & B), were divided into the treatment group (n=5), which received 30 minutes of HIIT-FES cycling 3 times per week for 8 weeks plus nutritional counseling over the phone for 30 minutes once per week for 8 weeks, and the control group (n=5), which received nutritional counseling only. Results: There was a statistically significant difference between the HIIT-FES group and the control group in mean body fat percentage change (-1.14% vs. +0.24%, respectively; p = .030). There was also a statistically significant difference between the HIIT-FES and control groups in mean change in leg lean mass (+0.78 kg vs. -1.5 kg, respectively; p = .004). There was a nominal decrease in weight, BMI and total fat mass and a nominal increase in total lean mass for the HIIT-FES group over the control group; however, these changes were not found to be statistically significant. Additionally, there was a nominal decrease in mean blood glucose levels for both groups, 101.8 to 97.8 mg/dl for the HIIT-FES group and 94.6 to 93 mg/dl for the nutrition-only group, but neither was found to be statistically significant. Conclusion: HIIT-FES cycling combined with nutritional counseling can provide healthful body composition changes, including decreased body fat percentage, in just 8 weeks. Future study recommendations include a greater number of participants, a primer electrical stimulation exercise program to better ready participants for HIIT-FES cycling, and a greater volume of training above 30 minutes, 3 times per week for 8 weeks.
Keywords: body composition, functional electrical stimulation cycling, high-intensity interval training, spinal cord injury
Procedia PDF Downloads 115
646 Bubbling in Gas Solids Fluidization at a Strouhal Number Tuned for Low Energy Dissipation
Authors: Chenxi Zhang, Weizhong Qian, Fei Wei
Abstract:
Gas solids multiphase flow is common in many engineering and environmental applications. Turbulence and multiphase flows are two of the most challenging topics in fluid mechanics, and when combined they pose a formidable challenge, even in the dilute dispersed regime. Dimensionless numbers are important in mechanics because their constancy can imply dynamic similarity between systems, despite possible differences in medium or scale. In the fluid mechanics literature, the Strouhal number is usually associated with the dimensionless shedding frequency of a von Karman wake; here we introduce this dimensionless number to investigate bubbling in gas solids fluidization. St = fA/U, which divides stroke frequency (f) and amplitude (A) by forward speed (U). The bubble behavior in a large two-dimensional bubbling fluidized bed (500 mm × 30 mm × 6000 mm) is investigated. Our result indicates that propulsive efficiency is high and energy dissipation is low over a narrow range of St, usually within the interval 0.2 < St < 0.4.
645 Economic Development Impacts of Connected and Automated Vehicles (CAV)
Authors: Rimon Rafiah
Abstract:
This paper will present a combination of two seemingly unrelated models: one for estimating the economic development impacts of transportation investment and another for increasing CAV penetration in order to reduce congestion. Measuring the economic development impacts resulting from transportation investments is becoming more recognized around the world. Examples include the UK’s Wider Economic Benefits (WEB) model, Economic Impact Assessments in the USA, various input-output models, and additional models around the world. The economic impact model is based on WEB and rests on the following premise: investments in transportation will reduce the cost of personal travel, enabling firms to be more competitive, creating additional throughput (the same road allows more people to travel), and reducing the cost of travel of workers to a new workplace. This reduction in travel costs was estimated in out-of-pocket terms in a given localized area and was then translated into additional employment based on regional labor supply elasticity. This additional employment was conservatively assumed to be at minimum wage levels, translated into GDP terms, and from there into direct taxation (i.e., an increase in tax taken by the government). The CAV model is based on economic principles such as CAV usage, supply, and demand. Usage of CAVs can increase capacity by a variety of means: increased automation (Levels I through IV) and also increased penetration and usage, which has been predicted to reach 50% by 2030 according to several forecasts, with possible full conversion by 2045-2050. Several countries have passed policies and/or legislation ending sales of new gasoline-powered vehicles starting in 2030 and later. Supply was measured via increased capacity on given infrastructure as a function of both CAV penetration and implemented technologies. The CAV model, as implemented in the USA, has shown significant savings in travel time and also in vehicle operating costs, which can be translated into economic development impacts in terms of job creation, GDP growth and salaries. The models have policy implications and can be adapted for use in Japan as well.
Keywords: CAV, economic development, WEB, transport economics
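The employment chain described above (travel-cost saving → labor-supply response → GDP → direct taxation) can be illustrated in a few lines; every figure below is a hypothetical placeholder, not a value from the WEB model or this paper.

```python
# Hypothetical inputs for a localized area
workers = 50_000                  # employed persons affected
annual_travel_cost = 2_400.0      # out-of-pocket commuting cost per worker ($/yr)
cost_reduction = 0.10             # 10% travel-cost saving from the investment
labor_supply_elasticity = 0.10    # jobs response per 1% effective wage change
avg_effective_wage = 30_000.0     # $/yr
minimum_wage_gdp = 22_000.0       # conservative GDP contribution per new job ($/yr)
tax_rate = 0.25                   # direct taxation share of the GDP contribution

# Travel-cost saving expressed as an effective wage increase (%)
saving_per_worker = annual_travel_cost * cost_reduction
wage_change_pct = 100.0 * saving_per_worker / avg_effective_wage

# Additional employment from the regional labor supply response
new_jobs = workers * labor_supply_elasticity * wage_change_pct / 100.0

gdp_gain = new_jobs * minimum_wage_gdp
tax_gain = gdp_gain * tax_rate
print(f"new jobs ≈ {new_jobs:.0f}, GDP gain ≈ ${gdp_gain:,.0f}/yr, tax ≈ ${tax_gain:,.0f}/yr")
```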
Procedia PDF Downloads 72
644 Advanced Real-Time Fluorescence Imaging System for Rat's Femoral Vein Thrombosis Monitoring
Authors: Sang Hun Park, Chul Gyu Song
Abstract:
Occlusion changes in the arteries and veins of patients and experimental animals are symptoms that are difficult to explain. As fat accumulates in the cardiovascular system and ruptures, it causes vascular blockage. Early detection of cardiovascular disease can therefore be useful for treatment. In this study, we used the mouse femoral occlusion model to observe arterial and venous occlusion changes without a darkroom. We observed changes in the femoral arterial flow pattern with the proposed fluorescence imaging system using an animal model of thrombosis. We adjusted the near-infrared light source current in order to control the intensity of the fluorescent substance's emission. We obtained clear fluorescent images, and the femoral artery flow pattern was measured at 5-minute intervals. The results showed that the fluorescent substance flowing in the femoral arteries accumulated in the thrombus as time passed, while the fluorescence of other vessels gradually decreased.
Keywords: thrombus, fluorescence, femoral, arteries
Procedia PDF Downloads 342
643 On One New Solving Approach of the Plane Mixed Problem for an Elastic Semistrip
Authors: Natalia D. Vaysfel’d, Zinaida Y. Zhuravlova
Abstract:
A loaded plane elastic semistrip, the lateral boundaries of which are fixed, is considered. The integral transformations are applied directly to Lamé's equations. This leads to a one-dimensional boundary value problem in the transform domain, which is formulated as a vector problem. With the help of the apparatus of matrix differential calculus and the Green's matrix function, the exact solution of the vector problem is constructed. After satisfying the boundary condition at the semistrip's edge, the problem is reduced to the solution of a singular integral equation with regard to the unknown stress at the semistrip's edge. The equation is solved with the orthogonal polynomials method, which takes into consideration the real singularities of the solution at the ends of the integration interval. The normal stress at the edge of the semistrip was calculated and analyzed.
Keywords: semistrip, Green's matrix, Fourier transformation, orthogonal polynomials method
Procedia PDF Downloads 430
642 Preparation and Characterization of Electrospun CdTe Quantum Dots / Nylon-6 Nanofiber Mat
Authors: Negar Mesgara, Laleh Maleknia
Abstract:
In this paper, electrospun CdTe quantum dot / nylon-6 nanofiber mats were successfully prepared. The nanofiber mats were characterized by FE-SEM, XRD and EDX analyses. The results revealed that fibers of distinct sizes (nano and sub-nano scale) were obtained with the chosen electrospinning parameters. The phenomenon of ‘on’ and ‘off’ luminescence intermittency (blinking) of CdTe QDs in nylon-6 was investigated by single-molecule optical microscopy, and we identified that the intermittencies of single QDs were correlated with the interaction of water molecules absorbed on the QD surface. The ‘off’ times, the intervals between adjacent ‘on’ states, remained essentially unaffected by an increase in excitation intensity. In the case of the ‘on’ time distribution, power law behavior with an exponential cutoff tail is observed at longer time scales. These observations indicate that the luminescence blinking statistics of water-soluble single CdTe QDs depend significantly on the aqueous environment, which is interpreted in terms of passivation of the surface trap states of the QDs.
Keywords: electrospinning, CdTe quantum dots, nylon-6, nanocomposite
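‘On’-time distributions of the kind described above are commonly summarized by fitting a truncated power law, P(t) ∝ t⁻ᵐ·exp(−t/τ); a sketch on synthetic binned data (the measured distributions are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def truncated_power_law(t, a, m, tau):
    """P(t) = a * t^(-m) * exp(-t / tau): power law with exponential cutoff."""
    return a * np.power(t, -m) * np.exp(-t / tau)

# Synthetic binned 'on'-time probability density (seconds), for illustration only
t = np.logspace(-2, 1, 25)
rng = np.random.default_rng(1)
p = truncated_power_law(t, 0.05, 1.6, 2.5) * rng.normal(1.0, 0.05, t.size)

popt, _ = curve_fit(truncated_power_law, t, p, p0=(0.1, 1.5, 1.0))
a, m, tau = popt
print(f"exponent m = {m:.2f}, cutoff tau = {tau:.2f} s")
```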
Procedia PDF Downloads 433
641 Industrial Assessment of the Exposed Rocks on Peris Anticline Kurdistan Region of Iraq for Cement Industry
Authors: Faroojan Khajeek Sisak Siakian, Aayda Dikran Abdulahad
Abstract:
Peris Mountain is one of the main mountains in the Iraqi Kurdistan Region; it forms one of the long anticlines trending almost east-west. The formations exposed on the top of the mountain are Bekhme and Shiranish, with carbonate rocks of different types and thicknesses. We selected a sampling site suitable for a quarry, taking into consideration the thickness of the exposed rocks, the absence of overburden, favorable quarrying faces, the hardness of the rocks, the bedding nature, the good extension of the outcrops, and a favorable location for construction of a cement plant. We sampled the exposed rocks on the top of the mountain where a road crosses it, and a total of 15 samples were collected. The distance between sampling intervals was 5 m, and each sample was collected to represent its sampling interval. The samples were subjected to X-ray fluorescence spectroscopy (XRF) to determine the percentages of the main oxides in each sample. The acquired results showed that the studied rocks can be used in the cement industry.
Keywords: limestone, quarry, CaO, MgO, overburden
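Suitability for cement manufacture is conventionally screened with standard oxide ratios computed from XRF results, such as the lime saturation factor and the silica and alumina moduli; the sketch below evaluates them for one hypothetical sample (the oxide percentages are illustrative, not the study's measurements):

```python
# Hypothetical XRF oxide percentages for one limestone sample
oxides = {"CaO": 52.0, "SiO2": 3.5, "Al2O3": 1.0, "Fe2O3": 0.6, "MgO": 1.2}

# Standard cement-chemistry indices
lsf = oxides["CaO"] / (2.8 * oxides["SiO2"] + 1.2 * oxides["Al2O3"] + 0.65 * oxides["Fe2O3"])
silica_modulus = oxides["SiO2"] / (oxides["Al2O3"] + oxides["Fe2O3"])
alumina_modulus = oxides["Al2O3"] / oxides["Fe2O3"]

print(f"lime saturation factor = {lsf:.2f} (raw limestone, before blending)")
print(f"silica modulus = {silica_modulus:.2f}, alumina modulus = {alumina_modulus:.2f}")
print(f"MgO = {oxides['MgO']:.1f}% (typically kept below ~5% in clinker)")
```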
Procedia PDF Downloads 87
640 A New Concept for Deriving the Expected Value of Fuzzy Random Variables
Authors: Liang-Hsuan Chen, Chia-Jung Chang
Abstract:
Fuzzy random variables (FRVs) have been introduced as an imprecise concept of numeric values for characterizing imprecise knowledge. Descriptive parameters can be used to describe the primary features of a set of fuzzy random observations. In fuzzy environments, expected values are usually represented as fuzzy-valued, interval-valued or numeric-valued descriptive parameters using various metrics. Instead of the concept of an area metric, which is usually adopted in the relevant studies, a numeric expected value is proposed in this study based on the concept of a distance metric, reflecting the two characters (fuzziness and randomness) of FRVs. Compared with existing measures, the results show that the proposed numeric expected value is the same as those obtained using the area metric when only triangular membership functions are used; however, the proposed approach has the advantages of intuitiveness and computational efficiency when the membership functions are not of triangular type. An example with three datasets is provided to verify the proposed approach.
Keywords: fuzzy random variables, distance measure, expected value, descriptive parameters
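As a generic illustration of a numeric expected value for a sample of fuzzy random observations (not the paper's particular distance metric), one can average a per-observation numeric summary such as the centroid of each triangular fuzzy number (a, b, c):

```python
import numpy as np

def centroid_triangular(a: float, b: float, c: float) -> float:
    """Centroid (a common numeric defuzzification) of a triangular fuzzy number."""
    return (a + b + c) / 3.0

# A sample of fuzzy random observations, each a triangular fuzzy number (a, b, c)
sample = [(1.0, 2.0, 4.0), (2.5, 3.0, 3.5), (0.5, 1.5, 2.0), (3.0, 4.0, 6.0)]

# Numeric expected value: average of the per-observation numeric summaries
expected_value = np.mean([centroid_triangular(*obs) for obs in sample])
print(f"numeric expected value = {expected_value:.3f}")
```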
Procedia PDF Downloads 342
639 Fractal Analysis of Polyacrylamide-Graphene Oxide Composite Gels
Authors: Gülşen Akın Evingür, Önder Pekcan
Abstract:
Fractal analysis is a bridge between the microstructure and the macroscopic properties of gels. Fractal structure is usually invoked to describe the complexity of crosslinked molecules, and the complexity of gel systems is described by the fractal dimension (Df). In this study, polyacrylamide-graphene oxide (GO) composite gels were prepared by free radical crosslinking copolymerization. The fractal structure of the composite gels at various GO contents was analyzed during gelation using the fluorescence technique, and the analysis was applied to estimate the Df values of the composite gels. The fractal dimensions of the polymer composite gels were estimated from the power law exponent values using scaling models. In addition, we aimed to present the geometrical distribution of GO during gelation: we observed that as gelation proceeded, the GO plates first organized themselves into a 3D percolation cluster with Df = 2.52, then passed into diffusion-limited clusters with Df = 1.4, and finally lined up into a von Koch curve with random interval with Df = 1.14. Our goal here is to interpret the low conductivity and/or broad forbidden gap of GO-doped PAAm gels in terms of the distribution of GO in the final form of the produced gel.
Keywords: composite gels, fluorescence, fractal, scaling
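Fractal dimensions such as those quoted above are typically extracted as the slope of a log-log scaling relation, N(r) ~ r^Df; a minimal sketch on synthetic scaling data:

```python
import numpy as np

# Synthetic scaling data: N(r) ~ r^Df with Df = 2.52 plus small noise (illustrative)
rng = np.random.default_rng(7)
r = np.logspace(0, 2, 12)   # length scales
true_df = 2.52
n = r ** true_df * rng.normal(1.0, 0.03, r.size)

# Fractal dimension = slope of log N(r) versus log r
df_est, _ = np.polyfit(np.log(r), np.log(n), 1)
print(f"estimated Df = {df_est:.2f} (3D percolation cluster value quoted: 2.52)")
```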
Procedia PDF Downloads 306
638 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience
Authors: Nkwenti Mbelli Njah
Abstract:
This research was motivated by a project requested by AXA on the topic of pensions payable under the workers' compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when there is less than 30% disability and the pension amount per year is less than six times the minimal national salary. The law defines that the mathematical provisions for compulsorily recoverable pensions must be calculated by applying the following bases: mortality table TD88/90 and an interest rate of 5.25% (possibly with a management rate). Managing pensions which are not compulsorily recoverable is a more complex task because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the discounted amount of payments, reflecting the mortality effect, for all pensioners (a task monitored monthly at AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of the workers' compensation pensioners of both the Portuguese market and the AXA portfolio. Not only is past mortality modeled; projections of future mortality are also made for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model was split into two parts: a stochastic model for population mortality which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox proportional, Brass linear and Workgroup PLT). The one-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute different life table functions as well as the not compulsorily recoverable reserves for each of the models, for the pensioners, their spouses and children under 21. The results obtained are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) currently used by AXA, to see the impact on this reserve if AXA adopted the dynamic tables.
Keywords: compulsorily recoverable, life table functions, relational models, workers' compensation pensioners
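Of the three relational models named, the Brass linear (logit) model is the simplest to illustrate: the logits of the portfolio survival curve are regressed linearly on those of a standard table. A sketch with made-up survival values (the fitted alpha and beta then map any standard l(x) to a portfolio estimate):

```python
import numpy as np

def brass_logit(lx: np.ndarray) -> np.ndarray:
    """Brass logit of a survival curve l(x), with l(x) in (0, 1)."""
    return 0.5 * np.log((1.0 - lx) / lx)

# Hypothetical survival proportions at ages 40..80 (standard table vs. portfolio)
ages = np.arange(40, 85, 5)
lx_standard = np.array([0.95, 0.93, 0.90, 0.86, 0.80, 0.71, 0.58, 0.42, 0.25])
lx_portfolio = np.array([0.96, 0.95, 0.92, 0.89, 0.84, 0.76, 0.64, 0.48, 0.30])

# Fit Y_portfolio = alpha + beta * Y_standard by least squares
beta, alpha = np.polyfit(brass_logit(lx_standard), brass_logit(lx_portfolio), 1)
print(f"alpha = {alpha:.3f} (level shift), beta = {beta:.3f} (slope/shape)")

# Inverse logit maps the fitted relation back to a portfolio survival curve
lx_fitted = 1.0 / (1.0 + np.exp(2.0 * (alpha + beta * brass_logit(lx_standard))))
```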
Procedia PDF Downloads 163
637 Thermal Degradation Kinetics of Field-Dried and Pelletized Switchgrass
Authors: Karen E. Supan
Abstract:
Thermal degradation kinetics of switchgrass (Panicum virgatum) from the field, as well as in pellet form, are presented. Thermogravimetric analysis tests were performed at heating rates of 10-40 K min⁻¹ in an inert atmosphere. The activation energy and the pre-exponential factor were calculated using the Ozawa/Flynn/Wall method, as suggested by the ASTM Standard Test Method for Decomposition Kinetics by Thermogravimetry. Four stages were seen in the degradation: dehydration, active pyrolysis of hemicellulose, active pyrolysis of cellulose, and passive pyrolysis. The derivative mass loss peak for active pyrolysis of cellulose in the field-dried sample was much higher than in the pelletized sample. The range of activation energy over the 0.15-0.70 conversion interval was 191-242 kJ mol⁻¹ for the field-dried sample and 130-192 kJ mol⁻¹ for the pellets. The highest activation energies were reached at a conversion of 0.50 and were 242 kJ mol⁻¹ and 192 kJ mol⁻¹ for the field-dried sample and the pellets, respectively. The thermal degradation behavior and activation energies were comparable to switchgrass and other biomass reported in the literature.
Keywords: biomass, switchgrass, thermal degradation, thermogravimetric analysis
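In the Ozawa/Flynn/Wall method, the logarithm of the heating rate at a fixed conversion is regressed on 1/T, and the activation energy follows from the slope through Doyle's approximation, ln β ≈ const − 1.052·Ea/(R·T); a sketch with illustrative temperatures (not the measured data):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Heating rates (K/min) and hypothetical temperatures (K) at a fixed conversion of 0.50
beta = np.array([10.0, 20.0, 30.0, 40.0])
T = np.array([600.0, 612.0, 620.0, 626.0])  # illustrative values only

# OFW / Doyle: ln(beta) = const - 1.052 * Ea / (R * T)
slope, _ = np.polyfit(1.0 / T, np.log(beta), 1)
Ea = -slope * R / 1.052  # J/mol
print(f"activation energy at conversion 0.50: {Ea / 1000:.0f} kJ/mol")
```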
Procedia PDF Downloads 114
636 Extractive Desulfurization of Atmospheric Gasoil with N,N-Dimethylformamide
Authors: Kahina Bedda, Boudjema Hamada
Abstract:
Environmental regulations have been introduced in many countries around the world to reduce the sulfur content of diesel fuel to ultra-low levels, with the intention of lowering diesel engines' harmful exhaust emissions and improving air quality. Removal of sulfur-containing compounds from diesel feedstocks to produce ultra-low-sulfur diesel fuel by extraction with selective solvents has received increasing attention in recent years. This is because the sulfur extraction technologies, compared with hydrotreating processes, could reduce the cost of desulfurization substantially, since they do not demand hydrogen and are carried out at atmospheric pressure. In this work, the desulfurization of distillate gasoil by liquid-liquid extraction with N,N-dimethylformamide was investigated. This fraction was recovered from a mixture of Hassi Messaoud crude oils and Hassi R'Mel gas condensate in the Algiers refinery; its sulfur content is 281 ppm. Experiments were performed in six stages with a solvent:feed ratio of 3:1. The effect of the extraction temperature was investigated over the interval 30-110°C. At 110°C, the yield of refined gas oil was 82% and its sulfur content was 69 ppm.
Keywords: desulfurization, gasoil, N,N-dimethylformamide, sulfur content
Procedia PDF Downloads 384
635 Decomposing the Socio-Economic Inequalities in Utilization of Antenatal Care in South Asian Countries: Insight from Demographic and Health Survey
Authors: Jeetendra Yadav, Geetha Menon, Anita Pal, Rajkumar Verma
Abstract:
Even with encouraging maternal and child wellness programs at the worldwide level, lower-middle-income nations have not yet reached the goals set by the UN. This study quantified the contribution of socioeconomic determinants of inequality to the utilization of antenatal care in South Asian countries. Data from the Demographic and Health Surveys (DHS) of the selected countries were used, and Oaxaca decomposition was applied to the socioeconomic inequalities in the utilization of antenatal care. Findings from the multivariate analysis show that the mother's age at the time of birth, birth order and interval, mother's education, mass media exposure and economic status were significant determinants of the utilization of antenatal care services in South Asian countries. Considering the concentration index curve, the deviation from the line of equity was greatest in Pakistan, followed by India and Nepal.
Keywords: antenatal care, decomposition, inequalities, South Asian countries
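The concentration index behind the curve comparison is commonly computed with the "convenient covariance" formula, CI = 2·cov(h, r)/mean(h), where h is utilization and r the fractional socioeconomic rank; a sketch on hypothetical data:

```python
import numpy as np

def concentration_index(outcome: np.ndarray, ses_score: np.ndarray) -> float:
    """CI = 2 * cov(h, r) / mean(h), with r the fractional rank by socioeconomic status."""
    order = np.argsort(ses_score)             # poorest first
    h = outcome[order].astype(float)
    n = h.size
    r = (np.arange(1, n + 1) - 0.5) / n       # fractional rank in (0, 1)
    return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

rng = np.random.default_rng(3)
wealth = rng.uniform(0, 1, 1000)                          # hypothetical SES score
anc_use = rng.uniform(0, 1, 1000) < 0.3 + 0.5 * wealth    # pro-rich utilization

ci = concentration_index(anc_use.astype(float), wealth)
print(f"concentration index = {ci:.3f} (positive => concentrated among the better-off)")
```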
Procedia PDF Downloads 180
634 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
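A minimal sketch of the averaging idea, using the three algorithm families named above with soft-vote probability averaging in scikit-learn; the synthetic data stand in for the timing, weather, and pollutant features, and the framework's exact specifications, weighting, and self-adjustment are not reproduced:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for timing, weather-forecast, and past-pollutant features
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Three model families; the combined model averages their predicted probabilities
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    ],
    voting="soft",  # average class probabilities across models
)
ensemble.fit(X_tr, y_tr)
print(f"combined-model accuracy: {ensemble.score(X_te, y_te):.3f}")
```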
Procedia PDF Downloads 124
633 Study on Sharp V-Notch Problem under Dynamic Loading Condition Using Symplectic Analytical Singular Element
Authors: Xiaofei Hu, Zhiyu Cai, Weian Yao
Abstract:
The V-notch problem under dynamic loading conditions is considered in this paper. In the time domain, the precise time-domain expanding algorithm is employed, in which a self-adaptive technique is carried out to improve computing accuracy. By expanding the variables in each time interval, recursive finite element formulas are derived. In the space domain, a Symplectic Analytical Singular Element (SASE) for the V-notch problem is constructed to address the stress singularity at the notch tip. Combined with conventional finite elements, the proposed SASE can be used to solve the dynamic stress intensity factors (DSIFs) in a simple way. Numerical results show that the proposed SASE for the V-notch problem subjected to dynamic loading conditions is effective and efficient.
Keywords: V-notch, dynamic stress intensity factor, finite element method, precise time domain expanding algorithm
Procedia PDF Downloads 171
632 Estimating Precipitable Water Vapour Using the Global Positioning System and Radio Occultation over Ethiopian Regions
Authors: Asmamaw Yehun, Tsegaye Gogie, Martin Vermeer, Addisu Hunegnaw
Abstract:
The Global Positioning System (GPS) is a space-based radio positioning system capable of providing continuous position, velocity, and time information to users anywhere on or near the surface of the Earth. The main objective of this work was to estimate the integrated precipitable water vapour (IPWV) using ground-based GPS and Low Earth Orbit (LEO) radio occultation (RO) in order to study its spatial-temporal variability. For LEO GPS RO, we used Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) datasets. We estimated the daily and monthly mean IPWV using six selected ground-based GPS stations over the period from 2012 to 2016 (a five-year period). This range was selected because continuous data were available at all Ethiopian GPS stations during these years. We studied the temporal, seasonal, diurnal, and vertical variations of precipitable water vapour using GPS observables processed with the precise geodetic GAMIT-GLOBK software package. Finally, we determined the cross-correlation of our GPS-derived IPWV values with those of the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-40 Interim reanalysis and of the second-generation National Oceanic and Atmospheric Administration (NOAA) Global Ensemble Forecast System Reforecast (GEFS/R) for validation and statistical comparison. Higher IPWV values, ranging from 30 to 37.5 millimetres (mm), occur in the Gambela and Southern regions of Ethiopia. Some parts of the Tigray, Amhara, and Oromia regions had low IPWV, ranging from 8.62 to 15.27 mm. The correlation coefficient between the GPS-derived IPWV and both ECMWF and GEFS/R exceeds 90%. We conclude that there are strong temporal, seasonal, diurnal, and vertical variations of precipitable water vapour in the study area.
Keywords: GNSS, radio occultation, atmosphere, precipitable water vapour
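In GPS meteorology, IPWV is conventionally obtained from the zenith wet delay (ZWD) through a conversion factor that depends on the weighted mean atmospheric temperature Tm; a sketch of that standard (Bevis et al.) conversion with illustrative values:

```python
# Conversion of GPS zenith wet delay (ZWD) to integrated precipitable water vapour,
# following the widely used Bevis et al. formulation. Input values are illustrative.
RHO_W = 1000.0      # density of liquid water, kg m^-3
R_V = 461.5         # specific gas constant of water vapour, J kg^-1 K^-1
K2_PRIME = 0.221    # refractivity constant k2', K Pa^-1
K3 = 3739.0         # refractivity constant k3, K^2 Pa^-1

def pwv_from_zwd(zwd_m: float, tm_kelvin: float) -> float:
    """Return precipitable water vapour (m) from zenith wet delay (m)."""
    pi_factor = 1.0e6 / (RHO_W * R_V * (K2_PRIME + K3 / tm_kelvin))
    return pi_factor * zwd_m

# Example: ZWD of 0.20 m with a weighted mean temperature Tm of 270 K
zwd = 0.20
pwv_mm = 1000.0 * pwv_from_zwd(zwd, 270.0)
print(f"IPWV ≈ {pwv_mm:.1f} mm (conversion factor ≈ 0.154)")
```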
Procedia PDF Downloads 84
631 Software Verification of Systematic Resampling for Optimization of Particle Filters
Authors: Osiris Terry, Kenneth Hopkinson, Laura Humphrey
Abstract:
Systematic resampling is the most popularly used resampling method in particle filters. This paper seeks to further the understanding of systematic resampling by defining a formula made up of variables from the sampling equation and the particle weights. The formula is then verified via SPARK, a software verification language. The verified systematic resampling formula states that the minimum/maximum number of possible samples taken of a particle is equal to the floor/ceiling value of the particle weight divided by the sampling interval, respectively. This allows for the creation of a randomness spectrum within which each resampling method falls. Methods on the lower end, e.g., systematic resampling, have less randomness and thus are quicker to reach an estimate. Although lower randomness allows for error by introducing a larger bias towards the size of the weight, this bias also creates vulnerabilities to noise in the environment, e.g., jamming. In conclusion, this is a first step in characterizing each resampling method, which will allow target-tracking engineers to pick the best resampling method for their environment instead of choosing the most popularly used one.
Keywords: SPARK, software verification, resampling, systematic resampling, particle filter, tracking
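A reference implementation makes the verified property concrete: with sampling interval 1/N, the number of copies drawn of particle i is always floor(wᵢN) or ceil(wᵢN). A sketch in Python rather than the SPARK/Ada setting of the paper:

```python
import numpy as np

def systematic_resample(weights: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Systematic resampling: N evenly spaced pointers with one shared random offset."""
    n = weights.size
    positions = (rng.uniform() + np.arange(n)) / n   # sampling interval is 1/n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                             # guard against rounding error
    return np.searchsorted(cumulative, positions)    # indices of sampled particles

rng = np.random.default_rng(0)
w = np.array([0.05, 0.35, 0.10, 0.50])
idx = systematic_resample(w, rng)
counts = np.bincount(idx, minlength=w.size)

# Verified property: floor(w_i * N) <= count_i <= ceil(w_i * N)
n = w.size
assert np.all(counts >= np.floor(w * n)) and np.all(counts <= np.ceil(w * n))
print(f"counts = {counts}, floor = {np.floor(w*n).astype(int)}, ceil = {np.ceil(w*n).astype(int)}")
```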
Procedia PDF Downloads 82
630 Standard and Processing of Photodegradable Polyethylene
Authors: Nurul-Akidah M. Yusak, Rahmah Mohamed, Noor Zuhaira Abd Aziz
Abstract:
The introduction of degradable plastic materials into the agricultural sector represents a promising alternative to promote green agriculture and environmentally friendly modern farming practices. Major challenges in developing degradable agricultural films are identifying the most feasible types of degradation mechanisms, the composition of the degradable polymers, and the related processing techniques. An incorrect choice of degradation mechanism to be applied during the degradation process will cause premature loss of mechanical performance and strength. In order to achieve a controlled process of agricultural film degradation, the composition of the degradable agricultural film is also important, in order to stimulate the degradation reaction at the required interval of time and to achieve sustainability of modern agricultural practices. A set of photodegradable polyethylene-based agricultural films was developed and produced, following the selective optimization of the processing parameters of the agricultural film manufacturing system. An example of agricultural film application for oil palm seedling cultivation is presented.
Keywords: photodegradable polyethylene, plasticulture, processing schemes
Procedia PDF Downloads 516
629 Effect of Recreational Soccer on Health Indices and Diseases Prevention
Authors: Avinash Kharel
Abstract:
Recreational soccer (RS), as a medium of small-sided soccer games (SSG), has an immense positive effect on physical health, mental health and wellbeing. RS has shown both acute responses and long-term training effects in sedentary, trained and clinical populations of any age, gender or health status. The enjoyable mode of training elicits greater adherence by optimising intrinsic motivation while offering health benefits that match those achieved by treadmill and cycle ergometer programmes, in both continuous and interval forms of training. Additionally, recreational soccer is an effective and efficient regimen whose highlighted social, motivational and competitive components overcome barriers such as cost, time, access to facilities and intrinsic motivation. Further, it can be applied as an effective broad-spectrum non-pharmacological treatment for lifestyle diseases, producing a positive physiological response in healthy subjects, patients and elderly people regardless of age, gender or training experience.
Keywords: recreational soccer, health benefits, diseases prevention, physiology
Procedia PDF Downloads 87
628 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems
Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo
Abstract:
Solar energy is one of the renewable options available to reduce the CO₂ emissions produced by conventional power plants in modern society. As an island frequently visited by strong typhoons and earthquakes, it is an urgent issue for Taiwan to revise the local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for the wind-resistant design of structures does not give a clear explanation for photovoltaic systems, especially when the systems are arranged in an arrayed format. Furthermore, when an arrayed photovoltaic system is mounted on a rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is mounted on the ground of the wind tunnel and then mounted on the building rooftop. The system consists of 60 panel models. Each panel model is equivalent to a full size of 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of each panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° at 10° intervals to capture the worst case with respect to wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressures, and at least 20 samples are recorded for good ensemble average stability. Each sample is equivalent to a 10-minute record in full scale. All the scale factors, including the time scale, length scale, and velocity scale, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The purpose of the L-shaped arrayed system is to understand the pressure characteristics at the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. The commonly utilized Cook-and-Mayne coefficient, 78%, is set as the target non-exceedance probability for the design pressure coefficients under the Gumbel distribution. The best linear unbiased estimator method is utilized for the Gumbel parameter identification, and careful time moving averaging is applied in data processing. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels experiences stronger positive pressure than when mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop is mostly under negative pressure; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also have different pressure patterns, which corresponds well to the provisions in ASCE 7-16 describing the area division for design values. Several minor observations are made from the parametric studies, such as the rooftop edge effect, parapet effect, building aspect effect, row interval effect, and so on. General comments are then made for the proposal of a regulation revision in the Taiwanese code.
Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic
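The final step of this analysis, turning a set of 10-minute peak pressure coefficients into a design value at the Cook-and-Mayne 78% non-exceedance level, is easy to sketch. Here the Gumbel parameters are fitted by the method of moments rather than the BLUE procedure used in the study, and the peak samples are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for 20 ten-minute peak suction magnitudes |Cp|
peaks = rng.gumbel(loc=2.0, scale=0.3, size=20)

# Method-of-moments Gumbel fit (the study uses the BLUE method instead)
beta = peaks.std(ddof=1) * np.sqrt(6.0) / np.pi
mu = peaks.mean() - 0.5772 * beta

# Cook-and-Mayne design value: 78% non-exceedance quantile of the fitted Gumbel
p = 0.78
cp_design = mu - beta * np.log(-np.log(p))
print(f"mu = {mu:.3f}, beta = {beta:.3f}, design |Cp| (78%) = {cp_design:.3f}")
```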
Procedia PDF Downloads 137
627 Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error
Authors: R. Sabre, W. Horrigue, J. C. Simon
Abstract:
This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe a process over a whole time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, reducing the additive error. In order to overcome the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times, so as to confine the estimator to the band where the spectral density is non-zero. We show that the proposed estimator is asymptotically unbiased and consistent. We thus obtain an estimate that resolves the two difficulties concerning the choice of observation instants of a continuous-time process and observations affected by a constant error.
Keywords: spectral density, stable processes, aliasing, periodogram
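A simplified sketch of periodogram smoothing with a Jackson-type kernel on a regularly sampled synthetic signal; the paper's estimator additionally handles alpha-stable signals, the additive error, and the choice of observation times, none of which is reproduced here:

```python
import numpy as np

def jackson_weights(m: int, half_width: int) -> np.ndarray:
    """Normalized Jackson-kernel weights (sin(m*u/2)/sin(u/2))^4 on a discrete grid."""
    u = np.linspace(-np.pi / half_width, np.pi / half_width, 2 * half_width + 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        w = (np.sin(m * u / 2.0) / np.sin(u / 2.0)) ** 4
    w[np.isnan(w)] = float(m) ** 4   # limiting value at u = 0
    return w / w.sum()

# Synthetic discretely observed signal with an additive error component
rng = np.random.default_rng(2)
n = 2048
t = np.arange(n)
signal = np.sin(0.3 * t) + 0.5 * rng.standard_normal(n)

# Raw periodogram, then spectral smoothing by convolution with the kernel weights
freqs = np.fft.rfftfreq(n, d=1.0)
periodogram = np.abs(np.fft.rfft(signal)) ** 2 / n
smoothed = np.convolve(periodogram, jackson_weights(m=4, half_width=16), mode="same")
print(f"peak near f = {freqs[np.argmax(smoothed)]:.4f} cycles/sample (true ≈ 0.0477)")
```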
Procedia PDF Downloads 136
626 Data Analytics of Electronic Medical Records Shows Age-Related Differences in Diagnosis of Coronary Artery Disease
Authors: Maryam Panahiazar, Andrew M. Bishara, Yorick Chern, Roohallah Alizadehsani, Dexter Hadleye, Ramin E. Beygui
Abstract:
Early detection plays a crucial role in enhancing the outcome for a patient with coronary artery disease (CAD). We utilized a big data analytics platform on ~23,000 patients with CAD from a total of 960,129 UCSF patients over 8 years. We traced the patients from their first encounter with a physician through the diagnosis and treatment of CAD. Characteristics such as demographic information, comorbidities, vitals, lab tests, medications, and procedures are included. There are statistically significant gender-based differences in patients younger than 60 years old in the time from the first physician encounter to coronary artery bypass grafting (CABG), with a p-value of 0.03. There are no significant differences for patients between 60 and 80 years old (p-value = 0.8) or older than 80 (p-value = 0.4), at a 95% confidence level. This recognition would support significant changes to the guidelines for referring patients for diagnostic tests expeditiously, improving outcomes by avoiding delays in treatment.
Keywords: electronic medical records, coronary artery disease, data analytics, young women
Procedia PDF Downloads 147
625 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction
Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé
Abstract:
One of the main applications of machine learning is the prediction of time series, but a more accurate prediction requires a more optimal machine learning model. Several optimization techniques have been developed, but without considering the disposition of the system's input variables. This work therefore aims to present a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validations are done on the prediction of wind time series, using data collected in Cameroon. The number of possible dispositions with four input variables is twenty-four, and each of the dispositions is used to perform the prediction, the main criteria being the training and prediction performances. The results obtained from a static architecture and a dynamic architecture of neural networks show that these performances are a function of the input variable disposition, and in a way that differs between architectures. This analysis revealed that it is necessary to take the input variable disposition into account when developing a more optimal neural network model. Thus, a new neural network training algorithm is proposed by introducing the search for the optimal input variable disposition into the traditional back-propagation algorithm, as sketched below. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the results obtained previously. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of the training and prediction performance of time series, showing that the proposed optimization approach can be useful in improving the accuracy of time series prediction based on machine learning.
Keywords: input variable disposition, machine learning, optimization, performance, time series prediction
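The core idea, treating the ordering (disposition) of the input variables as a searchable hyperparameter, can be sketched as follows; with four inputs there are 4! = 24 dispositions, as noted above. The tiny MLP and synthetic series are placeholders for the paper's wind data and architectures (for a plain MLP the ordering only matters through the fixed random initialization, which is the practical effect being exploited):

```python
from itertools import permutations

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a wind time series with four lagged input variables
n = 600
X = rng.standard_normal((n, 4))
y = 0.8 * X[:, 0] - 0.4 * X[:, 1] + 0.2 * X[:, 2] * X[:, 3] + 0.1 * rng.standard_normal(n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Evaluate all 4! = 24 input-variable dispositions and keep the best-scoring one
best_score, best_order = -np.inf, None
for order in permutations(range(4)):
    cols = list(order)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    model.fit(X_tr[:, cols], y_tr)
    score = model.score(X_te[:, cols], y_te)   # R^2 as the prediction criterion
    if score > best_score:
        best_score, best_order = score, order

print(f"best disposition: {best_order}, R^2 = {best_score:.3f}")
```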
Procedia PDF Downloads 109