Search results for: regression coefficients
3090 Operation Cycle Model of ASz62IR Radial Aircraft Engine
Authors: M. Duk, L. Grabowski, P. Magryta
Abstract:
Environmental impact is a very important issue in air transport today. There are currently no emissions standards for the turbine and piston engines used in air transport; nevertheless, the environmental effect of exhaust gases from aircraft engines should be kept as small as possible. For this purpose, R&D centres often use special software to simulate and estimate the negative effects of the engine working process. Within the cooperation between the Lublin University of Technology and the Polish aviation company WSK "PZL-KALISZ" S.A., aimed at more effective operation of the ASz62IR engine, one such tool has been used. The AVL Boost software performs 1D simulations of the combustion process of piston engines. The ASz62IR is a nine-cylinder aircraft engine in a radial configuration. In order to analyze the impact of its working process on the environment, a mathematical model was built in the AVL Boost software. This model contains, among others, a model of the operation cycle of the cylinders, based on the change in combustion chamber volume produced by the reciprocating motion of the piston. The simplifying assumption that all pistons move identically was adopted. The changes in cylinder volume during an operating cycle were specified; these changes are needed to determine the energy balance of a cylinder in an internal combustion engine, which is fundamental to a model of the operating cycle. The calculation of the cylinder thermodynamic state was based on the first law of thermodynamics. The change of mass in the cylinder was calculated from the sum of inflowing and outflowing masses, taking into account the cylinder internal energy, heat from the fuel, heat losses, mass in the cylinder, cylinder pressure and volume, blowdown enthalpy, evaporation heat, etc. The model assumed that the amount of heat released in the combustion process was calculated from the rate of combustion, using the Vibe model.
For gas exchange, it was also important to consider heat transfer in the inlet and outlet channels, where values are much higher than for flow in a straight pipe. This results from the high values of the heat exchange coefficients and temperature coefficients near the valves and valve seats. A modified Zapf model of heat exchange was used. To use the model with flight scenarios, the impact of flight altitude on engine performance was analyzed. It was assumed that the pressure and temperature at the inlet and outlet correspond to the values given by the International Standard Atmosphere (ISA) model. Combining this operation cycle model with the other submodels of the ASz62IR engine, a full analysis of engine performance under ISA conditions can be made. This work has been financed by the Polish National Centre for Research and Development under the INNOLOT programme.
Keywords: aviation propulsion, AVL Boost, engine model, operation cycle, aircraft engine
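The Vibe (also spelled Wiebe) combustion model mentioned in the abstract has a standard closed form for the mass fraction burned. The sketch below illustrates it with typical textbook parameter values (`a = 6.9` and `m = 2` are common defaults, not values taken from this paper), and the crank-angle window is invented for illustration.

```python
import numpy as np

def vibe_burn_fraction(theta, theta0, dtheta, a=6.9, m=2.0):
    """Vibe (Wiebe) mass fraction burned: cumulative fraction of fuel
    burned at crank angle theta, for combustion starting at theta0
    and lasting dtheta crank-angle degrees."""
    tau = np.clip((theta - theta0) / dtheta, 0.0, 1.0)
    return 1.0 - np.exp(-a * tau ** (m + 1))

# Burn curve over one combustion event (angles in degrees, illustrative)
crank = np.linspace(-30.0, 90.0, 121)
x_b = vibe_burn_fraction(crank, theta0=-10.0, dtheta=60.0)
```

The heat release rate entering the cylinder energy balance is then proportional to the derivative of `x_b` with respect to crank angle.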
Procedia PDF Downloads 291
3089 Stability Analysis of Two-delay Differential Equation for Parkinson's Disease Models with Positive Feedback
Authors: M. A. Sohaly, M. A. Elfouly
Abstract:
Parkinson's disease (PD) is a heterogeneous movement disorder that often appears in the elderly. PD is induced by a loss of dopamine secretion, and some drugs increase the secretion of dopamine. In this paper, we study the stability of PD models formulated as nonlinear delay differential equations. After a period of taking drugs, these act as positive feedback and increase the patients' tremors; the differential equation then has positive coefficients, and the system is unstable under these conditions. We present a set of suggested modifications to make the system more compatible with the biodynamic system. While giving a set of numerical examples, this paper is concerned with the mathematical analysis, and no clinical data have been used.
Keywords: Parkinson's disease, stability, simulation, two-delay differential equation
Procedia PDF Downloads 128
3088 Spatial Differentiation Patterns and Influencing Mechanism of Urban Greening in China: Based on Data of 289 Cities
Authors: Fangzheng Li, Xiong Li
Abstract:
Significant differences in urban greening have emerged among Chinese cities, accompanying China's rapid urbanization. However, few studies have examined the spatial differentiation of urban greening in China with large amounts of data. The spatial differentiation pattern, spatial correlation characteristics and distribution shape of the urban green space ratio, urban green coverage rate and public green area per capita were calculated and analyzed with Global and Local Moran's I, using data from 289 cities in 2014. We employed a Spatial Lag Model and a Spatial Error Model to assess the impacts of the urbanization process on urban greening in China, and then used Geographically Weighted Regression (GWR) to estimate the spatial variation of those impacts. The results showed that: 1. significant spatial dependence and heterogeneity existed in urban greening values, and the differentiation patterns were shaped by administrative grade and spatial agglomeration simultaneously; 2. urbanization has a negative correlation with urban greening in Chinese cities. Among the indices, the proportion of secondary industry, the urbanization rate, population and the scale of urban land use have significant negative correlations with urban greening, while automobile density and per capita Gross Domestic Product have no significant impact. The GWR modeling showed that the relationship between urbanization and urban greening is not constant in space; the local parameter estimates suggest significant spatial variation in the impacts of the various urbanization factors on urban greening.
Keywords: China's urbanization, geographically weighted regression, spatial differentiation pattern, urban greening
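Global Moran's I, the spatial-autocorrelation statistic used above, can be computed in a few lines. The sketch below is a minimal illustration with an invented four-city toy example (the greening values and contiguity weights are hypothetical, not the paper's 289-city data); a positive I indicates spatial clustering, a negative I indicates dispersion.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for a 1-D array of observations and an
    n x n spatial weights matrix with a zero diagonal."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                      # deviations from the mean
    num = (w * np.outer(z, z)).sum()      # weighted cross-products
    den = (z ** 2).sum()
    return (n / w.sum()) * num / den

# Toy example: 4 cities on a line, rook-contiguity weights (hypothetical)
greening = [10.0, 12.0, 30.0, 33.0]       # e.g. green coverage rate, %
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
i_clustered = morans_i(greening, w)       # similar neighbours -> I > 0
```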
Procedia PDF Downloads 459
3087 The Effect of Multi-Stakeholder Extension Services towards Crop Choice and Farmer's Income: The Case of the ARC High Value Crop Programme
Authors: Joseph Sello Kau, Elias Mashayamombe, Brian Washington Madinkana, Cynthia Ngwane
Abstract:
This paper presents the results of statistical analyses (stepwise linear regression and multiple regression) carried out on a number of crops in order to evaluate how the decision on crop choice affects the level of farm income generated by farmers participating in High Value Crop production (referred to as the HVC). The goal of the HVC is to encourage farmers to cultivate fruit crops. The farmers received planting material from different extension agencies, together with other complementary packages such as fertilizer, garden tools and water tanks. During the surveys, it was discovered that a significant number of farmers were cultivating traditional crops even when their plot sizes were small; traditional crops compete for resources with high value crops. The results of the analyses show that farmers cultivating fruit crops, maize and potatoes generated higher incomes than those cultivating spinach and cabbage. High farm income is associated with plot size, access to social grants and gender. The choice of crop is influenced by the availability of planting material and the market potential for the crop, and extension agencies providing the planting materials stand a good chance of having farmers follow their directives. As a recommendation, for farmers to cultivate more of the HVCs, the ARC must intensify the provision of fruit trees.
Keywords: farm income, nature of extension services, type of crops cultivated, fruit crops, cabbage, maize, potato, spinach
Procedia PDF Downloads 322
3086 A Spatial Perspective on the Metallized Combustion Aspect of Rockets
Authors: Chitresh Prasad, Arvind Ramesh, Aditya Virkar, Karan Dholkaria, Vinayak Malhotra
Abstract:
A Solid Propellant Rocket utilises a combination of a solid Oxidizer and a solid Fuel. Success in Solid Rocket Motor design and development depends significantly on knowledge of the burning rate behaviour of the selected solid propellant under all motor operating conditions and design limit conditions. Most Solid Rocket Motors consist of the Main Engine along with multiple Boosters that provide additional thrust to the space-bound vehicle. Though widely used, they have been eclipsed by Liquid Propellant Rockets because of the latter's better performance characteristics. The addition of a catalyst such as Iron Oxide, on the other hand, can drastically enhance the performance of a Solid Rocket. This scientific investigation tries to emulate the working of a Solid Rocket using Sparklers and Energized Candles, with a central Energized Candle acting as the Main Engine and surrounding Sparklers acting as the Boosters. The Energized Candle is made of Paraffin Wax, with Magnesium filings embedded in its wick. The Sparkler is made up of 45% Barium Nitrate, 35% Iron, 9% Aluminium and 10% Dextrin, with the remainder Boric Acid. The Magnesium in the Energized Candle, and the combination of Iron and Aluminium in the Sparkler, act as catalysts and enhance the burn rates of both materials. This combustion of Metallized Propellants influences the regression rate of the subject candle. The experimental parameters explored here are Separation Distance, Systematically Varying Configuration and Layout Symmetry. The major performance parameter under observation is the Regression Rate of the Energized Candle, which is significantly affected by the orientation and configuration of the sparklers, as these usually act as heat sources for the energized candle. The Overall Efficiency of any engine is factorised into its thermal and propulsive efficiencies, and numerous efforts have been made to improve one or the other.
This investigation focuses on the orientation of Rocket Motor design to maximize Overall Efficiency. The primary objective is to analyse the Flame Spread Rate variations of the energized candle, which resembles the solid rocket propellant used in the first stage of rocket operation, thereby affecting the Specific Impulse values of a Rocket, which in turn have a deciding impact on its Time of Flight. Another objective of this research venture is to determine the effectiveness of the key controlling parameters explored. This investigation also emulates the exhaust gas interactions of the Solid Rocket through concurrent ignition of the Energized Candle and Sparklers, and their behaviour is analysed. Modern space programmes intend to explore the universe outside our solar system. To accomplish these goals, it is necessary to design a launch vehicle capable of providing incessant propulsion along with better efficiency for vast durations. The main motivation of this study is to enhance Rocket performance and Overall Efficiency through better design and optimization techniques, which will play a crucial role in this human quest for knowledge.
Keywords: design modifications, improving overall efficiency, metallized combustion, regression rate variations
Procedia PDF Downloads 175
3085 Machine Learning Approach for Stress Detection Using Wireless Physical Activity Tracker
Authors: B. Padmaja, V. V. Rama Prasad, K. V. N. Sunitha, E. Krishna Rao Patro
Abstract:
Stress is a psychological condition that reduces the quality of sleep and affects every facet of life. Constant exposure to stress is detrimental not only to the mind but also to the body. Nevertheless, to cope with stress, one should first identify it. This paper provides an effective method for cognitive stress level detection using data provided by a physical activity tracker device, the Fitbit, which gathers people's daily records of food, weight, sleep, heart rate, and physical activities. Four major stressors (physical activities, sleep patterns, working hours and change in heart rate) are used to assess the stress levels of individuals. The main motive of this system is to use a machine learning approach to stress detection with the help of smartphone sensor technology. The effect of each stressor is first evaluated individually using logistic regression; a combined model is then built and assessed using variants of ordinal logistic regression (logit, probit and complementary log-log links). The quality of each model is evaluated using the Akaike Information Criterion (AIC), and probit is assessed as the most suitable model for our dataset. The system was experimented with and evaluated in a real-time environment using data from adults working in IT and other sectors in India. The novelty of this work lies in making the stress detection system as unobtrusive as possible for the users.
Keywords: physical activity tracker, sleep pattern, working hours, heart rate, smartphone sensor
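The AIC-based model-selection step above rewards goodness of fit while penalising parameter count: AIC = 2k - 2 ln L, and the model with the lowest AIC is preferred. The sketch below is a simplified, binary stand-in for the paper's ordinal models (the data are synthetic and the features are only loosely analogous to tracker measurements): it fits a logit model by iteratively reweighted least squares and reports its AIC.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Logistic regression by iteratively reweighted least squares
    (Newton's method). Returns coefficients and the log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)                       # IRLS weights
        beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    loglik = float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))
    return beta, loglik

def aic(loglik, n_params):
    """Akaike Information Criterion: lower is better."""
    return 2.0 * n_params - 2.0 * loglik

# Synthetic data: intercept + two hypothetical tracker-derived features
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([0.5, 1.0, -1.0])
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.uniform(size=n) < p_true).astype(float)

beta, loglik = fit_logit(X, y)
model_aic = aic(loglik, X.shape[1])
```

In the paper's setting, the same AIC comparison would be repeated for each candidate link (logit, probit, complementary log-log) and the lowest-AIC model retained.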
Procedia PDF Downloads 255
3084 Optimization of Biomass Production and Lipid Formation from Chlorococcum sp. Cultivation on Dairy and Paper-Pulp Wastewater
Authors: Emmanuel C. Ngerem
Abstract:
The ever-increasing depletion of the dominant global form of energy (fossil fuels) calls for the development of sustainable and green alternative energy sources such as bioethanol, biohydrogen, and biodiesel. The production of the major biofuels relies on biomass feedstocks that are mainly derived from edible food crops and some inedible plants. One suitable feedstock with great potential as a raw material for biofuel production is microalgal biomass. Despite the tremendous attributes of microalgae as a source of biofuel, their cultivation requires huge volumes of freshwater, posing a serious threat to commercial-scale production and utilization of algal biomass. In this study, a multi-media wastewater mixture for microalgae growth was formulated and optimized. Moreover, the obtained microalgae biomass was pre-treated for reducing sugar recovery and compared with previous studies on microalgae biomass pre-treatment. The formulated and optimized mixed wastewater media for biomass and lipid accumulation were established using a simplex lattice mixture design. Based on the superposition approach of the potential results, numerical optimization was conducted, followed by the analysis of biomass concentration and lipid accumulation. Coefficients of determination (R²) of 0.91 and 0.98 were obtained for the biomass concentration and lipid accumulation models, respectively. The developed optimization model predicted an optimal biomass concentration and lipid accumulation of 1.17 g/L and 0.39 g/g, respectively. It suggested a mixture of 64.69% dairy wastewater (DWW) and 35.31% paper and pulp wastewater (PWW) for biomass concentration, and 34.21% DWW and 65.79% PWW for lipid accumulation. Experimental validation generated 0.94 g/L and 0.39 g/g of biomass concentration and lipid accumulation, respectively. The obtained microalgae biomass was pre-treated, enzymatically hydrolysed, and subsequently assessed for reducing sugars.
The optimization of microwave pre-treatment of Chlorococcum sp. was achieved using response surface methodology (RSM). Microwave power (100 – 700 W), pre-treatment time (1 – 7 min), and acid-liquid ratio (1 – 5%) were selected as independent variables for the RSM optimization. The optimum conditions were achieved at a microwave power, pre-treatment time, and acid-liquid ratio of 700 W, 7 min, and 32.33:1, respectively. These conditions provided the highest amount of reducing sugars, at 10.73 g/L. Process optimization predicted a reducing sugar yield of 11.14 g/L for microwave-assisted pre-treatment with 2.52% HCl for 4.06 min at 700 W. Experimental validation yielded reducing sugars of 15.67 g/L. These findings demonstrate that dairy wastewater and paper and pulp wastewater, which could otherwise pose a serious environmental nuisance, can be blended to form a suitable microalgae growth medium, consolidating the potency of microalgae as a viable feedstock for fermentable sugars. The outcome of this study also supports the microalgal wastewater biorefinery concept, in which wastewater remediation is coupled with bioenergy production.
Keywords: wastewater cultivation, mixture design, lipid, biomass, nutrient removal, microwave, Chlorococcum, raceway pond, fermentable sugar, modelling, optimization
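The response-surface fitting and R² evaluation described above reduce, in the simplest one-factor case, to fitting a low-order polynomial by least squares and computing the coefficient of determination. The sketch below uses invented data (the DWW fractions and biomass values are illustrative, not the paper's measurements); the stationary point of the fitted concave quadratic plays the role of the predicted optimal blend.

```python
import numpy as np

# Hypothetical runs: fraction of dairy wastewater (DWW) in the blend
# vs measured biomass concentration (g/L); values are invented.
dww = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
biomass = np.array([0.42, 0.78, 1.05, 1.14, 1.02, 0.77])

# Quadratic response surface: y = b0 + b1*x + b2*x^2
A = np.vander(dww, 3, increasing=True)        # columns [1, x, x^2]
coef, *_ = np.linalg.lstsq(A, biomass, rcond=None)
pred = A @ coef

# Coefficient of determination R^2
ss_res = float(np.sum((biomass - pred) ** 2))
ss_tot = float(np.sum((biomass - biomass.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot

# Stationary point of the concave quadratic: predicted optimal DWW share
x_opt = -coef[1] / (2.0 * coef[2])
```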
Procedia PDF Downloads 38
3083 Organic Farming Profitability: Evidence from South Korea
Authors: Saem Lee, Thanh Nguyen, Hio-Jung Shin, Thomas Koellner
Abstract:
Land-use management has an influence on the provision of ecosystem services in dynamic agricultural landscapes, and agricultural land use is important for maintaining the productivity and sustainability of agricultural ecosystems. However, in Korea, intensive farming activities in the highland agricultural zone upstream of the Soyang watershed have led to soil contamination caused by the over-use of pesticides and fertilizers. This has decreased water and soil quality, with consequences for ecosystem services and human wellbeing. Conventional farming still has a high share in this area, and there are no special measures to prevent the low water quality caused by farming activities. Therefore, the adoption of environmentally friendly farming has been considered one of the alternatives leading to improved water quality and increased biomass production. At the same time, the share of farm households practising environmentally friendly farming remains low. Our research therefore involved a farm household survey spanning conventional farming, farms in transition and organic farming in the Soyang watershed. A further purpose of our research was to compare the economic advantage of farmers adopting environmentally friendly farming against non-adopters and to investigate the relevant factors by logistic regression analysis with socio-economic and benefit-cost ratio variables. The results found that farmers practising environmentally friendly farming tended to be younger than conventional farmers and farmers in transition; the groups were similar in terms of gender, which was predominantly male. Farmers practising environmentally friendly farming were more educated and had less farming experience than conventional farmers and farmers in transition. Based on the benefit-cost analysis, the total annual costs of farms in transition were about twice the total costs of environmentally friendly farms.
The benefit of organic farmers was assessed at 2,800 KRW per household per year. In the logistic regression, the factors with statistical significance were subsidy, district, residence period and benefit-cost ratio; district and residence period had a negative impact on the practice of environmentally friendly farming techniques. The results of our research make a valuable contribution by providing information relevant to Korean policy-making for agricultural and water management and by identifying potential policy approaches that would substantiate ways beneficial for sustainable resource management.
Keywords: organic farming, logistic regression, profitability, agricultural land-use
Procedia PDF Downloads 401
3082 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN
Authors: Mohamed Gaafar, Evan Davies
Abstract:
Many municipalities within Canada and abroad use chloramination to disinfect drinking water, so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequential public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study was intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because the well-developed basin has various land-use types, including commercial, industrial, residential, parks and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for flood modelling. Nevertheless, this model was built only to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is unsuitable for studying water quality. So the first goal was to complete the modelling and update all stormwater network components. Available GIS data were then used to calculate catchment properties such as slope, length and imperviousness.
In order to calibrate and validate this model, data from two temporary pipe flow monitoring stations, collected during the previous summer, were used along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The catchment length, an approximate representation of catchment shape, was also tested, and surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all within acceptable ranges.
Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN
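The validation statistics quoted above (correlation coefficient, peak error, volume error) are simple to compute once observed and simulated flow series are aligned. A minimal sketch, using invented hydrographs rather than the Edmonton gauge data:

```python
import numpy as np

def validation_stats(observed, simulated):
    """Correlation coefficient, peak error (%), and volume error (%)
    between an observed and a simulated flow series."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]                       # linear correlation
    peak_err = 100.0 * (sim.max() - obs.max()) / obs.max()  # peak flow error
    vol_err = 100.0 * (sim.sum() - obs.sum()) / obs.sum()   # total volume error
    return r, peak_err, vol_err

# Hypothetical storm hydrographs (m^3/s at fixed time steps)
obs = [0.2, 0.5, 1.4, 2.0, 1.1, 0.6, 0.3]
sim = [0.25, 0.55, 1.30, 1.95, 1.20, 0.55, 0.28]
r, pe, ve = validation_stats(obs, sim)
```

Negative peak and volume errors indicate that the simulation slightly underestimates the observed peak and total runoff volume.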
Procedia PDF Downloads 295
3081 Laboratory Findings as Predictors of ST2 and NT-proBNP Elevations in a Heart Failure Clinic, National Cardiovascular Centre Harapan Kita, Indonesia
Authors: B. B. Siswanto, A. Halimi, K. M. H. J. Tandayu, C. Abdillah, F. Nanda, E. Chandra
Abstract:
Nowadays, modern cardiac biomarkers, such as ST2 and NT-proBNP, have important roles in predicting morbidity and mortality in heart failure patients. Abnormalities of serum electrolytes, sepsis or infection, and deteriorating renal function will worsen the condition of patients with heart failure. It is intriguing to know whether cardiac biomarker elevations are affected by laboratory findings in heart failure patients. We recruited 65 patients from the heart failure clinic at NCVC Harapan Kita in 2014-2015, all of whom consented to laboratory examination, including cardiac biomarkers. The findings were recorded in our Research and Development Centre and analyzed using linear regression to find whether there is a relationship between laboratory findings (sodium, potassium, creatinine, and leukocytes) and ST2 or NT-proBNP. Of the 65 patients, 26.9% were female and 73.1% male; 69.4% of patients were classified as NYHA I-II and 31.6% as NYHA III-IV. The mean age was 55.7±11.4 years; the mean sodium level 136.1±6.5 mmol/l; the mean potassium level 4.7±1.9 mmol/l; the mean leukocyte count 9184.7±3622.4 /ul; and the mean creatinine level 1.2±0.5 mg/dl. From the linear regression, the relationships between NT-proBNP and sodium level (p<0.001) and leukocyte count (p=0.002) were significant, while those with potassium level (p=0.05) and creatinine level (p=0.534) were not. The relationships between ST2 and sodium (p=0.501), potassium (p=0.76), leukocyte (p=0.897), and creatinine (p=0.817) levels were not significant. To conclude, laboratory findings are more sensitive in predicting NT-proBNP elevation than ST2 elevation. Larger studies are needed to prove that the correlation of NT-proBNP with laboratory findings is superior to that of ST2.
Keywords: heart failure, laboratory, NT-proBNP, ST2
Procedia PDF Downloads 338
3080 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The problem to solve in the first dataset is protein regression analysis, and in the second, variety classification analysis. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to form the foundation of an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
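Two of the chemometric preprocessing steps named above, SNV and Savitzky-Golay filtering, are short to implement. The sketch below applies them to invented spectra (the Gaussian absorption band, noise level, and filter settings are illustrative assumptions, not the paper's configuration): SG smoothing suppresses high-frequency noise along the wavelength axis, and SNV then removes per-sample multiplicative scatter by standardising each spectrum with its own mean and standard deviation.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row)
    by its own mean and standard deviation."""
    s = np.asarray(spectra, dtype=float)
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

# Hypothetical NIR spectra: 3 samples x 100 wavelength bands (900-1700 nm)
rng = np.random.default_rng(42)
wav = np.linspace(900.0, 1700.0, 100)
band = np.exp(-((wav - 1200.0) / 150.0) ** 2)          # one absorption feature
spectra = band[None, :] * rng.uniform(0.8, 1.2, (3, 1))  # scatter between samples
spectra += rng.normal(scale=0.02, size=spectra.shape)    # measurement noise

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
corrected = snv(smoothed)
```

After SNV, every row has exactly zero mean and unit standard deviation, which is what removes the sample-to-sample scaling differences.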
Procedia PDF Downloads 98
3079 Design of Reconfigurable Fixed-Point LMS Adaptive FIR Filter
Authors: S. Padmapriya, V. Lakshmi Prabha
Abstract:
In this paper, an efficient reconfigurable fixed-point Least Mean Square (LMS) adaptive FIR filter is proposed. The proposed architecture has two modes of operation: an area-efficient design and a power-optimized design. Pipelining of the adder blocks and the partial product generator is used to achieve low area, and reversible logic is used to obtain a low-power design. Depending upon the input samples and filter coefficients, one of the techniques is chosen. Least-mean-square adaptation is performed to update the weights. The architecture is coded in Verilog and synthesized in Cadence Encounter 0.18 μm technology. The synthesis results show that the area reduction ratio of the proposed design, compared with the conventional technique, is about 1.2%.
Keywords: adaptive filter, carry select adder, least mean square algorithm, reversible logic
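The LMS adaptation the hardware implements is the update w(n+1) = w(n) + mu * e(n) * x(n), where e(n) is the error between the desired signal and the filter output. The floating-point sketch below (my own toy system-identification example, not the paper's fixed-point Verilog design) shows the algorithm recovering an "unknown" 4-tap FIR response from its input and output:

```python
import numpy as np

def lms_filter(x, d, n_taps, mu):
    """Least-mean-square adaptive FIR filter: the weights w are nudged
    at every sample so that the filter output tracks the desired
    signal d. Returns the final weights and the error history."""
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        y = w @ window                           # filter output
        err[n] = d[n] - y                        # instantaneous error
        w += mu * err[n] * window                # LMS weight update
    return w, err

# System identification toy example: recover an "unknown" 4-tap FIR
rng = np.random.default_rng(1)
x = rng.normal(size=4000)                        # white input signal
h = np.array([0.6, -0.3, 0.2, 0.1])              # unknown system response
d = np.convolve(x, h)[:len(x)]                   # its (noise-free) output
w, err = lms_filter(x, d, n_taps=4, mu=0.01)
```

A fixed-point realisation such as the paper's quantises x, w, and the mu * e * x product; the update rule itself is unchanged.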
Procedia PDF Downloads 328
3078 Travel Delay and Modal Split Analysis: A Case Study
Authors: H. S. Sathish, H. S. Jagadeesh, Skanda Kumar
Abstract:
A journey time and delay study is used to evaluate the quality of service and travel time; such a study can also be used to evaluate the quality of traffic movement along a route and to determine the locations, types and extent of traffic delays. Components of delay are boarding and alighting, issue of tickets, other causes, and the distance between stops. This study investigates the total journey time required to travel along the stretch and the influence of the delays. The route runs from Kempegowda Bus Station to Yelahanka Satellite Station in Bangalore City, a stretch of 16.5 km that includes an elevated highway connecting to Bangalore International Airport and an extension of the metro transit stretch. A modal split analysis has been done for this stretch. The regression analysis shows that total journey time is moderately affected by boarding and alighting delay, while delay due to issue of tickets affects journey time to a greater extent. Some of the delay factors affecting journey time significantly are evident from the F-test at the 10 percent level of significance. Along this stretch, work trips are most prevalent, as indicated by the O-D study. The modal shift analysis indicates that about 70 percent of commuters are ready to shift from the current system to a Metro Rail System, which would carry the maximum number of trips compared to private modes. Hence Metro is a highly viable choice of mode for the Bangalore Metropolitan City.
Keywords: delay, journey time, modal choice, regression analysis
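For a single regressor, the F-test of significance used above is equivalent to the squared t-test on the slope: F = r²(n-2)/(1-r²), compared against the F critical value at the chosen level. The sketch below uses invented journey-time data (the ticketing delays and journey times are illustrative, not the Bangalore survey values) and tests the regressor at the paper's 10 percent level:

```python
import numpy as np
from scipy import stats

# Hypothetical observations: ticketing delay (min) vs total journey time (min)
ticket_delay = np.array([2.0, 3.5, 1.0, 4.0, 2.5, 3.0, 5.0, 1.5])
journey_time = np.array([48.0, 55.0, 43.0, 58.0, 50.0, 52.0, 63.0, 46.0])

res = stats.linregress(ticket_delay, journey_time)
n = len(ticket_delay)

# F statistic for a single regressor (equals t^2 on the slope)
f_stat = res.rvalue ** 2 * (n - 2) / (1.0 - res.rvalue ** 2)
f_crit = stats.f.ppf(0.90, 1, n - 2)      # critical value, 10% level
significant = f_stat > f_crit
```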
Procedia PDF Downloads 494
3077 Farmers' Access to Agricultural Extension Services Delivery Systems: Evidence from a Field Study in India
Authors: Ankit Nagar, Dinesh Kumar Nauriyal, Sukhpal Singh
Abstract:
This paper examines the key determinants of farmers' access to agricultural extension services and the sources of agricultural extension services preferred and accessed by farmers. An ordered logistic regression model was used to analyse data from 360 sample households, based on a primary survey conducted in western Uttar Pradesh, India. The study finds that farmers' decision to engage in the agricultural extension programme is significantly influenced by factors such as education level, gender, farming experience, social group, group membership, farm size, credit access, awareness of the extension scheme, farmers' perception, and distance from extension sources. The most intriguing finding of this study is that progressive farmers, long regarded as a major source of knowledge diffusion, are the most distrusted sources of information, as they are suspected of withholding vital information from potential beneficiaries. The positive relationship between farm size and access underlines that extension services should revisit their strategies for targeting the marginal and small farmers who constitute over 85 percent of agricultural households, by incorporating their priorities into outreach programs. The study suggests that the productive potential of marginal and small farmers could still be greatly augmented by appropriate technology, advisory services, guidance, and improved market access. Also, the perception of poor quality of the public extension services could be corrected by initiatives aimed at building up extension workers' capacity.
Keywords: agriculture, access, extension services, ordered logistic regression
Procedia PDF Downloads 214
3076 The Role of Attachment Styles, Gender Schemas, Sexual Self Schemas, and Body Exposures During Sexual Activity in Sexual Function, Marital Satisfaction, and Sexual Self-Esteem
Authors: Hossein Shareh, Farhad Seifi
Abstract:
The present study examined the role of attachment styles, gender schemas, sexual self-schemas, and body image during sexual activity in sexual function, marital satisfaction, and sexual self-esteem. A sample of 765 married women living in Mashhad was selected by snowball sampling. Participants completed the Adult Attachment Scale (AAS), the Bem Sex Role Inventory (BSRI), the Sexual Self-Schema scale (SSS), the Body Exposure during Sexual Activity Questionnaire (BESAQ), the Female Sexual Function Index (FSFI), the short form of the Sexual Self-Esteem Inventory for Women (SSEI-W-SF) and the Enrich marital satisfaction inventory. Data analysis using Pearson correlation, hierarchical regression and case analysis was performed with SPSS-19 software. The results showed significant correlations (p<0.05) between attachment and sexual function (r=0.342), marital satisfaction (r=0.351) and sexual self-esteem (r=0.292). A correlation (p<0.05) was observed between sexual self-schema (r=0.342) and sexual self-esteem (r=0.31). A significant correlation (p<0.05) exists between gender schemas and sexual function (r=0.352), and a significant inverse correlation (p<0.05) between body image and performance during sexual activity (r=0.41). There is no significant relationship between gender schemas, sexual schemas, or body image and marital satisfaction, and no relation was found between gender schemas or body image and sexual self-esteem. The regression showed that sexual function can be predicted from attachment styles, gender schemas, sexual self-schemas, and body exposure during sexual activity; marital satisfaction can be predicted by attachment style and gender schema; and, to some extent, sexual self-esteem can be predicted by attachment style and gender schemas.
Keywords: attachment styles, gender and sexual schemas, body image, sexual function, marital satisfaction, sexual self-esteem
Procedia PDF Downloads 38
3075 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure
Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar
Abstract:
This article presents an alternative collapse-capacity intensity measure in three-element form, which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined by which the spectral-ordinate effects on the intensity measure, up to the effective period (2T_1), are taken into account. The methodology permits meeting the hazard-levelled target extreme event in both probabilistic and deterministic forms. A MATLAB code involving OpenSees was developed to calculate the collapse capacities of 8 archetype RC structures of 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method was used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field record set presented by FEMA was used in a series of nonlinear analyses. Eight linear relationships were developed for the 8 structures, yielding correlation coefficients up to 0.93. A near-field collapse-capacity prediction equation was then developed from the regression results of the 8 structures. The proposed prediction equation was validated against a set of actual near-field records, showing good agreement. Applying the proposed equation to four archetype RC structures demonstrated collapse capacities at near-field sites different from those of FEMA; the differences are believed to be due to accounting for the spectral shape effects.
Keywords: collapse capacity, fragility analysis, spectral shape effects, IDA method
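Each of the per-structure relationships mentioned above is a simple linear regression. A minimal sketch of fitting one such line by ordinary least squares (the points are illustrative, not IDA results):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ≈ a*x + b; returns (a, b)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, mean_y - a * mean_x

# Noise-free (hypothetical) points recover the exact line y = 2x + 1
print(fit_line([0.0, 1.0, 2.0], [1.0, 3.0, 5.0]))  # → (2.0, 1.0)
```

The reported correlation coefficients (up to 0.93) would then measure how tightly the IDA collapse values scatter around each fitted line.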
Procedia PDF Downloads 238
3074 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database: two use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created from the gas-phase composition of fluids and the bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions with the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to determine: (i) the number of neurons in the hidden layer; (ii) the training factor and training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the root mean squared error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions of sixteen previously developed, well-known gas geothermometers, was statistically evaluated using an external database to avoid bias.
Statistical evaluation was performed by analyzing the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used to predict subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich.; Los Humeros, Pue.; and Cerro Prieto, B.C.) as well as in a blind geothermal system (Acoculco, Puebla). The latest results of the gas geothermometers (inferred from the gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
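The RMSE criterion used to compare BHTM with BHTANN is straightforward to compute. A minimal sketch with invented temperature values (not data from the study's database):

```python
import math

def rmse(measured, predicted):
    """Root mean squared error between measured and simulated bottomhole temperatures."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)

# Hypothetical bottomhole temperatures in °C: measured vs. ANN-simulated
print(rmse([250.0, 280.0, 300.0], [252.0, 279.0, 297.0]))
```

The geothermometer with the lowest RMSE over an external validation set would be preferred, which is exactly the comparison described above.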
Procedia PDF Downloads 350
3073 Studying in Private Muslim Schools in Australia: Implications for Identity, Religiosity, and Adjustment
Authors: Hisham Motkal Abu-Rayya, Maram Hussein Abu-Rayya
Abstract:
Education in religious private schools raises questions regarding identity, belonging, and adaptation in multicultural Australia. This research project aimed to examine cultural identification styles among Australian adolescent Muslims studying in Muslim schools, the adolescents' religiosity, and the interconnections between cultural identification styles, religiosity, and adaptation. Two Muslim high-school samples were recruited for the purposes of this study, one from Muslim schools in metropolitan Sydney and one from Muslim schools in metropolitan Melbourne. Participants filled in a survey measuring the themes of the current study. Findings revealed that the majority of Australian adolescent Muslims preferred the integration identification style (55.2%); separation was less prevalent (26.9%), followed by assimilation (9.7%) and marginalisation (8.3%). Supporting evidence suggests that these identification styles were valid representations of the participants' identification. A series of hierarchical regression analyses revealed that while adolescents' preference for integrating their cultural and Australian identities was advantageous for a range of psychological and socio-cultural adaptation measures, marginalisation was consistently the worst. Further hierarchical regression analyses showed that adolescent Muslims' religiosity was a better predictor of a range of adaptation measures than their preference for an integration acculturation style. Theoretical and practical implications of these findings are discussed.
Keywords: adaptation, identity, multiculturalism, religious school education
Procedia PDF Downloads 299
3072 Evaluation of the Beach Erosion Process in Varadero, Matanzas, Cuba: Effects of Different Hurricane Trajectories
Authors: Ana Gabriela Diaz, Luis Fermín Córdova, Jr., Roberto Lamazares
Abstract:
The island of Cuba, the largest of the Greater Antilles, is located in the tropical North Atlantic and is affected annually by numerous weather events that have caused severe damage to its coastal areas. Like many other coastlines around the world, the beautiful beaches of the Hicacos Peninsula also suffer from erosion, leading to a structural regression of the coastline. If measures are not taken, the hotels will be exposed to the advance of the sea, posing a serious problem for the economy. To study the intensity of this process, specialists from the coastal and marine engineering group at CIH, within the research conducted for the MEGACOSTAS 2 project, simulate extreme events and assess their impact on coastal areas, mainly regarding the definition of flood volumes and morphodynamic changes on sandy beaches. The main objective of this work is the evaluation of the erosion process at Varadero beach on the Hicacos Peninsula (a coastal sector of major importance to the country's economy) for different hurricane trajectories. The mathematical model XBeach, integrated into the coastal engineering system introduced by the MEGACOSTAS 2 project, was applied to determine the most critical areas and profiles for the hurricane trajectories under study. The results show that the central area is the most dynamic in the simulations of the three hurricane trajectories, exhibiting high erosion volumes and the greatest average regression of the coastline, from 15 to 22 m.
Keywords: beach, erosion, mathematical model, coastal areas
Procedia PDF Downloads 226
3071 XANES Studies on the Oxidation States of Copper Ion in Silicate Glass
Authors: R. Buntem, K. Samkongngam
Abstract:
The silicate glass was prepared using rice husk as the source of silica. The base composition of the glass samples comprises SiO₂ (from rice husk ash), Na₂CO₃, K₂CO₃, ZnO, H₃BO₃, CaO, Al₂O₃ or Al, and CuO. Aluminum is used in place of Al₂O₃ in order to reduce Cu²⁺ to Cu⁺; the red color of Cu₂O in the glass matrix was observed when Al was added to the glass mixture. The expansion coefficients of the copper-doped glass are in the range of 1.2 × 10⁻⁵ to 1.4 × 10⁻⁵ °C⁻¹, which is common for silicate glass. The fingerprints of the bond vibrations were studied using IR spectroscopy, while the oxidation state and coordination environment of the copper ion in the glass matrix were investigated using X-ray absorption spectroscopy. The data show that Cu⁺ and Cu²⁺ coexist in the glass matrix, and that red Cu₂O particles can form when enough aluminum is added.
Keywords: copper in glass, coordination information, silicate glass, XANES spectrum
Procedia PDF Downloads 261
3070 Grid Tied Photovoltaic Power on School Roof
Authors: Yeong-cheng Wang, Jin-Yinn Wang, Ming-Shan Lin, Jian-Li Dong
Abstract:
To universalize the adoption of sustainable energy, the R.O.C. government encourages public buildings to introduce PV power stations on building roofs, whereas most old buildings did not consider photovoltaic (PV) power facilities in the design phase. Several factors affect PV electricity output; temperature is the key one, and different PV technologies have different temperature coefficients. Other factors, such as PV panel azimuth, panel inclination from the horizontal plane, and row-to-row distance of the PV arrays, must all be weighed at the beginning of system design. The goal of this work is to maximize the annual energy output of a roof-mounted PV system. Tables to simplify the design work are developed; the results can be used directly for engineering project quotations.
Keywords: optimal inclination, array azimuth, annual output
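The role of the temperature coefficient mentioned above can be sketched with the common linear derating relation P = P_STC · (1 + γ · (T_cell − 25 °C)); the panel rating and coefficient below are illustrative values, not measurements from this work:

```python
def pv_power(p_stc_w, gamma_per_c, cell_temp_c):
    """Linear temperature derating of PV output.
    p_stc_w: rated power at standard test conditions (25 °C cell temperature);
    gamma_per_c: power temperature coefficient, e.g. about -0.004 /°C for crystalline Si."""
    return p_stc_w * (1.0 + gamma_per_c * (cell_temp_c - 25.0))

# A hypothetical 300 W panel at 50 °C cell temperature loses ~10% of rated output
print(pv_power(300.0, -0.004, 50.0))  # ≈ 270.0 W
```

Technologies with smaller |γ| lose less output on hot roofs, which is why the abstract singles out the temperature coefficient as the key design factor.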
Procedia PDF Downloads 675
3069 Analysis of the Effects of Institutions on the Sub-National Distribution of Aid Using Geo-Referenced AidData
Authors: Savas Yildiz
Abstract:
The article assesses how recipient countries' governance shapes the sub-national distribution of international aid projects, extending the scope from a cross-country perspective to a more detailed analysis of the effects of institutional quality on the sub-national distribution of foreign aid. The analysis examines geo-referenced aid projects in 37 countries and 404 regions at the first administrative division level in Sub-Saharan Africa, from the World Bank (WB) and the African Development Bank (ADB), approved between 2000 and 2011. To measure the influence of institutional quality on the distribution of aid, the following measures are used: control of corruption, government effectiveness, regulatory quality, and rule of law from the World Governance Indicators (WGI), and the Corruption Perceptions Index from Transparency International. Furthermore, to assess the importance of ethnic heterogeneity for the sub-national distribution of aid projects, the study also includes interaction terms measuring ethnic fragmentation. The regression results indicate a general skew of aid projects towards regions that hold capital cities; however, being the incumbent president's birth region does not significantly increase the allocation of aid projects. Nevertheless, with increasing institutional quality, aid projects are less skewed towards capital regions, and the previously estimated coefficients lose significance in most cases. Higher ethnic fragmentation also seems to impede the concentration of aid projects in capital-city regions and presidents' birth places.
Additionally, to assess the performance of the WB against its own proclaimed goal of targeting the poor, the study also includes sub-national wealth data from the Demographic and Health Surveys (DHS) and finds that, even with better institutional quality, regions with a larger share of the richest quintile receive significantly more aid than regions with a larger share of poor people. With increasing ethnic diversity, the allocation of aid projects towards regions where the richest citizens reside diminishes, but remains high and significant. However, regions with a larger share of poor people still do not receive significantly more aid. This might imply that the sub-national distribution of aid projects broadens in general with higher ethnic fragmentation, independent of diverse regional needs. The results provide evidence that institutional quality matters in undermining the influence of incumbent presidents on the allocation of aid projects towards their birth regions and capital regions. Moreover, even in countries with better institutional quality, the WB and the ADB do not seem able to target the poor with their aid projects. Even if one considers need-based variables, such as infant and child mortality rates, aid projects do not seem to be allocated to districts with a larger share of people in need. The study therefore provides further evidence, using more detailed information on the sub-national distribution of aid projects, that aid is not allocated effectively towards regions with a larger share of poor people so as to alleviate poverty in recipient countries directly. Institutions have no significant influence on the sub-national distribution of aid towards the poor.
Keywords: aid allocation, georeferenced data, institutions, spatial analysis
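The interaction-term specification described above can be sketched as a linear predictor in which the capital-region effect is allowed to vary with institutional quality. All coefficients and values below are purely illustrative, not estimates from the study:

```python
def expected_aid(capital_region, inst_quality, beta):
    """Linear predictor with an interaction term:
    beta = (intercept, b_capital, b_institutions, b_capital_x_institutions)."""
    b0, b_cap, b_inst, b_interact = beta
    return (b0 + b_cap * capital_region + b_inst * inst_quality
            + b_interact * capital_region * inst_quality)

# Illustrative coefficients: the capital-region skew (b_cap = 2.0) is offset
# as institutional quality rises (b_interact = -1.0), mirroring the finding above
beta = (1.0, 2.0, 0.5, -1.0)
print(expected_aid(1, 0.0, beta))  # → 3.0 (capital region, weak institutions)
print(expected_aid(1, 2.0, beta))  # → 2.0 (capital region, stronger institutions)
```

A negative interaction coefficient is the algebraic form of the paper's claim that better institutions reduce the skew of aid towards capital regions.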
Procedia PDF Downloads 118
3068 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection
Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew
Abstract:
Detecting email spam is an important task in the era of digital technology, requiring effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable, and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific email classifications, helping users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows creating simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out the key elements that drive classification results, thus reducing the opacity inherent in conventional machine learning models. Additionally, we propose a visualization scheme for displaying keywords that improves users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression
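As a hedged, library-free illustration of the idea behind LIME (the actual study presumably uses the `lime` package with a trained classifier), a word's local influence can be approximated by perturbing the input and observing the change in a toy spam score. All words and weights below are invented:

```python
def spam_score(words, weights):
    """Toy linear spam scorer: sum of per-word weights (stand-in for a real model)."""
    return sum(weights.get(w, 0.0) for w in words)

def local_attributions(words, weights):
    """LIME-style attribution by leave-one-word-out perturbation: each word's
    influence is the drop in spam score when that word is removed."""
    base = spam_score(words, weights)
    return {w: base - spam_score([x for x in words if x != w], weights)
            for w in set(words)}

# Hypothetical vocabulary weights; positive pushes towards "spam"
weights = {"free": 2.0, "winner": 1.5, "meeting": -1.0}
print(local_attributions(["free", "winner", "meeting"], weights))
```

Real LIME fits a weighted linear surrogate over many random perturbations rather than a single leave-one-out pass, but the output has the same shape: a per-term influence score that can be visualized as described above.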
Procedia PDF Downloads 45
3067 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture
Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko
Abstract:
Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis, yielding 120 samples of 50 × 50 pixels. The feature set was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet, giving 290 textural feature descriptors. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and minimization of classification error probability combined with average correlation coefficients between the chosen features (POE + ACC). Each method returned the ten features ranked highest by its own coefficient. The Fisher coefficient and mutual information selections produced the same features in a different order; in both rankings, the 50th percentile (Perc.50%) came first, and the next selected features came from the co-occurrence matrix. The selected feature sets were evaluated using six classification-tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT).
To assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). The best results were obtained for the Hoeffding tree and logistic model trees classifiers using the feature set selected by the POE + ACC method. The Hoeffding tree classifier achieved the highest values of three parameters: ACC = 90%, TPR = 93.3%, and PPV = 93.3%; its other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maxima obtained for the LMT classifier. The logistic model trees classifier achieved the same ACC = 90% and the highest TNR = 88.3% and NPV = 88.3%, with TPR = 91.7% and PPV = 91.6% close to the highest values. The experimental results show that classification trees are an effective method for classifying texture features, allowing the condition of the spongy tissue to be identified for healthy cases and those with osteoporosis.
Keywords: classification, feature selection, texture analysis, tree algorithms
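The five evaluation parameters above follow directly from confusion-matrix counts. A minimal sketch (the counts are illustrative, not those of the study):

```python
def binary_metrics(tp, fp, tn, fn):
    """ACC, TPR (sensitivity), TNR (specificity), PPV and NPV
    from binary confusion-matrix counts."""
    return {
        "ACC": (tp + tn) / (tp + fp + tn + fn),
        "TPR": tp / (tp + fn),
        "TNR": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical counts for 60 pathological and 60 healthy samples
print(binary_metrics(tp=54, fp=6, tn=54, fn=6))
```

Reporting all five together, as the study does, guards against a classifier that scores well on accuracy alone by favoring one class.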
Procedia PDF Downloads 176
3066 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence
Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno
Abstract:
Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to handle missing values in survey data, of which imputation is the most commonly used. However, to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we identify different types of missing values — missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD) — and apply rough set imputation only to the GMD portion of the missing data. We evaluated the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we used p-values from the Wald test. To evaluate the accuracy of the prediction, we considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed than the non-imputed datasets (28.7 vs. 23.4). The average confidence-interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.
Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index
Procedia PDF Downloads 168
3065 Understanding the Endogenous Impact of Tropical Cyclones Floods and Sustainable Landscape Management Innovations on Farm Productivity in Malawi
Authors: Innocent Pangapanga, Eric Mungatana
Abstract:
Tropical cyclone-related floods (TCRFs) in Malawi have devastating effects on smallholder agriculture, thereby threatening the food security agenda, which is already constrained by poor agricultural innovations, low use of improved varieties, unaffordable inorganic fertilizers, and fragmenting landholding sizes. Accordingly, households have engineered and indigenously implemented sustainable landscape management (SLM) innovations to contain the adverse effects of TCRFs on farm productivity. This study therefore interrogated the efficacy of SLM adoption on farm productivity under varying TCRFs, controlling for potential selection bias and unobservable heterogeneity through an endogenous switching regression model. We further investigated the factors driving SLM adoption. Substantively, we found that TCRFs reduced farm productivity by 31 percent on the one hand and increased the adoption of SLM innovations by 27 percent on the other. The study also observed that households that combined SLM with exposure to TCRFs were more likely to enhance farm productivity, by 24 percent, than their counterparts. Interestingly, the results further demonstrated that adopting multiple SLM-related innovations, including intercropping, agroforestry, and organic manure, enhanced farm productivity by 126 percent, suggesting that SLM should be promoted as a package to inform existing Sustainable Development Goals' agricultural productivity initiatives under intensifying TCRFs in the country.
Keywords: tropical cyclone-related floods, sustainable landscape management innovations, farm productivity, endogeneity, endogenous switching regression model, panel data, smallholder agriculture
Procedia PDF Downloads 115
3064 Establishment of a Nomogram Prediction Model for Postpartum Hemorrhage during Vaginal Delivery
Authors: Yinglisong, Jingge Chen, Jingxuan Chen, Yan Wang, Hui Huang, Jing Zhang, Qianqian Zhang, Zhenzhen Zhang, Ji Zhang
Abstract:
Purpose: The study aims to establish a nomogram prediction model for postpartum hemorrhage (PPH) in vaginal delivery. Patients and Methods: Clinical data were retrospectively collected from vaginal delivery patients admitted to a hospital in Zhengzhou, China, from June 1 to October 31, 2022. Univariate and multivariate logistic regression were used to filter out independent risk factors, and a nomogram model for PPH in vaginal delivery was established from the risk-factor coefficients. Bootstrapping was used for internal validation. To assess discrimination and calibration, receiver operating characteristic (ROC) and calibration curves were generated in the derivation and validation groups. Results: A total of 1,340 vaginal deliveries were enrolled, with 81 (6.04%) having PPH. Logistic regression indicated that history of uterine surgery, induction of labor, duration of the first stage of labor, neonatal weight, WBC value (during the first stage of labor), and cervical lacerations were all independent risk factors for hemorrhage (p < 0.05). The areas under the ROC curves (AUC) of the derivation and validation groups were 0.817 and 0.821, respectively, indicating good discrimination. Two calibration curves showed that nomogram predictions and observed outcomes were highly consistent (p = 0.105, p = 0.113). Conclusion: The developed individualized risk-prediction nomogram can assist midwives in recognizing high-risk groups for PPH and initiating early warning to reduce PPH incidence.
Keywords: vaginal delivery, postpartum hemorrhage, risk factor, nomogram
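A nomogram is a graphical restatement of the fitted logistic model; the underlying probability computation can be sketched as follows. The coefficients and predictor values here are invented placeholders, not the model fitted in this study:

```python
import math

def pph_risk(intercept, coefs, predictors):
    """Predicted probability from a fitted logistic regression:
    p = 1 / (1 + exp(-(intercept + sum of b_i * x_i)))."""
    z = intercept + sum(b * x for b, x in zip(coefs, predictors))
    return 1.0 / (1.0 + math.exp(-z))

# With a zero linear predictor the model returns the 50% baseline
print(pph_risk(0.0, [], []))  # → 0.5
# Illustrative coefficients for two hypothetical binary risk factors
print(pph_risk(-3.0, [1.2, 0.8], [1, 1]))
```

The nomogram simply rescales each coefficient into "points" so the same sum-then-transform calculation can be read off a chart at the bedside.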
Procedia PDF Downloads 73
3063 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences irritable bowel syndrome (IBS), a functional disorder defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, its underlying pathophysiology is still incompletely understood. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK Biobank cohort and randomly selected patients without a code for IBS to create a total sample of 18,000. We selected the comorbidity codes for these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature-selection method. We applied the different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression, and XGBoost algorithms diagnosed IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular disease), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety).
This finding emphasizes the need for a comprehensive approach when evaluating the IBS phenotype, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms to predict the development of IBS from comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature-selection methods and even larger, more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
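The "top 10% of features by importance" step described above can be sketched generically; the importance scores and feature names below are invented, not XGBoost output from this study:

```python
def top_fraction(importances, fraction=0.10):
    """Keep the top `fraction` of features ranked by importance score.
    `importances` maps feature name -> importance."""
    k = max(1, round(len(importances) * fraction))
    return sorted(importances, key=importances.get, reverse=True)[:k]

# 10 hypothetical comorbidity features; the top 10% keeps the single best one
scores = {f"code_{i}": i / 10 for i in range(10)}
print(top_fraction(scores, 0.10))  # → ['code_9']
```

In the study, the same idea applied to roughly 500 comorbidity codes would yield the 50 features on which the final models were retrained.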
Procedia PDF Downloads 116
3062 Evaluating Traffic Congestion Using the Bayesian Dirichlet Process Mixture of Generalized Linear Models
Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig
Abstract:
This study applied traffic speed and occupancy data to develop clustering models that identify different traffic conditions. The models are based on the Dirichlet process mixture of generalized linear regression models (DML) and change-point regression (CR). The model frameworks were implemented using 2015 historical traffic data aggregated at 15-minute intervals from the Interstate 295 freeway in Jacksonville, Florida. Using the deviance information criterion (DIC) to identify the appropriate number of mixture components, three traffic states were identified: free-flow, transitional, and congested. Results of the DML revealed that traffic occupancy is statistically significant in reducing traffic speed in each of the identified states; its influence on the free-flow and congested states was estimated to be higher than on the transitional flow condition in both the evening and morning peak periods. Estimation of the critical speed thresholds using CR revealed 47 mph and 48 mph as the thresholds for the congested and transitional traffic conditions during the morning and evening peak hours, respectively. Free-flow speed thresholds for the morning and evening peak hours were estimated at 64 mph and 66 mph, respectively. The proposed approaches will facilitate accurate detection and prediction of traffic congestion, supporting the development of effective countermeasures.
Keywords: traffic congestion, multistate speed distribution, traffic occupancy, Dirichlet process mixtures of generalized linear model, Bayesian change-point detection
Procedia PDF Downloads 292
3061 Unsteady Temperature Distribution in a Finite Functionally Graded Cylinder
Authors: A. Amiri Delouei
Abstract:
In the current study, two-dimensional unsteady heat conduction in a functionally graded cylinder is studied analytically, with the temperature varying in the radial and longitudinal directions. The heat conduction coefficients in both directions are taken as power functions of the radius. The proposed solution exactly satisfies the boundary conditions, and the analytical unsteady temperature distribution is investigated for different parameters of the functionally graded cylinder. The exact solution obtained is useful for thermal stress analysis of functionally graded cylinders and, being analytical, helps clarify the behavior of heat conduction in functionally graded materials.
Keywords: functionally graded materials, unsteady heat conduction, cylinder, temperature distribution
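Under the stated power-law assumption, the governing equation takes a standard axisymmetric form; the exponents m and n and the reference conductivities k_{0r}, k_{0z} below are illustrative symbols, not values taken from the paper:

```latex
% Axisymmetric unsteady conduction with radius-dependent conductivities
\rho c_p \frac{\partial T}{\partial t}
  = \frac{1}{r}\frac{\partial}{\partial r}\!\left( r\,k_r(r)\,\frac{\partial T}{\partial r} \right)
  + \frac{\partial}{\partial z}\!\left( k_z(r)\,\frac{\partial T}{\partial z} \right),
\qquad k_r(r) = k_{0r}\,r^{m}, \quad k_z(r) = k_{0z}\,r^{n}.
```

With this power-law grading, separation of variables remains tractable, which is what makes an exact analytical solution of the kind described above possible.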
Procedia PDF Downloads 299