Search results for: asymptotic variance
1073 Palliative Care Referral Behavior Among Nurse Practitioners in Hospital Medicine
Authors: Sharon Jackson White
Abstract:
Purpose: Nurse practitioners (NPs) practicing within hospital medicine play a significant role in caring for patients who might benefit from palliative care (PC) services. Using the Theory of Planned Behavior, the purpose of this study was to examine the relationships among facilitators to referral, barriers to referral, self-efficacy with end-of-life discussions, history of referral, and referring to PC among NPs in hospital medicine. Hypotheses: 1) Perceived facilitators to referral will be associated with a higher history of referral and a higher number of referrals to PC. 2) Perceived barriers to referral will be associated with a lower history of referral and a lower number of referrals to PC. 3) Increased self-efficacy with end-of-life discussions will be associated with a higher history of referral and a higher number of referrals to PC. 4) Perceived facilitators to referral, perceived barriers to referral, and self-efficacy with end-of-life discussions will explain a significant proportion of the variance in the history of referral to PC. 5) Perceived facilitators to referral, perceived barriers to referral, and self-efficacy with end-of-life discussions will explain a significant proportion of the variance in the number of referrals to PC. Significance: Previous studies of referring patients to PC within the hospital setting have focused on physician practices. Identifying factors that influence NPs referring hospitalized patients to PC is essential to ensure that patients have access to these important services. This study incorporates the SNRS mission of advancing nursing research through the dissemination of research findings and the promotion of nursing science. Methods: A cross-sectional, predictive correlational study was conducted. History of referral to PC, facilitators to referring to PC, barriers to referring to PC, self-efficacy in end-of-life discussions, and referral to PC were measured using the PC referral case study survey, the facilitators and barriers to PC referral survey, and the self-assessment with end-of-life discussions survey. Data were analyzed descriptively and with Pearson’s correlation, Spearman’s rho, point-biserial correlation, multiple regression, logistic regression, the chi-square test, and the Mann-Whitney U test. Results: Only one facilitator (the PC team being helpful with establishing goals of care) was significantly associated with referral to PC. Three variables were statistically significant in relation to the history of referring to PC: “Inclined to refer: PC can help decrease the length of stay in hospital”, “Most inclined to refer: Patients with serious illnesses and/or poor prognoses”, and “Giving bad news to a patient or family member”. No predictor variables explained a significant proportion of the variance in the number of referrals to PC across the three case studies. There were no statistically significant results showing a relationship between the history of referral and referral to PC. All five hypotheses were partially supported. Discussion: Findings from this study emphasize the need for further research on NPs who work in hospital settings and the factors that influence their PC referral behaviors. Since the number of NPs practicing within hospital settings is increasing, future studies should use a larger sample size and incorporate hospital medicine NPs and other types of NPs who work in hospitals. Keywords: palliative care, nurse practitioners, hospital medicine, referral
Procedia PDF Downloads 75
1072 Improved Distance Estimation in Dynamic Environments through Multi-Sensor Fusion with Extended Kalman Filter
Authors: Iffat Ara Ebu, Fahmida Islam, Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball
Abstract:
The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. Limitations of single sensors such as cameras or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an extended Kalman filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection. Data analysis is performed using MATLAB. Qualitative (visualization of fused data vs ground truth) and quantitative metrics (RMSE, MAE) are employed for performance assessment. Initial results with simulated data demonstrate accurate distance estimation compared to individual sensors. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data from a Chevrolet Blazer. In summary, this research demonstrates that multi-sensor fusion with an EKF significantly improves distance estimation accuracy in dynamic environments. This is supported by comprehensive evaluation metrics, with validation transitioning from simulated to real-world data, paving the way for safer and more reliable autonomous vehicle control.Keywords: sensor fusion, EKF, MATLAB, MAVS, autonomous vehicle, ADAS
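A rough illustration of the fusion scheme described above is sketched below: a constant-velocity Kalman/extended Kalman filter fuses a noisy camera range with a less noisy radar range. The target trajectory, sampling interval, and noise variances are assumptions made for illustration only and are not the MAVS or Chevrolet Blazer configuration used in the study; with these linear measurement models the EKF update reduces to the standard Kalman form.

```python
import numpy as np

# Minimal sketch: fusing camera and radar range measurements with a
# constant-velocity Kalman/EKF model. All noise values are illustrative.
rng = np.random.default_rng(0)

dt = 0.1                       # sampling interval [s]
F = np.array([[1.0, dt],       # state transition for [range, range-rate]
              [0.0, 1.0]])
Q = 0.05 * np.array([[dt**3 / 3, dt**2 / 2],   # plant (process) noise
                     [dt**2 / 2, dt]])
H = np.array([[1.0, 0.0],      # camera measures range
              [1.0, 0.0]])     # radar measures range
R = np.diag([4.0, 0.25])       # camera noisier than radar (variances, m^2)

steps = 150
truth = np.array([100.0, -5.0])   # target closing at 5 m/s from 100 m
x = np.array([90.0, 0.0])         # deliberately poor initial estimate
P = np.diag([25.0, 4.0])

errors = []
for _ in range(steps):
    truth = F @ truth
    z = H @ truth + rng.normal(0.0, np.sqrt(np.diag(R)))  # camera + radar reading

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update (with linear measurement models the EKF reduces to this form)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

    errors.append(x[0] - truth[0])

print(f"fused range RMSE: {np.sqrt(np.mean(np.square(errors))):.3f} m")
```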
Procedia PDF Downloads 46
1071 GC-MS Data Integrated Chemometrics for the Authentication of Vegetable Oil Brands in Minna, Niger State, Nigeria
Authors: Rasaq Bolakale Salau, Maimuna Muhammad Abubakar, Jonathan Yisa, Muhammad Tauheed Bisiriyu, Jimoh Oladejo Tijani, Alexander Ifeanyi Ajai
Abstract:
Vegetable oils are widely consumed in Nigeria, which has led to the competitive manufacture of various oil brands and, in turn, to increasing tendencies for fraud, labelling misinformation, and other unwholesome practices. A total of thirty samples, comprising raw and corresponding branded samples of vegetable oils, were collected. The oils were extracted from raw groundnut, soya bean, and oil palm fruits. The GC-MS data were subjected to the chemometric techniques of PCA and HCA, using the SOLO 8.7 standalone chemometrics software developed by Eigenvector Research Incorporated and powered by PLS Toolbox. The GC-MS fingerprint provided a basis for discrimination, as it revealed four predominant but unevenly distributed fatty acids: hexadecanoic acid methyl ester (10.27-45.21% PA), 9,12-octadecadienoic acid methyl ester (10.9-45.94% PA), 9-octadecenoic acid methyl ester (18.75-45.65% PA), and eicosanoic acid methyl ester (1.19-6.29% PA). In the PCA model, two PCs were retained at a cumulative captured variance of 73.15%. The score plots indicated that palm oil brands are most closely aligned with raw palm oil. The PCA loading plot revealed the signature retention times between 4.0 and 6.0 needed for quality assurance and authentication of the oil samples; these correspond to aromatic hydrocarbon, alcohol, and aldehyde functional groups. The HCA dendrogram, modeled using Euclidean distance with Ward's method, indicated co-equivalent samples. HCA revealed the pair of raw palm oil and the palm oil brand in the closest neighbourhood (±1.62 %A difference) based on variance-weighted distance, showing the palm olein brand to be the most authentic. In conclusion, based on the GC-MS data with chemometrics, the authenticity of the branded samples is ranked as: palm oil > soya oil > groundnut oil. Keywords: vegetable oil, authenticity, chemometrics, PCA, HCA, GC-MS
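A minimal sketch of the PCA/HCA workflow described above, using scikit-learn and SciPy on a hypothetical peak-area matrix for the four reported fatty acid methyl esters; the study itself used the SOLO 8.7/PLS Toolbox software, and the sample values below are assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical peak-area matrix (% area): rows are oil samples, columns are the
# four dominant fatty acid methyl esters reported in the abstract.
rng = np.random.default_rng(1)
palm = rng.normal([42.0, 12.0, 40.0, 5.0], 1.5, size=(10, 4))
soya = rng.normal([11.0, 45.0, 22.0, 2.0], 1.5, size=(10, 4))
gnut = rng.normal([12.0, 30.0, 45.0, 4.0], 1.5, size=(10, 4))
X = np.vstack([palm, soya, gnut])

# PCA on autoscaled data, retaining two components as in the study
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("cumulative variance captured:", round(pca.explained_variance_ratio_.sum(), 3))

# HCA with Euclidean distance and Ward's method on the PC scores
Z = linkage(scores, method="ward", metric="euclidean")
clusters = fcluster(Z, t=3, criterion="maxclust")
print(clusters)   # samples of the same oil type should fall in the same cluster
```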
Procedia PDF Downloads 35
1070 Landmark Based Catch Trends Assessment of Gray Eel Catfish (Plotosus canius) at Mangrove Estuary in Bangladesh
Authors: Ahmad Rabby
Abstract:
The present study emphasizes the catch trends assessment of gray eel catfish (Plotosus canius), scrutinized on the basis of monthly length-frequency data collected from a mangrove estuary in Bangladesh from January 2017 to December 2018. A total of 1,298 specimens were collected; the total length (TL) and weight (W) of P. canius ranged from 13.3 cm to 87.4 cm and from 28 g to 5,200 g, respectively. The length-weight relationship was W = 0.006 L^2.95 with R² = 0.972 for both sexes. The von Bertalanffy growth function parameters were L∞ = 93.25 cm and K = 0.28 yr⁻¹, with a hypothetical age at zero length of t0 = 0.059 years and goodness of fit of Rn = 0.494. The growth performance indices for L∞ and W∞ were computed as Φ' = 3.386 and Φ = 1.84, respectively. The size at first sexual maturity was estimated as 48.8 cm TL for pooled sexes. Natural mortality was 0.51 yr⁻¹ at an average annual water surface temperature of 22 °C. Total instantaneous mortality was 1.24 yr⁻¹ at a 95% CI of 0.105-1.42 (r² = 0.986), while fishing mortality was 0.73 yr⁻¹ and the current exploitation ratio was 0.59. Recruitment continued throughout the year with one major peak of 17.20-17.96% during May-June. The Beverton-Holt yield-per-recruit model was analyzed with FiSAT-II: when tc was 1.43 yr, Fmax was estimated as 0.6 yr⁻¹ and F0.1 as 0.33 yr⁻¹. The current age at first capture was approximately 0.6 year; however, Fcurrent = 0.73 yr⁻¹, which is beyond F0.1, indicating that the current stock of P. canius in Bangladesh is overexploited. Keywords: Plotosus canius, mangrove estuary, asymptotic length, FiSAT-II
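The reported length-weight and von Bertalanffy relations can be evaluated directly from the parameter values quoted in the abstract; the sketch below is a minimal illustration, and the example length and age are assumptions.

```python
import numpy as np

# Parameters reported in the abstract for Plotosus canius
a, b = 0.006, 2.95                  # length-weight: W = a * L**b
L_inf, K, t0 = 93.25, 0.28, 0.059   # von Bertalanffy growth function

def weight_from_length(L_cm):
    """Length-weight relationship: W in g for total length in cm."""
    return a * L_cm ** b

def length_at_age(t_yr):
    """von Bertalanffy growth function: L(t) = L_inf * (1 - exp(-K (t - t0)))."""
    return L_inf * (1.0 - np.exp(-K * (t_yr - t0)))

# Growth performance index phi' = log10(K) + 2 * log10(L_inf)
phi_prime = np.log10(K) + 2.0 * np.log10(L_inf)

print(f"W(48.8 cm TL) ~ {weight_from_length(48.8):.0f} g")
print(f"L(3 yr) ~ {length_at_age(3.0):.1f} cm, phi' = {phi_prime:.3f}")
```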
Procedia PDF Downloads 151
1069 Predictors of School Dropout among High School Students
Authors: Osman Zorbaz, Selen Demirtas-Zorbaz, Ozlem Ulas
Abstract:
The factors that cause adolescents to drop out of school are numerous. One framework on school dropout focuses on the contextual factors around the adolescent, whereas another focuses on individual factors; it can be said that both are equally important. In this study, both the adolescent's individual factors (anti-social behaviors, academic success) and contextual factors (parent academic involvement, parent academic support, number of siblings, living with parents) were examined in relation to school dropout. The study sample consisted of 346 high school students in public schools in Ankara who continued their education in the 2015-2016 academic year. One hundred eighty-five of the students (53.5%) were girls and 161 (46.5%) were boys; 118 were in ninth grade, 122 in tenth grade, and 106 in eleventh grade. Multiple regression and one-way ANOVA were used as statistical methods. First, it was examined whether the data met the assumptions and conditions required for regression analysis. After checking the assumptions, regression analysis was conducted. Parent academic involvement, parent academic support, number of siblings, anti-social behaviors, and academic success were entered into the regression model, and it was found that parent academic involvement (t = -3.023, p < .01), anti-social behaviors (t = 7.038, p < .001), and academic success (t = -3.718, p < .001) predicted school dropout, whereas parent academic support (t = -1.403, p > .05) and number of siblings (t = -1.908, p > .05) did not. The model explained 30% of the variance (R = .557, R² = .300, F5,345 = 30.626, p < .001). In addition, the results showed no significant difference in high school students' dropout levels according to whether they lived with their parents (F2,345 = 1.183, p > .05). Results are discussed in light of the literature and suggestions are made. In conclusion, academic involvement, academic success, and anti-social behaviors should be considered important factors for preventing school dropout. Keywords: adolescents, anti-social behavior, parent academic involvement, parent academic support, school dropout
Procedia PDF Downloads 287
1068 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integrodifferential equation of radiative transfer is a complex process, more when the effect of participating medium and wavelength properties are taken into consideration. Although a generic formulation of such radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique, to solve radiative transfer problems in complicated geometries with arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation, and on the other hand, increases the computational cost. The participating media -generally a gas, such as CO₂, CO, and H₂O- present complex emission and absorption spectra. To model the emission/absorption accurately with random numbers requires a weighted sampling as different sections of the spectrum carries different importance. Importance sampling (IS) was implemented to sample random photon of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement to uniform random numbers is using deterministic, quasi-random sequences. Halton, Sobol, and Faure Low-Discrepancy Sequences are used in this study. They possess better space-filling performance than the uniform random number generator and gives rise to a low variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computation costs of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN), back-propagation model. The flux was calculated using the standard quasi PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical and PMC model with the Line by Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well a faster rate of convergence was observed in the case of the QMC method over the standard PMC method. However, the results obtained with the ANN method resulted in greater variance (around 25-28%) as compared to the other cases. There is a great scope of machine learning models to help in further reduction of computation cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried such that the concerned environment can be fully addressed to the ANN model. Better results can be achieved in this unexplored domain.Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
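A minimal sketch of the variance advantage of low-discrepancy sampling mentioned above, comparing pseudo-random and scrambled Sobol estimates of a simple one-dimensional integral standing in for a photon-bundle tally; the integrand, sample sizes, and seeds are assumptions, not the study's spectral radiative model.

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand standing in for a photon-bundle tally: estimate
# I = integral over [0, 1] of exp(-5 x) dx, whose exact value is (1 - e^-5)/5.
def integrand(x):
    return np.exp(-5.0 * x)

exact = (1.0 - np.exp(-5.0)) / 5.0
n, reps = 1024, 50
rng = np.random.default_rng(2)

# Pseudo-random (PMC-style) estimates
pmc = [integrand(rng.random(n)).mean() for _ in range(reps)]

# Scrambled Sobol (QMC-style) estimates
qmc_est = []
for seed in range(reps):
    sampler = qmc.Sobol(d=1, scramble=True, seed=seed)
    x = sampler.random(n).ravel()          # low-discrepancy points in [0, 1)
    qmc_est.append(integrand(x).mean())

print(f"exact = {exact:.6f}")
print(f"PMC: mean {np.mean(pmc):.6f}, std {np.std(pmc):.2e}")
print(f"QMC: mean {np.mean(qmc_est):.6f}, std {np.std(qmc_est):.2e}")
```

The much smaller spread of the Sobol estimates over repeated runs is the lower-variance, faster-converging behaviour the abstract attributes to the quasi-Monte Carlo estimators.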
Procedia PDF Downloads 225
1067 Role of Additional Food Resources in an Ecosystem with Two Discrete Delays
Authors: Ankit Kumar, Balram Dubey
Abstract:
This study proposes a three dimensional prey-predator model with additional food, provided to predator individuals, including gestation delay in predators and delay in supplying the additional food to predators. It is assumed that the interaction between prey and predator is followed by Holling type-II functional response. We discussed the steady states and their local and global asymptotic behavior for the non-delayed system. Hopf-bifurcation phenomenon with respect to different parameters has also been studied. We obtained a range of predator’s tendency factor on provided additional food, in which the periodic solutions occur in the system. We have shown that oscillations can be controlled from the system by increasing the tendency factor. Moreover, the existence of periodic solutions via Hopf-bifurcation is shown with respect to both the delays. Our analysis shows that both delays play an important role in governing the dynamics of the system. It changes the stability behavior into instability behavior. The direction and stability of Hopf-bifurcation are also investigated through the normal form theory and the center manifold theorem. Lastly, some numerical simulations and graphical illustrations have been carried out to validate our analytical findings.Keywords: additional food, gestation delay, Hopf-bifurcation, prey-predator
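A minimal sketch of a non-delayed prey-predator skeleton with a Holling type-II response and a constant additional-food term, integrated with SciPy; the functional forms, parameter values, and the tendency factor alpha are assumptions for illustration and do not reproduce the paper's delayed three-dimensional model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative non-delayed prey-predator system with Holling type-II response
# and a constant additional-food term for the predator. Parameters are assumed.
r, K_cap = 1.0, 10.0        # prey growth rate and carrying capacity
a, h = 1.0, 0.4             # attack rate and handling time
e, d = 0.6, 0.3             # conversion efficiency and predator death rate
alpha = 0.2                 # predator's tendency factor toward additional food
A = 1.0                     # quantity of additional food supplied

def rhs(t, state):
    x, y = state            # prey, predator
    denom = 1.0 + a * h * (x + alpha * A)
    holling = a * x / denom
    dx = r * x * (1.0 - x / K_cap) - holling * y
    dy = e * (holling + a * alpha * A / denom) * y - d * y
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 200.0), [2.0, 1.0], rtol=1e-8)
x_end, y_end = sol.y[:, -1]
print(f"state at t=200: prey ~ {x_end:.3f}, predator ~ {y_end:.3f}")
```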
Procedia PDF Downloads 131
1066 Impacts of Climate Change on Food Grain Yield and Its Variability across Seasons and Altitudes in Odisha
Authors: Dibakar Sahoo, Sridevi Gummadi
Abstract:
The focus of the study is to empirically analyse the climatic impacts on foodgrain yield and its variability across seasons and altitudes in Odisha, one of the most vulnerable states in India. The study uses the Just-Pope stochastic production function estimated by two-step feasible generalized least squares (FGLS): a mean equation and a variance equation. The study uses panel data on foodgrain yield, rainfall, and temperature for 13 districts over the period 1984-2013. Four seasons are considered: winter (December-February), summer (March-May), rainy (June-September), and autumn (October-November). The districts under consideration are categorized into three altitude regions: low (< 70 masl), middle (153-305 masl), and high (> 305 masl). The results show that an increase in the standard deviation of monthly rainfall during the rainy and autumn seasons has a significantly adverse impact on the mean yield of foodgrains in Odisha. Summer temperature has a beneficial effect, significantly increasing mean yield, as the summer season is associated with the harvesting stage of Rabi crops. The changing pattern of temperature increases the yield variability of foodgrains during the summer season, whereas it decreases yield variability during the rainy season. Moreover, the positive expected signs of the trend variable in both the mean and variance equations suggest that foodgrain yield and its variability increase with time. On the other hand, changes in mean levels of rainfall and temperature during different seasons have heterogeneous impacts, either harmful or beneficial, depending on the altitude. These findings imply that adaptation strategies should be tailor-made to minimize the adverse impacts of climate change and variability for sustainable development across seasons and altitudes in Odisha agriculture. Keywords: altitude, adaptation strategies, climate change, foodgrain
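A minimal sketch of the two-step Just-Pope/FGLS procedure described above, run on synthetic data: an OLS mean equation, a variance equation fitted to log squared residuals, and a weighted re-estimation. The variable names, data-generating process, and coefficients below are assumptions, not the Odisha district panel.

```python
import numpy as np
import statsmodels.api as sm

# Two-step Just-Pope estimation sketch on synthetic data.
rng = np.random.default_rng(3)
n = 300
rain = rng.normal(1000, 150, n)        # seasonal rainfall (mm), assumed
temp = rng.normal(27, 1.5, n)          # seasonal mean temperature (deg C), assumed
trend = np.arange(n) / n

X = sm.add_constant(np.column_stack([rain, temp, trend]))
# Yield with heteroskedasticity that grows with temperature (assumed DGP)
sigma = np.exp(-2.0 + 0.08 * (temp - 27))
y = 2.0 + 0.001 * rain - 0.05 * temp + 0.5 * trend + sigma * rng.standard_normal(n)

# Step 1: mean equation by OLS
ols = sm.OLS(y, X).fit()

# Step 2: variance equation -- regress log squared residuals on the same X
log_e2 = np.log(ols.resid ** 2)
var_eq = sm.OLS(log_e2, X).fit()
weights = 1.0 / np.exp(var_eq.fittedvalues)    # inverse predicted variances

# Feasible GLS re-estimate of the mean equation
fgls = sm.WLS(y, X, weights=weights).fit()
print(fgls.params)        # mean-equation effects of rainfall, temperature, trend
print(var_eq.params)      # variance-equation (yield-risk) effects
```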
Procedia PDF Downloads 242
1065 Determinants of Success of University Industry Collaboration in the Science Academic Units at Makerere University
Authors: Mukisa Simon Peter Turker, Etomaru Irene
Abstract:
This study examined factors determining the success of University-Industry Collaboration (UIC) in the science academic units (SAUs) at Makerere University. This was prompted by concerns about weak linkages between industry and the academic units at Makerere University. The study examined institutional, relational, output, and framework factors determining the success of UIC in the science academic units at Makerere University. The study adopted a predictive cross-sectional survey design. Data was collected using a questionnaire survey from 172 academic staff from the six SAUs at Makerere University. Stratified, proportionate, and simple random sampling techniques were used to select the samples. The study used descriptive statistics and linear multiple regression analysis to analyze data. The study findings reveal a coefficient of determination (R-square) of 0.403 at a significance level of 0.000, suggesting that UIC success was 40.3% at a standardized error of estimate of 0.60188. The strength of association between Institutional factors, Relational factors, Output factors, and Framework factors, taking into consideration all interactions among the study variables, was at 64% (R= 0.635). Institutional, Relational, Output and Framework factors accounted for 34% of the variance in the level of UIC success (adjusted R2 = 0.338). The remaining variance of 66% is explained by factors other than Institutional, Relational, Output, and Framework factors. The standardized coefficient statistics revealed that Relational factors (β = 0.454, t = 5.247, p = 0.000) and Framework factors (β = 0.311, t = 3.770, p = 0.000) are the only statistically significant determinants of the success of UIC in the SAU in Makerere University. Output factors (β = 0.082, t =1.096, p = 0.275) and Institutional factors β = 0.023, t = 0.292, p = 0.771) turned out to be statistically insignificant determinants of the success of UIC in the science academic units at Makerere University. The study concludes that Relational Factors and Framework Factors positively and significantly determine the success of UIC, but output factors and institutional factors are not statistically significant determinants of UIC in the SAUs at Makerere University. The study recommends strategies to consolidate Relational and Framework Factors to enhance UIC at Makerere University and further research on the effects of Institutional and Output factors on the success of UIC in universities.Keywords: university-industry collaboration, output factors, relational factors, framework factors, institutional factors
Procedia PDF Downloads 62
1064 Exploring the Relationships between Cyberbullying Perceptions and Facebook Attitudes of Turkish Students
Authors: Yavuz Erdoğan, Hidayet Çiftçi
Abstract:
Cyberbullying, a phenomenon among adolescents, is defined as actions that use information and communication technologies, such as social media, to support deliberate, repeated, and hostile behaviour by an individual or group. With advances in communication and information technology, cyberbullying has expanded its boundaries among students in schools. Thus, parents, psychologists, educators, and lawmakers must become aware of the potential risks of this phenomenon. In the light of these perspectives, this study aims to investigate the relationships between the cyberbullying perceptions and Facebook attitudes of Turkish students. A survey method was used, and the data were collected with the Cyberbullying Perception Scale, the Facebook Attitude Scale, and a Personal Information Form. The study was conducted during the 2014-2015 academic year with a total of 748 students, 493 male (65.9%) and 255 female (34.1%), from randomly selected high schools. Pearson correlation, multiple regression analysis, multivariate analysis of variance (MANOVA), and the Scheffe post hoc test were used in the analysis of the data. At the end of the study, the results displayed a negative correlation between Turkish students' Facebook attitudes and cyberbullying perception (r = -.210; p < 0.05). To identify the predictors of students' cyberbullying perception, multiple regression analysis was used. As a result, significant relations were detected between cyberbullying perception and the independent variables (F = 5.102; p < 0.05). The independent variables together explain 11.0% of the total variance in cyberbullying scores. The variables that significantly predict students' cyberbullying perception are Facebook attitudes (t = -5.875; p < 0.05) and gender (t = 3.035; p < 0.05). To examine the effects of the independent variables on students' Facebook attitudes and cyberbullying perception, MANOVA was conducted. The results of the MANOVA indicate that Facebook attitudes and cyberbullying perception differed significantly according to students' gender, age, educational attainment of the mother, educational attainment of the father, family income, and daily internet usage. Keywords: facebook, cyberbullying, attitude, internet usage
Procedia PDF Downloads 402
1063 Revealing the Urban Heat Island: Investigating its Spatial and Temporal Changes and Relationship with Air Quality
Authors: Aneesh Mathew, Arunab K. S., Atul Kumar Sharma
Abstract:
The uncontrolled rise in population has led to unplanned, swift, and unsustainable urban expansion, causing detrimental environmental impacts on both local and global ecosystems. This research delves into a comprehensive examination of the Urban Heat Island (UHI) phenomenon in Bengaluru and Hyderabad, India. It centers on the spatial and temporal distribution of UHI and its correlation with air pollutants. Conducted across summer and winter seasons from 2001 to 2021 in Bangalore and Hyderabad, this study discovered that UHI intensity varies seasonally, peaking in summer and decreasing in winter. The annual maximum UHI intensities range between 4.65 °C to 6.69 °C in Bengaluru and 5.74 °C to 6.82 °C in Hyderabad. Bengaluru particularly experiences notable fluctuations in average UHI intensity. Introducing the Urban Thermal Field Variance Index (UTFVI), the study indicates a consistent strong UHI effect in both cities, significantly impacting living conditions. Moreover, hotspot analysis demonstrates a rising trend in UHI-affected areas over the years in Bengaluru and Hyderabad. This research underscores the connection between air pollutant concentrations and land surface temperature (LST), highlighting the necessity of comprehending UHI dynamics for urban environmental management and public health. It contributes to a deeper understanding of UHI patterns in swiftly urbanizing areas, providing insights into the intricate relationship between urbanization, climate, and air quality. These findings serve as crucial guidance for policymakers, urban planners, and researchers, facilitating the development of innovative, sustainable strategies to mitigate the adverse impacts of uncontrolled expansion while promoting the well-being of local communities and the global environment.Keywords: urban heat island effect, land surface temperature, air pollution, urban thermal field variance index
Procedia PDF Downloads 82
1062 Implication of the Exchange-Correlation on Electromagnetic Wave Propagation in Single-Wall Carbon Nanotubes
Authors: A. Abdikian
Abstract:
Using the linearized quantum hydrodynamic model (QHD) and by considering the role of the quantum parameter (Bohm's potential) and the electron exchange-correlation potential in conjunction with Maxwell's equations, electromagnetic wave propagation in single-walled carbon nanotubes was studied, and the electronic excitations are described. By solving the mentioned equations with appropriate boundary conditions and by assuming low-frequency electromagnetic waves, two general dispersion relations are derived for the transverse magnetic (TM) and transverse electric (TE) modes, respectively. The dispersion relations are analyzed numerically, and it was found that the dependence of the dispersion curves on the exchange-correlation effects (which have been ignored in previous works) is limited at low frequencies. Moreover, it was found that the asymptotic behaviors of the TE and TM modes are similar in single-wall carbon nanotubes (SWCNTs). The results show that adding the electron exchange-correlation potential leads to new phenomena and extends the validity range of the QHD model. These results can be important in the study of collective phenomena in nanostructures. Keywords: transverse magnetic, transverse electric, quantum hydrodynamic model, electron exchange-correlation potential, single-wall carbon nanotubes
Procedia PDF Downloads 453
1061 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network
Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy
Abstract:
The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space, as well as trading off the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation which encodes some meaningful information into individual nodes in a network layer affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of input along with the capacity of localist representation for multiple instance selections affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence
Procedia PDF Downloads 129
1060 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria of the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is a continuous interest on procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability of variable selection. Similar to the ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the National Geospatial-Intelligence Agency (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The set of candidate predictors considered are Magnitude, Rrup, Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable have been compared. The bias-variance trade-off in the context of model selection is discussed.Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
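A minimal sketch of LASSO-based variable ranking in this setting, using scikit-learn on simulated records with the predictors named in the abstract (Magnitude, Rrup, Vs30); the data-generating coefficients are assumptions, not fitted NGA values.

```python
import numpy as np
from sklearn.linear_model import LassoCV, lasso_path
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the strong-motion records: ln(Sa) driven mainly by
# magnitude and distance, weakly by Vs30 (coefficients are assumptions).
rng = np.random.default_rng(4)
n = 600
M = rng.uniform(4.5, 7.5, n)
Rrup = rng.uniform(5.0, 200.0, n)
Vs30 = rng.uniform(180.0, 760.0, n)
ln_sa = 1.2 * M - 1.6 * np.log(Rrup) - 0.3 * np.log(Vs30) + rng.normal(0, 0.5, n)

X = StandardScaler().fit_transform(np.column_stack([M, np.log(Rrup), np.log(Vs30)]))
names = ["Magnitude", "ln(Rrup)", "ln(Vs30)"]

# Cross-validated LASSO: coefficients shrink, some exactly to zero
model = LassoCV(cv=5, random_state=0).fit(X, ln_sa)
for name, coef in sorted(zip(names, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:10s} {coef:+.3f}")

# The order in which coefficients become nonzero along the regularization path
# (from strong to weak penalty) gives the relative-importance ranking used for
# variable selection.
alphas, coefs, _ = lasso_path(X, ln_sa)
first_nonzero = (np.abs(coefs) > 1e-10).argmax(axis=1)
print("entry order (earlier = more important):",
      [names[i] for i in np.argsort(first_nonzero)])
```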
Procedia PDF Downloads 330
1059 The Effects of Some Organic Amendments on Sediment Yield, Splash Loss, and Runoff of Soils of Selected Parent Materials in Southeastern Nigeria
Authors: Leonard Chimaobi Agim, Charles Arinzechukwu Igwe, Emmanuel Uzoma Onweremadu, Gabreil Osuji
Abstract:
Soil erosion has been linked to stream sedimentation, ecosystem degradation, and loss of soil nutrients. A study was conducted to evaluate the effect of some organic amendments on the sediment yield, splash loss, and runoff of soils of selected parent materials in southeastern Nigeria. A total of 20 locations, five from each of four parent materials, namely Asu River Group (ARG), Bende Ameki Group (BAG), Coastal Plain Sand (CPS), and Falsebedded Sandstone (FBS), were used for the study. Collected soil samples were analyzed with standard methods for the initial soil properties. Rainfall simulation at an intensity of 190 mm hr⁻¹ was conducted for 30 minutes on the soil samples, both at the initial stage and after amendment, to obtain erosion parameters. The influence of parent material on sediment yield, splash loss, and runoff based on rainfall simulation was tested using one-way analysis of variance, while the influence of the organic materials and their combinations was tested with a factorial arrangement fitted into a randomized complete block design. The organic amendments included goat droppings (GD), poultry droppings (PD), municipal solid waste (MSW), and their combination (COA), each applied at four rates of 0, 10, 20, and 30 t ha⁻¹. Data were analyzed using analysis of variance suitable for a factorial experiment, and significant means were separated using LSD at the 5% probability level. Results showed significantly (p ≤ 0.05) lower values of sediment yield, splash loss, and runoff following amendment. For instance, organic amendment reduced sediment yield under wet and dry runs by 12.91% and 26.16% in Ishiagu, 40.76% and 45.67% in Bende, 16.17% and 50% in Obinze, and 22.80% and 42.35% in Umulolo, respectively. Goat droppings and the combination of amendments gave the best results in reducing sediment yield. Keywords: organic amendment, parent material, rainfall simulation, soil erosion
Procedia PDF Downloads 345
1058 Instability by Weak Precession of the Flow in a Rapidly Rotating Sphere
Authors: S. Kida
Abstract:
We consider the flow of an incompressible viscous fluid in a precessing sphere whose spin and precession axes are orthogonal to each other. The flow is characterized by two non-dimensional parameters, the Reynolds number Re and the Poincare number Po. For which values of (Re, Po) will the flow approach a steady state from an arbitrary initial condition? To answer this, we search for the instability boundary of the steady states in the whole (Re, Po) plane. Here, we focus on the rapidly rotating and weakly precessing limit, i.e., Re >> 1 and Po << 1. The steady flow was obtained by an asymptotic expansion for small ε=Po Re¹/² << 1. The flow exhibits nearly a solid-body rotation in the whole sphere except for a thin boundary layer which develops over the sphere surface. The thickness of this boundary layer is of O(δ), where δ=Re⁻¹/², except at two circular critical bands of thickness O(δ⁴/⁵) and width O(δ²/⁵), which are located about 60° away from the spin axis. We perform a linear stability analysis of the steady flow. We assume that the disturbances are localized in the critical bands and carry out an expansion analysis in terms of ε to derive the eigenvalue problem for the growth rate of the disturbance, which is solved numerically. As the solution, we obtain an asymptote of the stability boundary as Po=28.36Re⁻⁰.⁸. This agrees excellently with the corresponding laboratory experiments and numerical simulations. One of the most popular instability mechanisms so far is the parametric instability, which turns out, however, not to give the correct stability boundary. The present instability is different from the parametric instability. Keywords: boundary layer, critical band, instability, precessing sphere
Procedia PDF Downloads 155
1057 Globally Attractive Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type
Authors: Jorge Gonzalez Camus, Carlos Lizama
Abstract:
In this work, the existence of at least one globally attractive mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type involving the fractional derivative in the Caputo sense. An almost sectorial operator on a Banach space X and a kernel belonging to a large class appear in the equation, which covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work were the Hausdorff measure of noncompactness and fixed point theorems, specifically of Darbo type. The problem is first posed as a Cauchy problem involving a fractional derivative in the Caputo sense; the equivalent integral version is then formulated, and by defining a convenient functional, using the analytic integral resolvent operator, and verifying the hypotheses of the Darbo-type fixed point theorem, the existence of a mild solution to the initial problem is obtained. Furthermore, each mild solution is globally attractive, a property that is desired in the asymptotic behavior of such solutions. Keywords: attractive mild solutions, integral Volterra equations, neutral type equations, non-local in time equations
Procedia PDF Downloads 160
1056 Survey of Methods for Solutions of Spatial Covariance Structures and Their Limitations
Authors: Joseph Thomas Eghwerido, Julian I. Mbegbu
Abstract:
In modelling environmental processes, we apply multidisciplinary knowledge to explain, explore, and predict the Earth's response to natural and human-induced environmental changes. In spatio-temporal ecological and environmental studies, the spatial parameters of interest are therefore usually heterogeneous, which often negates the assumption of stationarity. Hence, the dispersion and transportation of atmospheric pollutants, landscape or topographic effects, and weather patterns depend on a good estimate of the spatial covariance. Although the generalized linear mixed model is linear in the expected-value parameters, its likelihood varies nonlinearly as a function of the covariance parameters. As a consequence, computing estimates for a linear mixed model requires the iterative solution of a system of simultaneous nonlinear equations. In order to predict the variables at unsampled locations, we need good estimates of the presently sampled variables. The geostatistical methods for solving this spatial problem assume covariance stationarity (locally defined covariance) that is uniform in space, which is apparently not valid, because spatial processes often exhibit nonstationary covariance and hence have globally defined covariance. We shall consider different existing methods for the solution of the spatial covariance of space-time processes at unsampled locations. This covariance changes with location for multiple time sets and has some asymptotic properties. Keywords: parametric, nonstationary, Kernel, Kriging
Procedia PDF Downloads 256
1055 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction
Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter
Abstract:
Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows the use of real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyography (EMG) artifacts (i.e., jaw clenching, teeth squeezing and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG and have been focusing on offline processing or handling one single type of EEG artifact. A software-only real-time method for correcting multiple types of EEG artifacts of high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time EEG artifact correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported in MATLAB with the lab streaming layer interface allowing buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected by using channel interpolation. Real-time independent component analysis (ICA) was applied for correcting EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) of high-density EEG while preserving brain neuronal activity information. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) 64-channel data is 300 – 700 ms depending on the convergence of ICA and the type and intensity of the artifact. Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real-time.Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA
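A minimal offline sketch of the two stages described above (channel-variance thresholding with interpolation, then ICA-based removal of an EOG-like component), applied to a synthetic multichannel segment with scikit-learn's FastICA. The channel count, artifact shapes, threshold rule, and the simple neighbour-average interpolation are assumptions; the study's actual pipeline runs in MATLAB via the lab streaming layer with real-time ICA.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic 8-channel, 2 s segment at 1,000 Hz contaminated by an EOG-like blink.
rng = np.random.default_rng(5)
fs, n_ch, n_s = 1000, 8, 2000
t = np.arange(n_s) / fs
eeg = 10e-6 * rng.standard_normal((n_ch, n_s))             # background EEG
blink = 80e-6 * np.exp(-((t - 1.0) ** 2) / 0.01)           # EOG-like artifact
eeg += np.outer(np.linspace(1.0, 0.2, n_ch), blink)        # frontal-to-back gradient
eeg[3] += 50e-6 * rng.standard_normal(n_s)                 # EMG-like noisy channel

# 1) Channel variance with an adaptive (median + MAD) threshold flags bad channels,
#    which are then replaced by a crude neighbour average (interpolation stand-in).
var = eeg.var(axis=1)
thresh = np.median(var) + 3 * np.median(np.abs(var - np.median(var)))
bad = np.where(var > thresh)[0]
for ch in bad:
    neighbours = [c for c in (ch - 1, ch + 1) if 0 <= c < n_ch and c not in bad]
    eeg[ch] = eeg[neighbours].mean(axis=0)

# 2) ICA: zero out the component most correlated with the EOG template, then
#    reconstruct the cleaned signals.
ica = FastICA(n_components=n_ch, random_state=0)
sources = ica.fit_transform(eeg.T)                         # (n_samples, n_components)
eog_idx = np.argmax([abs(np.corrcoef(s, blink)[0, 1]) for s in sources.T])
sources[:, eog_idx] = 0.0
cleaned = ica.inverse_transform(sources).T
print("flagged channels:", bad, "removed IC:", eog_idx)
```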
Procedia PDF Downloads 181
1054 THRAP2 Gene Identified as a Candidate Susceptibility Gene of Thyroid Autoimmune Diseases Pedigree in Tunisian Population
Authors: Ghazi Chabchoub, Mouna Feki, Mohamed Abid, Hammadi Ayadi
Abstract:
Autoimmune thyroid diseases (AITDs), including Graves’ disease (GD) and Hashimoto’s thyroiditis (HT), are inherited as complex traits. Genetic factors associated with AITDs have been tentatively identified by candidate gene and genome scanning approaches. We analysed three intragenic microsatellite markers in the thyroid hormone receptor associated protein 2 gene (THRAP2), mapped near D12S79 marker, which have a potential role in immune function and inflammation [THRAP2-1(TG)n, THRAP2-2 (AC)n and THRAP2-3 (AC)n]. Our study population concerned 12 patients affected with AITDs belonging to a multiplex Tunisian family with high prevalence of AITDs. Fluorescent genotyping was carried out on ABI 3100 sequencers (Applied Biosystems USA) with the use of GENESCAN for semi-automated fragment sizing and GENOTYPER peak-calling software. Statistical analysis was performed using the non parametric Lod score (NPL) by Merlin software. Merlin outputs non-parametric NPLall (Z) and LOD scores and their corresponding asymptotic P values. The analysis for three intragenic markers in the THRAP2 gene revealed strong evidence for linkage (NPL=3.68, P=0.00012). Our results suggested the possible role of THRAP2 gene in AITDs susceptibility in this family.Keywords: autoimmunity, autoimmune disease, genetic, linkage analysis
Procedia PDF Downloads 126
1053 Characterising Indigenous Chicken (Gallus gallus domesticus) Ecotypes of Tigray, Ethiopia: A Combined Approach Using Ecological Niche Modelling and Phenotypic Distribution Modelling
Authors: Gebreslassie Gebru, Gurja Belay, Minister Birhanie, Mulalem Zenebe, Tadelle Dessie, Adriana Vallejo-Trujillo, Olivier Hanotte
Abstract:
Livestock must adapt to changing environmental conditions, which can result in either phenotypic plasticity or irreversible phenotypic change. In this study, we combine Ecological Niche Modelling (ENM) and Phenotypic Distribution Modelling (PDM) to provide a comprehensive framework for understanding the ecological and phenotypic characteristics of indigenous chicken (Gallus gallus domesticus) ecotypes. This approach helped us to classify these ecotypes, differentiate their phenotypic traits, and identify associations between environmental variables and adaptive traits. We measured 297 adult indigenous chickens from various agro-ecologies, including 208 females and 89 males. A subset of the 22 measured traits was selected using stepwise selection, resulting in seven traits for each sex. Using ENM, we identified four agro-ecologies potentially harbouring distinct phenotypes of indigenous Tigray chickens. However, PDM classified these chickens into three phenotypical ecotypes. Chickens grouped in ecotype-1 and ecotype-3 exhibited superior adaptive traits compared to those in ecotype-2, with significant variance observed. This high variance suggests a broader range of trait expression within these ecotypes, indicating greater adaptation capacity and potentially more diverse genetic characteristics. Several environmental variables, such as soil clay content, forest cover, and mean temperature of the wettest quarter, were strongly associated with most phenotypic traits. This suggests that these environmental factors play a role in shaping the observed phenotypic variations. By integrating ENM and PDM, this study enhances our understanding of indigenous chickens' ecological and phenotypic diversity. It also provides valuable insights into their conservation and management in response to environmental changes.Keywords: adaptive traits, agro-ecology, appendage, climate, environment, imagej, morphology, phenotypic variation
Procedia PDF Downloads 37
1052 Implementation of Statistical Parameters to Form Entropic Mathematical Models
Authors: Gurcharan Singh Buttar
Abstract:
It has been discovered that although statistics and information theory are independent in nature, they can be combined to create applications in multidisciplinary mathematics. This is because, in the field of statistics, statistical parameters (measures) play an essential role with reference to the population (distribution) under investigation, while information measures are crucial in the study of the ambiguity, assortment, and unpredictability present in an array of phenomena. The following communication is a link between the two, and it is demonstrated that the well-known conventional statistical measures can be used as measures of information. Keywords: probability distribution, entropy, concavity, symmetry, variance, central tendency
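A minimal sketch of the kind of link described above: for a discretized normal distribution, a conventional statistical measure (variance) and the Shannon entropy increase together as the spread grows. The distributions and grid below are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.stats import norm, entropy

# Compare variance (a statistical measure) with Shannon entropy (an information
# measure) for discretized normal distributions of increasing spread.
x = np.linspace(-10, 10, 2001)
for sigma in (0.5, 1.0, 2.0, 4.0):
    p = norm.pdf(x, scale=sigma)
    p /= p.sum()                       # discretize to a probability mass function
    var = np.sum(p * x**2) - np.sum(p * x) ** 2
    H = entropy(p, base=2)             # Shannon entropy in bits
    print(f"sigma={sigma:>3}:  variance={var:6.2f}  entropy={H:5.2f} bits")
```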
Procedia PDF Downloads 157
1051 Efficient Principal Components Estimation of Large Factor Models
Authors: Rachida Ouysse
Abstract:
This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient as the errors are treated to be homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and thus is valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and the generalized PC type estimators, especially for panels with N almost as large as T.Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting
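A minimal sketch of principal-components factor extraction from a simulated approximate factor model, with a simple shrinkage of the covariance standing in for regularization; the shrinkage form used here is an assumption and does not reproduce the paper's CnPC constraint.

```python
import numpy as np

# Simulate an approximate factor model: N = 120 series, T = 80 observations,
# r = 2 common factors, heteroskedastic idiosyncratic errors.
rng = np.random.default_rng(6)
N, T, r = 120, 80, 2
F = rng.standard_normal((T, r))            # common factors
L = rng.standard_normal((N, r))            # loadings
E = rng.standard_normal((T, N)) @ np.diag(rng.uniform(0.5, 1.5, N))
X = F @ L.T + E                            # observed panel (T x N), here N > T

def pc_factors(X, r, shrink=0.0):
    """Principal-components factor estimates from a (possibly shrunk) covariance."""
    T = X.shape[0]
    S = X @ X.T / T                        # T x T second-moment matrix (valid for N >= T)
    S = (1 - shrink) * S + shrink * (np.trace(S) / T) * np.eye(T)   # simple shrinkage
    vals, vecs = np.linalg.eigh(S)
    return np.sqrt(T) * vecs[:, -r:][:, ::-1]    # top-r eigenvectors, scaled

F_pc = pc_factors(X, r)                    # plain PC
F_reg = pc_factors(X, r, shrink=0.1)       # regularized covariance, CnPC-flavoured

# Compare spans of the true and estimated factor spaces via canonical correlations
def canon_corr(A, B):
    qa, _ = np.linalg.qr(A)
    qb, _ = np.linalg.qr(B)
    return np.linalg.svd(qa.T @ qb, compute_uv=False)

print("PC  canonical correlations:", np.round(canon_corr(F, F_pc), 3))
print("Reg canonical correlations:", np.round(canon_corr(F, F_reg), 3))
```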
Procedia PDF Downloads 150
1050 Modeling Corruption Dynamics Within Bono and Ahafo Police Service in Ghana
Authors: Adam Ahmed Hosney
Abstract:
The existence of a culture of corruption within an institution, such as the police, can be a sign of failure from various angles. There is a general perception among Ghanaians that the most corrupt institution is the police service. The purpose of this study is to formulate and analyze a nonlinear mathematical model that treats corruption as an epidemic within the Ghana police service. The study derived the basic reproduction number governing corruption extinction and corruption persistence. The threshold conditions for all kinds of equilibrium points are obtained using linearization and Lyapunov functional methods, and they demonstrate local asymptotic stability for both the corruption-endemic and corruption-free equilibrium states. The model was analyzed qualitatively and its solution derived; the model is positively invariant and attractive, so the feasible region exhibits positive invariance and it is adequate to consider the dynamics of the model there. To illustrate the solution, graphical results were presented and discussed. Results show that corruption will die out within the police service if the government shows no tolerance for those involved in corrupt practices. The findings indicate that leaders should be trustworthy and demonstrate a complete and viable commitment to addressing corruption, that mass education should be provided to all citizens as a priority, and that religious leaders should be engaged in the fight against corruption, since most Ghanaians are religious and trust their leaders. Keywords: mathematical model, differential equation, dynamical system, simulation
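A minimal sketch of a susceptible-corrupt-reformed compartmental model with a basic reproduction number, integrated with SciPy; the compartment structure, rates, and initial conditions are assumptions for illustration, not the paper's exact model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative susceptible-corrupt-reformed (S-C-R) model of corruption spread.
beta = 0.4      # rate at which corrupt officers influence susceptible ones
gamma = 0.5     # rate of removal (sanction / dismissal / reform)
delta = 0.05    # rate at which reformed officers relapse to susceptibility

R0 = beta / gamma
print(f"basic reproduction number R0 = {R0:.2f}  "
      f"({'corruption dies out' if R0 < 1 else 'corruption persists'})")

def rhs(t, y):
    S, C, R = y
    dS = -beta * S * C + delta * R
    dC = beta * S * C - gamma * C
    dR = gamma * C - delta * R
    return [dS, dC, dR]

sol = solve_ivp(rhs, (0, 100), [0.95, 0.05, 0.0], rtol=1e-8)
print("corrupt fraction at t=100:", round(sol.y[1, -1], 4))
```

With the assumed sanction rate exceeding the influence rate, R0 is below one and the corrupt fraction decays toward the corruption-free equilibrium, mirroring the abstract's "no tolerance" conclusion.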
Procedia PDF Downloads 29
1049 Patients' Quality of Life and Caregivers' Burden of Parkinson's Disease
Authors: Kingston Rajiah, Mari Kannan Maharajan, Si Jen Yeen, Sara Lew
Abstract:
Parkinson’s disease (PD) is a progressive neurodegenerative disorder with evolving layers of complexity. Both the motor and non-motor symptoms of PD may affect patients’ quality of life (QoL). Life expectancy for an individual with Parkinson’s disease depends on the level of care the individual has access to, which can have a direct impact on length of life. Therefore, improvement of QoL is a significant part of therapeutic plans. Patients with PD, especially those in advanced stages, are in great need of assistance, mostly from their family members or caregivers, in terms of medical, emotional, and social support. The role of a caregiver becomes increasingly important with the progression of PD, the severity of motor impairment, and the increasing age of the patient. The nature and symptoms associated with PD can place significant stress on caregivers. As the prevalence of PD is estimated to more than double by 2030, it is important to recognize and alleviate the burden experienced by caregivers. This study focused on the impact of clinical features on the QoL of PD patients and on the burden of their caregivers. The study included PD patients along with their caregivers and was undertaken at the Malaysian Parkinson's Disease Association from June 2016 to November 2016. Clinical features of PD patients were assessed using the Movement Disorder Society revised Unified Parkinson Disease Rating Scale (MDS-UPDRS); the Hoehn and Yahr Staging of Parkinson's Disease was used to assess severity, and the Parkinson's disease activities of daily living scale was used to assess disability. QoL of PD patients was measured using the Parkinson's Disease Questionnaire-39 (PDQ-39). The revised version of the Zarit Burden Interview assessed caregiver burden. At least one of the clinical features affected PD patients’ QoL, and at least one of the QoL domains affected the caregivers’ burden. The clinical features ‘saliva and drooling’ and ‘dyskinesia’ explained 29% of the variance in the QoL of PD patients. The QoL domains ‘stigma’ and ‘emotional wellbeing’ explained 48.6% of the variance in caregivers’ burden. Clinical features such as saliva and drooling and dyskinesia affected the QoL of PD patients, and the PD patients’ QoL domains ‘stigma’ and ‘emotional well-being’ influenced their caregivers’ burden. Keywords: carers, quality of life, clinical features, Malaysia
Procedia PDF Downloads 246
1048 Glycemic Control on Self-Efficacy and Self-Care Behaviors among Omani Adults with Type 2 Diabetes
Authors: Melba Sheila D'Souza, Anandhi Amirtharaj, Shreedevi Balachandran
Abstract:
Background: Type 2 diabetes has a significant impact on individuals’ health and well-being. Glycemic control may influence self-efficacy and self-care behaviors and reduce the risk of complications among adults with type 2 diabetes. Type 2 diabetes carries substantial morbidity and mortality, and 60% of adults demonstrate poor self-care. Glycemic control is associated with reported self-efficacy and self-care behavior, and adults with type 2 diabetes who have less information are less likely to engage in diabetes self-care. Aim: To examine the relationship of glycemic control, demographic factors, and clinical factors with self-efficacy and self-care behaviors among Omani adults with type 2 diabetes. Methods: A correlational, descriptive study was used. Omani adults with type 2 diabetes (n = 140) were recruited from a public hospital in Oman. The data were collected during January-March 2015. Ethical approval was given by the college research and ethics committee, College of Nursing, Sultan Qaboos University, and the hospital. Data were collected on self-efficacy, self-care behaviors, and glycemic control. The study was approved by the Institution Ethics and Research Committee. Bivariate and multivariate analyses were conducted. Results: Most adults had a fasting blood glucose > 7.2 mmol/L (90.7%), with the majority demonstrating uncontrolled or poor HbA1c of > 8% (65%). Age, duration of diabetes, medication, HbA1c, and prevention of activities of living explained 20.6% of the variance in self-care behavior and 31.3% of the variance in self-efficacy. Adults with type 2 diabetes with poor glycemic control were more likely to have poor self-efficacy and poor self-care behaviors. Conclusion: This study confirms that the self-efficacy model predicts self-efficacy and self-care behavior. A higher understanding of diabetes, prevention of normal daily activities, a higher ability to fit diabetes into life in a positive manner, and good patient-physician communication were significantly associated with self-efficacy and self-care behaviors. Hence, glycemic control has a strong effect on improving self-care behaviors, such as diet, exercise, medication, and foot care, and self-efficacy among adults with type 2 diabetes. Implications: Based on these findings, individualized self-care management is recommended to improve self-efficacy and self-care behaviors among adults with type 2 diabetes. Keywords: self-efficacy, self-care behaviors, self-care management, glycemic control, type 2 diabetes, nurse
Procedia PDF Downloads 411
1047 Synergism in the Inquiry Lab: An Analysis of Time Targets and Achievement
Authors: John M. Basey, Clinton D. Francis, Maxwell B. Joseph
Abstract:
After gathering data from experimental procedures, inquiry-oriented science labs often allow students the freedom to stay and complete the write-up in class or to leave lab early and complete the write-up later. Teachers must decide whether to allow students to self-regulate this time. Student interviews have indicated four time-target strategies that may influence how students utilize this time: grade-target-A, grade-target-C, time-limited, and proficiency. The hypothesis tested was that variability in class composition relative to the four time-target strategies has an impact on when students leave class, which in turn may influence their overall learning as exemplified by grades. Students were divided into the four indicated groups with a survey. Class composition and the GTA teaching the class had significant impacts on how long students stayed in class, with class composition having the greater impact. A factor analysis identified two factors. Factor 1 contrasted the percentage of grade-target students with the percentage of time-limited/proficiency students and explained 43% of the variance. Factor 2 contrasted the percentage of grade-target-A/proficiency students with the percentage of grade-target-C students and explained 33% of the variance. Students who stayed longer received significantly higher grades (P = 0.008), with no significant relationship between grade and Factor 1 or Factor 2 (P > 0.05). The time students stayed in class was significantly positively related to Factor 1 (P = 0.006) and significantly negatively related to Factor 2 (P = 0.008). These results support the hypothesis and indicate that teachers may want to know the composition of student time-target strategies before deciding how to have students allocate study time at the end of inquiry-oriented labs. According to these results, ideal classes for self-regulation have either a high proportion of proficiency and time-limited students and a low proportion of grade-target students, or a high proportion of grade-target-A and proficiency students and a low proportion of grade-target-C students; non-ideal classes for self-regulation have the inverse proportions. Keywords: grades, inquiry lab design, synergism in student motivation, class composition
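To make the factor-analysis step concrete, the following is a minimal sketch of a two-factor analysis of class-composition data, assuming hypothetical proportions of the four time-target groups per lab section; the data, variable order, and variance figures it produces are illustrative, not the study's.

```python
# Illustrative factor-analysis sketch with synthetic class-composition data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Rows = lab sections; columns = proportions of grade-target-A, grade-target-C,
# time-limited, and proficiency students (hypothetical values).
X = rng.dirichlet(alpha=[2, 2, 2, 2], size=40)
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize before factoring

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)                   # per-section scores on the two factors
loadings = fa.components_                      # loading of each group on each factor

# Proportion of total (standardized) variance attributable to each factor.
explained = (loadings ** 2).sum(axis=1) / X.shape[1]
print(np.round(loadings, 2))
print("approx. variance share per factor:", np.round(explained, 2))
```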
Procedia PDF Downloads 131
1046 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming
Authors: William Chung
Abstract:
This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately, exchanging the dual prices of the master problem and the proposals of the subproblem, until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP depends strongly on the number of decomposition steps: if the number of steps can be greatly reduced, performance improves significantly. One way to reduce the number of decomposition steps is to increase the number of proposals the subproblem sends to the master problem in each step. To do so, we propose adding a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved, providing multiple proposals to the master problem, and the number of decomposition steps can be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the original LP subproblem is faster than solving the multiple nonlinear subproblems. Hence, using multiple subproblems in DW-LP involves a tradeoff between the number of approximate-LP subproblems formed and the number of decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given. Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems
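As background for the baseline this work modifies, the following is a hedged sketch of standard single-subproblem Dantzig-Wolfe column generation for a small LP with one coupling constraint, using SciPy's HiGHS solver; the problem data, tolerance, and seeding of the master with two vertices are illustrative assumptions, not the paper's setup.

```python
# Sketch of classic Dantzig-Wolfe column generation for
#   min c^T x  s.t.  A x = b (coupling),  x in X = {x : D x <= d, x >= 0}.
# Problem data are illustrative only.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([4.0])    # coupling constraint
D = np.eye(3); d = np.array([3.0, 3.0, 3.0])            # bounded subproblem polytope

# Seed the master with two vertices of X so it is feasible from the start.
points = [np.zeros(3), np.full(3, 3.0)]

for step in range(50):
    # Restricted master over convex combinations of the known extreme points.
    costs = np.array([c @ p for p in points])
    Aeq = np.vstack([np.column_stack([A @ p for p in points]),
                     np.ones((1, len(points)))])
    beq = np.concatenate([b, [1.0]])
    master = linprog(costs, A_eq=Aeq, b_eq=beq, method="highs")
    pi, mu = master.eqlin.marginals[:-1], master.eqlin.marginals[-1]

    # Pricing subproblem: the master's dual prices modify the subproblem objective.
    sub = linprog(c - A.T @ pi, A_ub=D, b_ub=d, method="highs")
    if sub.fun - mu > -1e-8:        # no proposal with negative reduced cost: optimal
        break
    points.append(sub.x)            # send the new proposal (column) to the master

x_opt = sum(l * p for l, p in zip(master.x, points))
print("steps:", step, "objective:", round(master.fun, 4), "x:", np.round(x_opt, 4))
```

The long tail of near-optimal solutions shows up in this loop as many iterations that add columns improving the master objective only slightly; generating several proposals per step, as proposed in the paper, aims to shorten that tail.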
Procedia PDF Downloads 167
1045 The Study of the Absorption and Translocation of Chromium by Lygeum spartum in the Mining Region of Djebel Hamimat and Soil-Plant Interaction
Authors: H. Khomri, A. Bentellis
Abstract:
For over a century, extraction and mineral-processing activities have dispersed toxic metals and contaminated areas far larger than those occupied by natural outcrops, and new types of metalliferous habitats have thus appeared. One species, Lygeum spartum, attracted our attention because, apart from its valuable role in combating desertification, it is apparently able to exclude antimony and possibly other metals. This species, whose green leaf blades are used as cattle feed, would be a good candidate for the phytoremediation of mineral soils. The study of the absorption and translocation of chromium by Lygeum spartum in the mining region of Djebel Hamimat, and of the soil-plant interaction, revealed that the soils on which this species lives in this region are alkaline, mostly calcareous, fine-textured, and in a minority of cases saline. They have normal levels of organic matter and are moderately rich in nitrogen. Their total chromium content reaches a maximum of 66.80 mg kg^(-1), and soluble chromium is entirely absent. Analysis of variance of the differences between bare soils and soils supporting Lygeum spartum showed a significant difference only for silt and organic matter; for the other variables analyzed, the difference is not significant. Thus, the plant appears to affect only the silt and organic-matter contents of the soils. Multiple regression of the chromium content of the roots on all the soil variables studied showed that, among the variables included in the model, only electrical conductivity and clay contribute to explaining the chromium content of the roots. Regression of the chromium content of the aerial parts on all the soil variables studied showed that only electrical conductivity and the chromium content of the roots contribute to explaining the chromium content of the aerial parts. Keywords: absorption, translocation, analysis of variance, chrome, Lygeum spartum, multiple regression, soil variables
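For readers unfamiliar with the regression setup, the following is a small sketch of regressing root chromium content on soil variables with ordinary least squares; the data frame, variable names, and coefficients are hypothetical stand-ins, not the study's measurements.

```python
# Hypothetical illustration of the multiple-regression step; not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
soil = pd.DataFrame({
    "electrical_conductivity": rng.normal(2.0, 0.5, n),
    "clay_pct":                rng.uniform(10, 40, n),
    "organic_matter_pct":      rng.uniform(1, 5, n),
    "total_cr_soil":           rng.uniform(5, 67, n),
})
# Response loosely tied to conductivity and clay, mirroring the reported predictors.
root_cr = (0.8 * soil["electrical_conductivity"] + 0.05 * soil["clay_pct"]
           + rng.normal(0, 0.3, n))

X = sm.add_constant(soil)                 # predictors plus intercept
fit = sm.OLS(root_cr, X).fit()
print(fit.summary())                      # coefficients, p-values, R^2 (variance explained)
```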
Procedia PDF Downloads 270
1044 Portfolio Risk Management Using Quantum Annealing
Authors: Thomas Doutre, Emmanuel De Meric De Bellefon
Abstract:
This paper describes the application of the local-search metaheuristic quantum annealing to portfolio optimization. Heuristic techniques are particularly handy when Markowitz's classical mean-variance problem is enriched with additional realistic constraints. Once tailored to the problem, computational experiments on real collected data have shown the superiority of quantum annealing over simulated annealing for this constrained optimization problem, taking advantage of quantum effects such as tunnelling. Keywords: optimization, portfolio risk management, quantum annealing, metaheuristic
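The quantum hardware itself cannot be reproduced here, but the problem formulation can be illustrated with its classical counterpart: a hedged sketch of simulated annealing applied to a cardinality-constrained mean-variance selection problem on synthetic data. The asset count, equal-weighting rule, cooling schedule, and risk-aversion value are illustrative assumptions, not the paper's setup.

```python
# Sketch: classical simulated annealing for a cardinality-constrained
# Markowitz mean-variance selection problem (synthetic data, illustrative only).
import numpy as np

rng = np.random.default_rng(42)
n_assets, k = 20, 5                               # pick exactly k assets, equally weighted
mu = rng.normal(0.08, 0.03, n_assets)             # hypothetical expected returns
A = rng.normal(0, 0.02, (n_assets, n_assets))
cov = A @ A.T + 0.01 * np.eye(n_assets)           # positive-definite covariance
risk_aversion = 3.0

def cost(sel):
    w = sel / k                                   # equal weights on selected assets
    return risk_aversion * w @ cov @ w - mu @ w   # variance penalty minus return

# Start from a random feasible selection of k assets.
sel = np.zeros(n_assets); sel[rng.choice(n_assets, k, replace=False)] = 1
best, best_cost = sel.copy(), cost(sel)
T = 1.0
for step in range(20000):
    # Propose a swap that keeps the cardinality constraint satisfied.
    new = sel.copy()
    i = rng.choice(np.flatnonzero(sel == 1)); j = rng.choice(np.flatnonzero(sel == 0))
    new[i], new[j] = 0, 1
    delta = cost(new) - cost(sel)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        sel = new
        if cost(sel) < best_cost:
            best, best_cost = sel.copy(), cost(sel)
    T *= 0.9997                                   # geometric cooling schedule
print("selected assets:", np.flatnonzero(best), "cost:", round(best_cost, 4))
```

A quantum annealer explores the same discrete energy landscape but, as the abstract notes, can escape local minima by tunnelling through barriers rather than climbing over them.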
Procedia PDF Downloads 384