Search results for: prediction modelling
127 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being flammable, it has relatively high fire resistance. Everyday engineering practice around the world is still based on the design of timber structures for standard fire exposure, while modern principles of performance-based design enable the use of advanced, non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e. the residual cross-section of uncharred timber reduced additionally by the so-called zero strength layer. For standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, namely 7 mm, while for non-standard parametric fires no comments or recommendations on the zero strength layer are given. Designers therefore often adopt the 7 mm rule for parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and a new thickness of the zero strength layer for parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
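For orientation, a minimal sketch of the reduced cross-section relations discussed above, in the notation of EN 1995-1-2; the symbols and values quoted here are the standard code provisions for standard fire exposure, not the revised values proposed by this study:

```latex
% Effective cross-section method (EN 1995-1-2): the load-bearing section is the
% residual section reduced by the notional char depth plus the zero strength layer.
\begin{align}
  d_{\mathrm{ef}} &= d_{\mathrm{char},n} + k_0 \, d_0, &
  d_{\mathrm{char},n} &= \beta_n \, t,
\end{align}
% where d_0 = 7 mm is the fixed zero strength layer for standard fire exposure,
% k_0 increases linearly from 0 to 1 over the first 20 min, and beta_n is the
% notional charring rate. The study investigates replacements for d_0 and beta_n
% under parametric fire exposure, for which the code currently gives no values.
```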
Procedia PDF Downloads 169
126 Understanding the Factors Influencing Urban Ethiopian Consumers’ Consumption Intention of Spirulina-Supplemented Bread
Authors: Adino Andaregie, Isao Takagi, Hirohisa Shimura, Mitsuko Chikasada, Shinjiro Sato, Solomon Addisu
Abstract:
Context: The prevalence of undernutrition in developing countries like Ethiopia has become a significant issue. In this regard, finding alternative nutritional supplements seems to be a practical solution. Spirulina, a highly nutritious microalgae, offers a valuable option as it is a rich source of various essential nutrients. The study aimed to establish the factors affecting urban Ethiopian consumers' consumption intention of Spirulina-fortified bread. Research Aim: The primary purpose of this research is to identify the behavioral and socioeconomic factors impacting the intention of urban Ethiopian consumers to eat Spirulina-fortified bread. Methodology: The research utilized a quantitative approach wherein a structured questionnaire was created and distributed among 361 urban consumers via an online platform. The theory of planned behavior (TPB) was used as a conceptual framework, and confirmatory factor analysis (CFA) and structural equation modelling (SEM) were employed for data analysis. Findings: The study results revealed that attitude towards the supplement, subjective norms, and perceived behavioral control were the critical factors influencing the consumption intention of Spirulina-fortified bread. Moreover, age, physical exercise, and prior knowledge of Spirulina as a food ingredient were also found to have a significant influence. Theoretical Importance: The study contributes towards the understanding of consumer behavior and factors affecting the purchase intentions of Spirulina-fortified bread in urban Ethiopia. The use of TPB as a theoretical framework adds a vital aspect to the study as it provides helpful insights into the factors affecting intentions towards this functional food. Data Collection and Analysis Procedures: The data collection process involved the creation of a structured questionnaire, which was distributed online to urban Ethiopian consumers. Once data was collected, CFA and SEM were utilized to analyze the data and identify the factors impacting consumer behavior. Questions Addressed: The study aimed to address the following questions: (1) What are the behavioral and socioeconomic factors impacting urban Ethiopian consumers' consumption intention of Spirulina-fortified bread? (2) To what extent do attitude towards the supplement, subjective norms, and perceived behavioral control affect the purchase intention of Spirulina-fortified bread? (3) What role does age, education, income, physical exercise, and prior knowledge of Spirulina as a food ingredient play in the purchase intention of Spirulina-fortified bread among urban Ethiopian consumers? Conclusion: The study concludes that attitude towards the supplement, subjective norms, and perceived behavioral control are significant factors influencing urban Ethiopian consumers’ consumption intention of Spirulina-fortified bread. Moreover, age, education, income, physical exercise, and prior knowledge of Spirulina as a food ingredient also play a significant role in determining purchase intentions. The findings provide valuable insights for developing effective marketing strategies for Spirulina-fortified functional foods targeted at different consumer segments.Keywords: spirulina, consumption, factors, intention, consumers, behavior
Procedia PDF Downloads 84
125 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition
Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman
Abstract:
Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations when the data are non-linear, making accurate estimation difficult. Artificial Neural Networks (ANNs) have found extensive acceptance for modelling the complex, non-linear real world, since they offer more general and flexible functional forms than traditional statistical methods can effectively deal with. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of crop response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time estimation of wheat chlorophyll. Cloud-free LANDSAT 8 scenes were acquired (February-March 2016-17) at the same time as a ground-truthing campaign in which chlorophyll was estimated with a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS IMAGINE (v. 2014) software for chlorophyll determination, including the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modelling, MATLAB and SPSS (ANN) tools were used. A Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. For training of the MLP, 61.7% of the data were used; 28.3% were used for validation and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the most sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90 respectively. The results suggest that the use of high spatial resolution satellite imagery for the retrieval of crop chlorophyll content with an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat
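As an illustration of the modelling pipeline described above, the following sketch computes two of the named indices from Landsat 8 surface reflectance bands and regresses SPAD chlorophyll readings on them with the same 61.7/28.3/10 split. The band arrays and target values are hypothetical, and scikit-learn's MLPRegressor is used here as a stand-in for the MATLAB MLP the authors used:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR (band 5) and red (band 4)."""
    return (nir - red) / (nir + red + 1e-9)

def gndvi(nir, green):
    """Green NDVI from NIR (band 5) and green (band 3)."""
    return (nir - green) / (nir + green + 1e-9)

# Hypothetical per-plot mean reflectances and SPAD-502 ground-truth readings.
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.15, 200)
green = rng.uniform(0.03, 0.20, 200)
nir = rng.uniform(0.20, 0.60, 200)
spad = 20 + 60 * ndvi(nir, red) + rng.normal(0, 2, 200)   # placeholder target

X = np.column_stack([ndvi(nir, red), gndvi(nir, green)])
# 61.7 % training, 28.3 % validation, 10 % held-out test, as in the abstract.
X_train, X_rest, y_train, y_rest = train_test_split(X, spad, train_size=0.617, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=10 / 38.3, random_state=1)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1).fit(X_train, y_train)
print("validation R2:", r2_score(y_val, mlp.predict(X_val)))
print("test R2:", r2_score(y_test, mlp.predict(X_test)))
```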
Procedia PDF Downloads 147
124 Modeling Discrimination against Gay People: Predictors of Homophobic Behavior against Gay Men among High School Students in Switzerland
Authors: Patrick Weber, Daniel Gredig
Abstract:
Background and Purpose: Research has well documented the impact of discrimination and micro-aggressions on the wellbeing of gay men and, especially, adolescents. For the prevention of homophobic behavior against gay adolescents, however, the focus has to shift to those who discriminate: for the design and tailoring of prevention and intervention, it is important to understand the factors responsible for homophobic behavior such as, for example, verbal abuse. Against this background, the present study aimed to assess homophobic – in terms of verbally abusive – behavior against gay people among high school students. Furthermore, it aimed to establish the predictors of the reported behavior by testing an explanatory model. This model posits that homophobic behavior is determined by negative attitudes and knowledge. These variables are supposed to be predicted by the acceptance of traditional gender roles, religiosity, orientation toward social dominance, contact with gay men, and by the perceived expectations of parents, friends and teachers. These social-cognitive variables in turn are assumed to be determined by students’ gender, age, immigration background, formal school level, and the discussion of gay issues in class. Method: From August to October 2016, we visited 58 high school classes in 22 public schools in a county in Switzerland, and asked the 8th and 9th year students on three formal school levels to participate in a survey about gender and gay issues. For data collection, we used an anonymous self-administered questionnaire filled in during class. Data were analyzed using descriptive statistics and structural equation modelling (Generalized Least Squares estimation). The sample included 897 students, 334 in the 8th and 563 in the 9th year, aged 12–17, 51.2% female, 48.8% male, and 50.3% with an immigration background. Results: 85.4% of participants reported having made homophobic statements in the 12 months before the survey, 4.7% often or very often. Analysis showed that respondents’ homophobic behavior was predicted directly by negative attitudes (β=0.20), as well as by the acceptance of traditional gender roles (β=0.06), religiosity (β=–0.07), contact with gay people (β=0.10), expectations of parents (β=–0.14) and friends (β=–0.19), gender (β=–0.22) and having a South-East-European or Western- and Middle-Asian immigration background (β=0.09). These variables were predicted, in turn, by gender, age, immigration background, formal school level, and discussion of gay issues in class (GFI=0.995, AGFI=0.979, SRMR=0.0169, CMIN/df=1.199, p>0.213, adj. R²=0.384). Conclusion: The findings evidence a high prevalence of homophobic behavior among the responding high school students. The tested explanatory model explained 38.4% of the assessed homophobic behavior. However, the data did not fully support the model. Knowledge did not turn out to be a predictor of behavior. Except for the perceived expectations of teachers and orientation toward social dominance, the social-cognitive variables were not fully mediated by attitudes. Equally, gender and immigration background predicted homophobic behavior directly. These findings demonstrate the importance of prevention and also provide leverage points for interventions against anti-gay bias in adolescents – including in social work settings such as school social work, open youth work or foster care.
Keywords: discrimination, high school students, gay men, predictors, Switzerland
Procedia PDF Downloads 330
123 Birth Weight, Weight Gain and Feeding Pattern as Predictors for the Onset of Obesity in School Children
Authors: Thimira Pasas P, Nirmala Priyadarshani M, Ishani R
Abstract:
Obesity is a global health issue. Early identification is essential to plan interventions and to reduce the worsening of obesity and its consequences for the health of the individual. Childhood obesity is multifactorial, with both modifiable and unmodifiable risk factors: a genetically susceptible individual (unmodifiable), when placed in an obesogenic environment (modifiable), is likely to become obese. The present study was conducted to identify the age of onset of childhood obesity and the influence of modifiable risk factors among school children living in a suburban area of Sri Lanka. Data were collected from 11-12-year-old school children attending government schools in the Piliyandala Educational Zone using a validated, pre-tested self-administered questionnaire. A stratified random sampling method was used to select schools so that the sample represented all 3 types of government schools; owing to the prevailing pandemic situation, information from the last school medical inspection (2020 data) was used for this purpose. For each obese child identified, 2 non-obese children were selected as controls. A single representative from the area was selected using a systematic random sampling method with a sampling interval of 3. Data were collected using the questionnaire and the Child Health Development Record of each child. An introduction, which included explanations and instructions for filling in the questionnaire, was given as a group activity prior to distributing the questionnaire among the sample. A total of 130 children (66 males, 64 females) participated in the study. The results aligned with the hypothesis that the age of onset of childhood obesity lies within the first two years of life. The risk of obesity at 11-12 years of age was identified as 3 times higher among females who underwent rapid weight gain during infancy. Consuming milk before breakfast also emerged as a risk factor, increasing the risk of obesity by about three times, especially among females. Proper monitoring must therefore be carried out to identify rapid weight gain, particularly within the first 2 years of life. Identification of confounding factors, proper awareness among mothers/guardians and effective interventions are needed to reduce the obesity risk among school children in the future.
Keywords: childhood obesity, school children, age of onset, weight gain, feeding pattern, activity level
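For readers unfamiliar with how a "3 times higher risk" figure is typically derived in a case-control design such as this one, here is a minimal sketch of an odds ratio and its 95% confidence interval; the counts used are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a/b = exposed/unexposed cases, c/d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical: 24 of 40 obese children vs 30 of 90 controls had rapid infant weight gain.
print(odds_ratio_ci(24, 16, 30, 60))   # OR = 3.0 with its 95% CI
```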
Procedia PDF Downloads 141
122 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumer acceptability. Usually, fat content and distribution are determined chemically by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured with a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, thresholding on the corrected images, and confirmation of strong edge boundaries. The approach allowed the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing the total, intermuscular and intramuscular fat content to be predicted from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of the coefficient of determination (R²), hypothesis testing and the pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement procedure yielded good fat segmentation, making this visual approach to the quantification of the different fat fractions in dry-cured ham slices simple, accurate and precise. The presented image analysis approach steers towards the development of instruments that can replace destructive, tedious and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will then be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
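A minimal sketch of the edge-enhancement and area-fraction steps described above, using OpenCV; the file name, blur kernel, Canny thresholds and background handling are illustrative assumptions, not the authors' settings:

```python
import cv2
import numpy as np

# Load a scanned slice image (hypothetical file name), convert to grey-scale.
img = cv2.imread("ham_slice.png")
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Noise reduction, then Canny edge detection (gradient computation, non-maximum
# suppression, double thresholding and hysteresis tracking are handled internally).
blurred = cv2.GaussianBlur(grey, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Separate bright fat from darker muscle with an Otsu intensity threshold, then
# express the fat area as a percentage of the slice area (assumes a dark scan
# background so background pixels are excluded by the intensity test below).
_, fat_mask = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
slice_area = np.count_nonzero(grey > 10)           # crude background exclusion
fat_pct = 100.0 * np.count_nonzero(fat_mask) / max(slice_area, 1)
print(f"estimated fat fraction: {fat_pct:.1f}% of slice area")
```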
Procedia PDF Downloads 177
121 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How
Authors: Rachel Barker
Abstract:
The exponential velocity in the truly knowledge-intensive world today has increasingly bombarded organizations with unfathomable challenges. Hence organizations are introduced to strange lexicons of descriptors belonging to a new paradigm of who, what and how knowledge at individual and organizational levels should be managed. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at individual level could benefit knowledge use at collective level to ensure added value. The research problem is that a lack of research exists to measure knowledge sharing through a multi-layered structure of ideas with at its foundation, philosophical assumptions to support presuppositions and commitment which requires actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measure knowledge sharing in emerging knowledge organizations. The research question is that despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations due to a lack of knowledge on who, what and how it should be done. The main premise of this research is based on the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge and where learning becomes the norm. The theoretical constructs were derived and based on the three components of the knowledge management theory, namely technical, communication and human components where it is suggested that this knowledge infrastructure could ensure effective management. While it is realised that it might be a little problematic to implement and measure all relevant concepts, this paper presents effect of eight critical success factors (CSFs) namely: organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures and innovation. These CSFs have been identified based on a comprehensive literature review of existing research and tested in a new framework adapted from four perspectives of the balanced score card (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa who relies on knowledge sharing to ensure their competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument and to improve the quality of items and correct wording of issues. Through analysis of surveys collected, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. Reliability of the instrument was calculated by Cronbach’s a for the two sections of the instrument on organizational and individual levels.The construct validity was confirmed by using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management. 
In addition, they realised the importance of consolidating their knowledge assets to create value that is sustainable over time.
Keywords: innovation, intellectual capital, knowledge sharing, performance measures
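As an illustration of the reliability check mentioned above, a minimal sketch of Cronbach's α computed from item-level survey responses; the item matrix below is hypothetical, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores for one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses from 8 respondents to a 4-item CSF scale.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
    [5, 4, 4, 5],
    [4, 4, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```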
Procedia PDF Downloads 196
120 Application of Discrete-Event Simulation in Health Technology Assessment: A Cost-Effectiveness Analysis of Alzheimer’s Disease Treatment Using Real-World Evidence in Thailand
Authors: Khachen Kongpakwattana, Nathorn Chaiyakunapruk
Abstract:
Background: Decision-analytic models for Alzheimer’s disease (AD) have been advanced to discrete-event simulation (DES), in which individual-level modelling of disease progression across continuous severity spectra and incorporation of key parameters such as treatment persistence into the model become feasible. This study aimed to apply the DES to perform a cost-effectiveness analysis of treatment for AD in Thailand. Methods: A dataset of Thai patients with AD, representing unique demographic and clinical characteristics, was bootstrapped to generate a baseline cohort of patients. Each patient was cloned and assigned to donepezil, galantamine, rivastigmine, memantine or no treatment. Throughout the simulation period, the model randomly assigned each patient to discrete events including hospital visits, treatment discontinuation and death. Correlated changes in cognitive and behavioral status over time were developed using patient-level data. Treatment effects were obtained from the most recent network meta-analysis. Treatment persistence, mortality and predictive equations for functional status, costs (Thai baht (THB) in 2017) and quality-adjusted life year (QALY) were derived from country-specific real-world data. The time horizon was 10 years, with a discount rate of 3% per annum. Cost-effectiveness was evaluated based on the willingness-to-pay (WTP) threshold of 160,000 THB/QALY gained (4,994 US$/QALY gained) in Thailand. Results: Under a societal perspective, only was the prescription of donepezil to AD patients with all disease-severity levels found to be cost-effective. Compared to untreated patients, although the patients receiving donepezil incurred a discounted additional costs of 2,161 THB, they experienced a discounted gain in QALY of 0.021, resulting in an incremental cost-effectiveness ratio (ICER) of 138,524 THB/QALY (4,062 US$/QALY). Besides, providing early treatment with donepezil to mild AD patients further reduced the ICER to 61,652 THB/QALY (1,808 US$/QALY). However, the dominance of donepezil appeared to wane when delayed treatment was given to a subgroup of moderate and severe AD patients [ICER: 284,388 THB/QALY (8,340 US$/QALY)]. Introduction of a treatment stopping rule when the Mini-Mental State Exam (MMSE) score goes below 10 to a mild AD cohort did not deteriorate the cost-effectiveness of donepezil at the current treatment persistence level. On the other hand, none of the AD medications was cost-effective when being considered under a healthcare perspective. Conclusions: The DES greatly enhances real-world representativeness of decision-analytic models for AD. Under a societal perspective, treatment with donepezil improves patient’s quality of life and is considered cost-effective when used to treat AD patients with all disease-severity levels in Thailand. The optimal treatment benefits are observed when donepezil is prescribed since the early course of AD. With healthcare budget constraints in Thailand, the implementation of donepezil coverage may be most likely possible when being considered starting with mild AD patients, along with the stopping rule introduced.Keywords: Alzheimer's disease, cost-effectiveness analysis, discrete event simulation, health technology assessment
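A minimal sketch in Python of the discrete-event structure described above: each simulated patient accumulates discounted costs and QALYs between randomly scheduled events (visit, treatment discontinuation, death), and an ICER is computed against the untreated arm. All rates, costs and utilities below are placeholders, not the study's Thai real-world inputs:

```python
import random
from statistics import fmean

DISCOUNT, HORIZON = 0.03, 10.0            # 3 % per annum, 10-year horizon

def simulate_patient(treated: bool, rng: random.Random):
    """Advance one patient through visit / discontinuation / death events."""
    t, cost, qaly, on_treatment = 0.0, 0.0, 0.0, treated

    def accrue(dt):
        nonlocal cost, qaly
        disc = (1 + DISCOUNT) ** -(t + dt / 2)                   # mid-interval discounting
        qaly += (0.60 if on_treatment else 0.55) * dt * disc     # placeholder utilities
        cost += (90000 if on_treatment else 80000) * dt * disc   # placeholder THB/year

    while True:
        # Time to the next event of each kind (exponential, placeholder annual rates).
        events = {"death": rng.expovariate(0.10), "visit": rng.expovariate(2.0)}
        if on_treatment:
            events["discontinuation"] = rng.expovariate(0.30)
        event, dt = min(events.items(), key=lambda kv: kv[1])
        if t + dt >= HORIZON:                                    # censor at the horizon
            accrue(HORIZON - t)
            break
        accrue(dt)
        t += dt
        if event == "death":
            break
        if event == "discontinuation":
            on_treatment = False
        if event == "visit":
            cost += 1500 * (1 + DISCOUNT) ** -t                  # placeholder visit cost
    return cost, qaly

rng = random.Random(7)
arms = {name: [simulate_patient(name == "donepezil", rng) for _ in range(5000)]
        for name in ("donepezil", "no treatment")}
d_cost = fmean(c for c, _ in arms["donepezil"]) - fmean(c for c, _ in arms["no treatment"])
d_qaly = fmean(q for _, q in arms["donepezil"]) - fmean(q for _, q in arms["no treatment"])
print(f"ICER = {d_cost / d_qaly:,.0f} THB per QALY gained")
```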
Procedia PDF Downloads 129
119 Monitoring the Responses to Nociceptive Stimuli During General Anesthesia Based on Electroencephalographic Signals in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway (LMA)
Authors: Ofelia Loani Elvir Lazo, Roya Yumul, Sevan Komshian, Ruby Wang, Jun Tang
Abstract:
Background: Monitoring the anti-nociceptive drug effect is useful because a sudden and strong nociceptive stimulus may result in untoward autonomic responses and muscular reflex movements. Monitoring the anti-nociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists with information regarding a patient’s level of antinociception and to preclude untoward autonomic responses and reflexive muscular movements from painful stimuli intraoperatively. To this end, electroencephalogram (EEG) based tools including BIS and qCON were designed to provide information about the depth of sedation, while qNOX was produced to inform on the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. Methods: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria were American Society of Anesthesiologists (ASA) class I-III, age 18 to 80 years, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to an endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student’s t-distribution, prediction probability (PK), and ANOVA were used to statistically compare the relative ability of each index to detect nociceptive stimuli. Twenty patients were included in the preliminary analysis. Results: A comparison of overall intraoperative BIS, qCON and qNOX indices demonstrated no significant difference between the three measures (N=62, p>0.05). Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in the preliminary analysis (N=20, p=0.0408). Notably, certain hemodynamic measurements demonstrated a significant increase in response to painful stimuli (MAP increased from 74±13 mm Hg at baseline to 84±18 mm Hg during noxious stimuli [p=0.032] and HR from 76±12 BPM at baseline to 80±13 BPM during noxious stimuli [p=0.078]). Conclusion: In this observational study, BIS and qCON/qNOX provided comparable information on patients’ level of sedation throughout the course of an anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation with an imminent response to stimulation relative to all other indices.
Keywords: antinociception, bispectral index (BIS), general anesthesia, laryngeal mask airway, qCON/qNOX
Procedia PDF Downloads 92
118 Diagnostic Yield of CT PA and Value of Pre Test Assessments in Predicting the Probability of Pulmonary Embolism
Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran
Abstract:
Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines recommend the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, exposure to radiation and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not there was overuse of CTPA in our service. Methods: CT scans performed on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the time period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%) and haemoptysis (5%). A D-dimer test was done in 69%. The overall Wells score was low (<2) in 28%, moderate (>2 - <6) in 47% and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients. PE was confirmed in 12% (8 male) of patients. 4 had bilateral PEs. In the high-risk group (Wells >6) (n=15), there were 5 diagnosed PEs. In the moderate-risk group (Wells >2 - <6) (n=47), there were 6, and in the low-risk group (Wells <2) (n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15 and a pulmonary nodule in 4 patients. 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort at 12%. A significant number of our patients who underwent CTPA had a low Wells score. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in the medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patient's clinical presentation. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
Keywords: CT PA, D dimer, pulmonary embolism, wells score
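A minimal sketch of the kind of pretest stratification discussed above, using the commonly cited Wells criteria point values and the low/moderate/high cut-offs quoted in the abstract; the clinical inputs are hypothetical, and any real scoring tool should follow locally validated guidance:

```python
# Commonly cited Wells criteria weights for pulmonary embolism (three-tier model).
WELLS_ITEMS = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilisation_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "haemoptysis": 1.0,
    "active_malignancy": 1.0,
}

def wells_score(findings: dict) -> tuple:
    """Return (score, risk category) using the cut-offs quoted in the abstract."""
    score = sum(points for item, points in WELLS_ITEMS.items() if findings.get(item))
    if score < 2:
        category = "low"
    elif score <= 6:
        category = "moderate"
    else:
        category = "high"
    return score, category

# Hypothetical patient: tachycardic, recent surgery, PE judged the most likely diagnosis.
patient = {"heart_rate_over_100": True,
           "immobilisation_or_recent_surgery": True,
           "pe_most_likely_diagnosis": True}
print(wells_score(patient))   # (6.0, 'moderate')
```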
Procedia PDF Downloads 233
117 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review
Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni
Abstract:
Water used by agricultural crops can be managed through irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment, through assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring of changes, and mapping of irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review structured around a survey of about 100 recent research studies, analyzing varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring changes. The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the sources and/or magnitudes of error in employing different approaches in the three proposed parts, as reported by recent studies. Additionally, as an overview, the conclusion seeks to decompose the different approaches into the optimization of indices, calibration methods for the sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing
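One widely used thermal indicator of crop water stress is the Crop Water Stress Index (CWSI); the abstract above does not state how the surveyed studies treat it, so the following is purely an illustrative sketch of its empirical form, with hypothetical temperature readings and baselines that must be calibrated per crop and climate:

```python
def cwsi(t_canopy: float, t_wet: float, t_dry: float) -> float:
    """Empirical Crop Water Stress Index: 0 = well watered, 1 = fully stressed."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Hypothetical thermal readings (degrees C) for one field plot.
print(cwsi(t_canopy=29.5, t_wet=26.0, t_dry=34.0))   # ~0.44 -> moderate stress
```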
Procedia PDF Downloads 71
116 The Impact of a Simulated Teaching Intervention on Preservice Teachers’ Sense of Professional Identity
Authors: Jade V. Rushby, Tony Loughland, Tracy L. Durksen, Hoa Nguyen, Robert M. Klassen
Abstract:
This paper reports a study investigating the development and implementation of an online multi-session ‘scenario-based learning’ (SBL) program administered to preservice teachers in Australia. The transition from initial teacher education to the teaching profession can present numerous cognitive and psychological challenges for early career teachers. Therefore, the identification of additional supports, such as scenario-based learning, that can supplement existing teacher education programs may help preservice teachers to feel more confident and prepared for the realities and complexities of teaching. Scenario-based learning is grounded in situated learning theory which holds that learning is most powerful when it is embedded within its authentic context. SBL exposes participants to complex and realistic workplace situations in a supportive environment and has been used extensively to help prepare students in other professions, such as legal and medical education. However, comparatively limited attention has been paid to investigating the effects of SBL in teacher education. In the present study, the SBL intervention provided participants with the opportunity to virtually engage with school-based scenarios, reflect on how they might respond to a series of plausible response options, and receive real-time feedback from experienced educators. The development process involved several stages, including collaboration with experienced educators to determine the scenario content based on ‘critical incidents’ they had encountered during their teaching careers, the establishment of the scoring key, the development of the expert feedback, and an extensive review process to refine the program content. The 4-part SBL program focused on areas that can be challenging in the beginning stages of a teaching career, including managing student behaviour and workload, differentiating the curriculum, and building relationships with colleagues, parents, and the community. Results from prior studies implemented by the research group using a similar 4-part format have shown a statistically significant increase in preservice teachers’ self-efficacy and classroom readiness from the pre-test to the final post-test. In the current research, professional teaching identity - incorporating self-efficacy, motivation, self-image, satisfaction, and commitment to teaching - was measured over six weeks at multiple time points: before, during, and after the 4-part scenario-based learning program. Analyses included latent growth curve modelling to assess the trajectory of change in the outcome variables throughout the intervention. The paper outlines (1) the theoretical underpinnings of SBL, (2) the development of the SBL program and methodology, and (3) the results from the study, including the impact of the SBL program on aspects of participating preservice teachers’ professional identity. The study shows how SBL interventions can be implemented alongside the initial teacher education curriculum to help prepare preservice teachers for the transition from student to teacher.Keywords: classroom simulations, e-learning, initial teacher education, preservice teachers, professional learning, professional teaching identity, scenario-based learning, teacher development
Procedia PDF Downloads 72
115 Balancing Biodiversity and Agriculture: A Broad-Scale Analysis of the Land Sparing/Land Sharing Trade-Off for South African Birds
Authors: Chevonne Reynolds, Res Altwegg, Andrew Balmford, Claire N. Spottiswoode
Abstract:
Modern agriculture has revolutionised the planet’s capacity to support humans, yet has simultaneously had a greater negative impact on biodiversity than any other human activity. Balancing the demand for food with the conservation of biodiversity is one of the most pressing issues of our time. Biodiversity-friendly farming (‘land sharing’), or alternatively, separation of conservation and production activities (‘land sparing’), are proposed as two strategies for mediating the trade-off between agriculture and biodiversity. However, there is much debate regarding the efficacy of each strategy, as this trade-off has typically been addressed by short term studies at fine spatial scales. These studies ignore processes that are relevant to biodiversity at larger scales, such as meta-population dynamics and landscape connectivity. Therefore, to better understand species response to agricultural land-use and provide evidence to underpin the planning of better production landscapes, we need to determine the merits of each strategy at larger scales. In South Africa, a remarkable citizen science project - the South African Bird Atlas Project 2 (SABAP2) – collates an extensive dataset describing the occurrence of birds at a 5-min by 5-min grid cell resolution. We use these data, along with fine-resolution data on agricultural land-use, to determine which strategy optimises the agriculture-biodiversity trade-off in a southern African context, and at a spatial scale never considered before. To empirically test this trade-off, we model bird species population density, derived for each 5-min grid cell by Royle-Nicols single-species occupancy modelling, against both the amount and configuration of different types of agricultural production in the same 5-min grid cell. In using both production amount and configuration, we can show not only how species population densities react to changes in yield, but also describe the production landscape patterns most conducive to conservation. Furthermore, the extent of both the SABAP2 and land-cover datasets allows us to test this trade-off across multiple regions to determine if bird populations respond in a consistent way and whether results can be extrapolated to other landscapes. We tested the land sparing/sharing trade-off for 281 bird species across three different biomes in South Africa. Overall, a higher proportion of species are classified as losers, and would benefit from land sparing. However, this proportion of loser-sparers is not consistent and varies across biomes and the different types of agricultural production. This is most likely because of differences in the intensity of agricultural land-use and the interactions between the differing types of natural vegetation and agriculture. Interestingly, we observe a higher number of species that benefit from agriculture than anticipated, suggesting that agriculture is a legitimate resource for certain bird species. Our results support those seen at smaller scales and across vastly different agricultural systems, that land sparing benefits the most species. However, our analysis suggests that land sparing needs to be implemented at spatial scales much larger than previously considered. Species persistence in agricultural landscapes will require the conservation of large tracts of land, and is an important consideration in developing countries, which are undergoing rapid agricultural development.Keywords: agriculture, birds, land sharing, land sparing
Procedia PDF Downloads 209
114 42CrMo4 Steel Flow Behavior Characterization for High Temperature Closed Dies Hot Forging in Automotive Components Applications
Authors: O. Bilbao, I. Loizaga, F. A. Girot, A. Torregaray
Abstract:
The current energy situation and the high competitiveness in industrial sectors such as the automotive industry have made the development of new manufacturing processes with lower energy and raw material consumption a real necessity. As a consequence, new forming processes related to high temperature hot forging in closed dies have emerged in recent years as solutions to expand the possibilities of hot forging and iron casting in the automotive industry. These technologies are mid-way between hot forging and semi-solid metal processes, working at temperatures higher than hot forging but below the solidus temperature or the semi-solid range, where no liquid phase is expected. This represents an advantage compared with semi-solid forming processes such as thixoforging, because temperatures as high as those required for high melting point alloys such as steels do not need to be reached, reducing the manufacturing costs and the difficulties associated with their semi-solid processing. Compared with hot forging, this kind of technology allows the production of parts with as-forged properties and more complex, near-net shapes (thinner sidewalls), enhancing the possibility of designing lightweight components. From the process viewpoint, the forging forces are significantly decreased, and a significant reduction of the raw material, energy consumption, and forging steps has been demonstrated. Despite the mentioned advantages, from the material behavior point of view, the expansion of these technologies has shown the necessity of developing new material flow behavior models in the process working temperature range to make the simulation or prediction of these new forming processes feasible. Moreover, knowledge of the material flow behavior in the working temperature range also allows the design of the new closed-die concepts required. In this work, the flow behavior of the 42CrMo4 steel, widely used in commercial automotive components, has been characterized in the mentioned temperature range. For that purpose, hot compression tests have been carried out in a thermomechanical tester over a temperature range that covers the material behavior from hot forging up to the NDT (Nil Ductility Temperature) temperature (1250 ºC, 1275 ºC, 1300 ºC, 1325 ºC, 1350 ºC, and 1375 ºC). As for the strain rates, three different orders of magnitude have been considered (0.1 s⁻¹, 1 s⁻¹, and 10 s⁻¹). The results obtained from the hot compression tests have then been treated in order to adapt or re-write the Spittel model, widely used in commercial automotive software such as FORGE®, which restricts the currently existing models to temperatures up to 1250 ºC. Finally, the new flow behavior model has been validated by process simulation of a commercial automotive component and comparison of the simulation results with experimental tests already performed in a laboratory cell of the new technology. As a conclusion of the study, a new flow behavior model for the 42CrMo4 steel in the new working temperature range, and the new process simulation of its application to commercial automotive components, have been achieved and will be presented.
Keywords: 42CrMo4 high temperature flow behavior, high temperature hot forging in closed dies, simulation of automotive commercial components, spittel flow behavior model
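For context, a commonly used reduced form of the Hensel-Spittel (Spittel) flow stress law referenced above is sketched below; the exact set of terms retained in FORGE® or in the present study may differ, so this is an illustrative assumption rather than the authors' fitted model:

```latex
% Simplified Hensel-Spittel flow stress law (one common reduced form):
\begin{equation}
  \sigma = A \, e^{m_1 T} \, \varepsilon^{m_2} \, \dot{\varepsilon}^{m_3} \, e^{m_4/\varepsilon},
\end{equation}
% where \sigma is the flow stress, T the temperature, \varepsilon the equivalent
% plastic strain, \dot{\varepsilon} the strain rate, and A, m_1, ..., m_4 are
% material constants fitted to the hot compression data.
```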
Procedia PDF Downloads 129
113 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models
Authors: Haya Salah, Srinivas Sharan
Abstract:
Healthcare facilities use appointment systems to schedule appointments and manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. In addition, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current experience-based appointment duration estimation approach adopted by the clinics resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). In addition, this research identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time
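A minimal sketch of one of the modelling strategies named above (gradient boosting), evaluated with MAPE as in the abstract; the feature set and the synthetic data below are placeholders for the EMR variables the authors describe, not their dataset or tuned model:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical EMR-like features: patient type, provider experience, weekday, specialty.
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "new_patient": rng.integers(0, 2, n),
    "doctor_experience_yrs": rng.integers(1, 35, n),
    "appointment_weekday": rng.integers(0, 5, n),
    "specialty_code": rng.integers(0, 6, n),
})
# Placeholder ground-truth consultation duration in minutes.
df["duration_min"] = (15 + 10 * df["new_patient"] + 2 * df["specialty_code"]
                      - 0.1 * df["doctor_experience_yrs"] + rng.normal(0, 3, n))

X, y = df.drop(columns="duration_min"), df["duration_min"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

gbm = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
mape = mean_absolute_percentage_error(y_test, gbm.predict(X_test))
print(f"test MAPE: {100 * mape:.2f}%")
```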
Procedia PDF Downloads 122
112 Stakeholder Mapping and Requirements Identification for Improving Traceability in the Halal Food Supply Chain
Authors: Laila A. H. F. Dashti, Tom Jackson, Andrew West, Lisa Jackson
Abstract:
Traceability systems are important in the agri-food and halal food sectors for monitoring ingredient movements, tracking sources, and ensuring food integrity. However, designing a traceability system for the halal food supply chain is challenging due to diverse stakeholder requirements and complex needs. Existing literature on stakeholder mapping and identifying requirements for halal food supply chains is limited. To address this gap, a pilot study was conducted to identify the objectives, requirements, and recommendations of stakeholders in the Kuwaiti halal food industry. The study collected data through semi-structured interviews with an international halal food manufacturer based in Kuwait. The aim was to gain a deep understanding of stakeholders' objectives, requirements, processes, and concerns related to the design of a traceability system in the country's halal food sector. Traceability systems are being developed and tested in the agri-food and halal food sectors due to their ability to monitor ingredient movements, track sources, and detect potential issues related to food integrity. Designing a traceability system for the halal food supply chain poses significant challenges due to diverse stakeholder requirements and the complexity of their needs (including varying food ingredients, different sources, destinations, supplier processes, certifications, etc.). Achieving a halal food traceability solution tailored to stakeholders' requirements within the supply chain necessitates prior knowledge of these needs. Although attempts have been made to address design-related issues in traceability systems, literature on stakeholder mapping and identification of requirements specific to halal food supply chains is scarce. Thus, this pilot study aims to identify the objectives, requirements, and recommendations of stakeholders in the halal food industry. The paper presents insights gained from the pilot study, which utilized semi-structured interviews to collect data from a Kuwait-based international halal food manufacturer. The objective was to gain an in-depth understanding of stakeholders' objectives, requirements, processes, and concerns pertaining to the design of a traceability system in Kuwait's halal food sector. The stakeholder mapping results revealed that government entities, food manufacturers, retailers, and suppliers are key stakeholders in Kuwait's halal food supply chain. Lessons learned from this pilot study regarding requirement capture for traceability systems include the need to streamline communication, focus on communication at each level of the supply chain, leverage innovative technologies to enhance process structuring and operations and reduce halal certification costs. The findings also emphasized the limitations of existing traceability solutions, such as limited cooperation and collaboration among stakeholders, high costs of implementing traceability systems without government support, lack of clarity regarding product routes, and disrupted communication channels between stakeholders. These findings contribute to a broader research program aimed at developing a stakeholder requirements framework that utilizes "business process modelling" to establish a unified model for traceable stakeholder requirements.Keywords: supply chain, traceability system, halal food, stakeholders’ requirements
Procedia PDF Downloads 115
111 Evaluating Gender Sensitivity and Policy: Case Study of an EFL Textbook in Armenia
Authors: Ani Kojoyan
Abstract:
Linguistic studies have been investigating a connection between gender and linguistic development since 1970s. Scholars claim that gender differences in first and second language learning are socially constructed. Recent studies to language learning and gender reveal that second language acquisition is also a social phenomenon directly influencing one’s gender identity. Those responsible for designing language learning-teaching materials should be encouraged to understand the importance of and address the gender sensitivity accurately in textbooks. Writing or compiling a textbook is not an easy task; it requires strong academic abilities, patience, and experience. For a long period of time Armenia has been involved in the compilation process of a number of foreign language textbooks. However, there have been very few discussions or evaluations of those textbooks which will allow specialists to theorize that practice. The present paper focuses on the analysis of gender sensitivity issues and policy aspects involved in an EFL textbook. For the research the following material has been considered – “A Basic English Grammar: Morphology”, first printed in 2011. The selection of the material is not accidental. First, the mentioned textbook has been widely used in university teaching over years. Secondly, in Armenia “A Basic English Grammar: Morphology” has considered one of the most successful English grammar textbooks in a university teaching environment and served a source-book for other authors to compile and design their textbooks. The present paper aims to find out whether an EFL textbook is gendered in the Armenian teaching environment, and whether the textbook compilers are aware of gendered messages while compiling educational materials. It also aims at investigating students’ attitude toward the gendered messages in those materials. And finally, it also aims at increasing the gender sensitivity among book compilers and educators in various educational settings. For this study qualitative and quantitative research methods of analyses have been applied, the quantitative – in terms of carrying out surveys among students (45 university students, 18-25 age group), and the qualitative one – by discourse analysis of the material and conducting in-depth and semi-structured interviews with the Armenian compilers of the textbook (interviews with 3 authors). The study is based on passive and active observations and teaching experience done in a university classroom environment in 2014-2015, 2015-2016. The findings suggest that the discussed and analyzed teaching materials (145 extracts and examples) include traditional examples of intensive use of language and role-modelling, particularly, men are mostly portrayed as active, progressive, aggressive, whereas women are often depicted as passive and weak. These modeled often serve as a ‘reliable basis’ for reinforcing the traditional roles that have been projected on female and male students. The survey results also show that such materials contribute directly to shaping learners’ social attitudes and expectations around issues of gender. The applied techniques and discussed issues can be generalized and applied to other foreign language textbook compilation processes, since those principles, regardless of a language, are mostly the same.Keywords: EFL textbooks, gender policy, gender sensitivity, qualitative and quantitative research methods
Procedia PDF Downloads 195110 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming
Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter
Abstract:
High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating, or timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but different strain fields may be created by varying the orientation of the film with respect to the mold. The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally done using an orthotropic formulation of the hyperelastic model.Keywords: hyperelastic, anisotropic, polymer film, thermoforming
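As an illustration of the calibration step described above, the following is a minimal sketch, not the authors' semi-numerical formulation: it fits a simple incompressible Mooney-Rivlin hyperelastic model to a single uniaxial tensile test by least squares. The stretch/stress values and starting parameters are hypothetical placeholders.

import numpy as np
from scipy.optimize import curve_fit

def uniaxial_nominal_stress(stretch, c10, c01):
    # Nominal (engineering) stress for an incompressible Mooney-Rivlin solid in uniaxial tension.
    return 2.0 * (stretch - stretch**-2) * (c10 + c01 / stretch)

# Hypothetical data from one uniaxial tensile test: stretch (-) and nominal stress (MPa).
stretch = np.array([1.05, 1.10, 1.20, 1.35, 1.50, 1.75, 2.00])
stress = np.array([0.45, 0.85, 1.55, 2.40, 3.10, 4.20, 5.30])

(c10, c01), _ = curve_fit(uniaxial_nominal_stress, stretch, stress, p0=[1.0, 0.1])
print(f"C10 = {c10:.3f} MPa, C01 = {c01:.3f} MPa")

The same idea extends to the transversely isotropic and orthotropic cases by fitting to two tensile tests in the principal directions, as the abstract describes.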
Procedia PDF Downloads 618109 The Effects of Goal Setting and Feedback on Inhibitory Performance
Authors: Mami Miyasaka, Kaichi Yanaoka
Abstract:
Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcements (i.e., reward or punishment) help improve the performance of children with such difficulties. However, in order to optimize impact, reward and punishment must be presented immediately following the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performance, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performance. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition. The go/no-go task is a cognitive task to measure inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors obtained in response to a no-go stimulus indicated inhibitory impairment. To examine the effect of goal-setting on inhibitory control, 37 children (Mage = 9.49 ± 0.51) were required to set a performance goal, and 34 children (Mage = 9.44 ± 0.50) were not. Further, to manipulate the presence of feedback, in one go/no-go task, no information about children’s scores was provided; however, scores were revealed for the other type of go/no-go task. The results revealed a significant interaction between goal setting and feedback. However, the three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicated that goal setting was effective for improving performance on the go/no-go task only when feedback was provided, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them unexpectedly more impulsive. Taken together, giving feedback was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective for improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback.
This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control
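A minimal sketch of the kind of analysis described above, using simulated data rather than the study's: a linear mixed-effects model testing the goal-setting by feedback interaction on no-go (commission) errors, with a random intercept per child. All variable names, group sizes, and effect sizes here are illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
children = np.arange(72)
goal = np.repeat([0, 1], 36)                # 0 = no goal set, 1 = goal set (between subjects)
rows = []
for child, g in zip(children, goal):
    for fb in (0, 1):                       # 0 = no feedback, 1 = feedback (within subjects)
        # Assumed pattern: goal setting helps mainly when feedback is given.
        mean_errors = 8 - 2.5 * g * fb - 0.5 * fb
        rows.append({"child": child, "goal": g, "feedback": fb,
                     "errors": rng.normal(mean_errors, 2.0)})
df = pd.DataFrame(rows)

model = smf.mixedlm("errors ~ goal * feedback", df, groups=df["child"])
print(model.fit().summary())                # inspect the goal:feedback interaction term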
Procedia PDF Downloads 104108 Identification of Hub Genes in the Development of Atherosclerosis
Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that can obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (Gene Significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the GeneCards database. The classification performance of the six hub genes was estimated with a robust classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) being identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics
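The following is a minimal sketch, not the authors' pipeline, of how the six hub genes can be scored as classification features: a logistic-regression classifier evaluated with cross-validated ROC AUC, plus a two-component PCA for visualising group separation. Expression values are simulated placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

hub_genes = ["CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL"]
rng = np.random.default_rng(1)
n_control, n_disease = 16, 16
# Simulated expression matrix: disease samples shifted upward on average.
X = np.vstack([rng.normal(0.0, 1.0, (n_control, len(hub_genes))),
               rng.normal(0.8, 1.0, (n_disease, len(hub_genes)))])
y = np.array([0] * n_control + [1] * n_disease)

probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, probs), 3))

pcs = PCA(n_components=2).fit_transform(X)   # coordinates for a group-separation plot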
Procedia PDF Downloads 70107 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites to a monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, and so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K and for nitrogen on mesoporous alumina at 77 K with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes) and the thermodynamic (Gibbs free energy) variations of the adsorption sites.Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
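For reference, a minimal sketch of the constant-parameter baseline that the pressure-varying approach generalises: fitting the Langmuir isotherm q(p) = q_m·K·p / (1 + K·p) to isothermal uptake data by ordinary least squares. In the FLS-PVLR approach described above, q_m and K would instead be allowed to drift smoothly with pressure; the data points below are illustrative only.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_m, K):
    # Langmuir uptake: q_m is the monolayer capacity, K the equilibrium constant.
    return q_m * K * p / (1.0 + K * p)

pressure = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)   # kPa (illustrative)
uptake   = np.array([0.42, 0.75, 1.20, 1.70, 2.10, 2.40, 2.55])   # mmol/g (illustrative)

(q_m, K), _ = curve_fit(langmuir, pressure, uptake, p0=[3.0, 0.01])
print(f"monolayer capacity q_m = {q_m:.2f} mmol/g, equilibrium constant K = {K:.4f} 1/kPa")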
Procedia PDF Downloads 234106 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia
Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger
Abstract:
Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve from an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI) between neurosurgeons and radiologists. To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI compared with a radiologist’s. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients’ hospital records and the neurosurgeon’s correspondence from perioperative clinic reviews. Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. Scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing. Independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 1.81 (95%CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon’s impression of NVC had an OR of 2.96 (95%CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 3.41 (95%CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon’s impression of NVC had an OR of 3.96 (95%CI 3.01-4.65, p=0.042). Conclusion: Composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon’s interpretation of NVC on MRI, when compared with the radiologist’s, had a greater correlation with pain-free outcomes 1 year post-MVD.Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia
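A minimal sketch, on simulated data rather than the study's records, of how the reported odds ratios arise: a logistic regression of the 1-year pain-free outcome on a binary indicator for a composite score above the threshold, with the OR and its 95% CI obtained by exponentiating the coefficient and its confidence limits. The outcome probabilities are assumptions for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 95
score_above = rng.integers(0, 2, n)                   # 1 if composite score exceeds the threshold
p_pain_free = np.where(score_above == 1, 0.75, 0.55)  # assumed outcome probabilities
pain_free = rng.binomial(1, p_pain_free)
df = pd.DataFrame({"score_above": score_above, "pain_free": pain_free})

fit = smf.logit("pain_free ~ score_above", df).fit(disp=False)
or_est = np.exp(fit.params["score_above"])
ci_low, ci_high = np.exp(fit.conf_int().loc["score_above"])
print(f"OR = {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")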
Procedia PDF Downloads 74105 Comparison of Incidence and Risk Factors of Early Onset and Late Onset Preeclampsia: A Population Based Cohort Study
Authors: Sadia Munir, Diana White, Aya Albahri, Pratiwi Hastania, Eltahir Mohamed, Mahmood Khan, Fathima Mohamed, Ayat Kadhi, Haila Saleem
Abstract:
Preeclampsia is a major complication of pregnancy. Prediction and management of preeclampsia remain a challenge for obstetricians. To our knowledge, no major progress has been achieved in the prevention and early detection of preeclampsia. Very little is known about a clear treatment path for this disorder. Preeclampsia puts both mother and baby at risk of several short-term and long-term health problems later in life. There is a huge health service cost burden in the health care system associated with preeclampsia and its complications. Preeclampsia is divided into two different types. Early onset preeclampsia develops before 34 weeks of gestation, and late onset develops at or after 34 weeks of gestation. Different genetic and environmental factors, prognosis, heritability, and biochemical and clinical features are associated with early and late onset preeclampsia. The prevalence of preeclampsia varies greatly around the world and depends on the ethnicity of the population and the geographic region. To the authors' best knowledge, no published data on preeclampsia exist for Qatar. In this study, we report the incidence of preeclampsia in Qatar. The purpose of this study is to compare the incidence and risk factors of both early onset and late onset preeclampsia in Qatar. This retrospective longitudinal cohort study was conducted using data from the hospital records of the Women’s Hospital, Hamad Medical Corporation (HMC), from May 2014 to May 2016. The data collection tool, which was approved by HMC, was a researcher-made extraction sheet that included information such as blood pressure during admission, sociodemographic characteristics, delivery mode, and newborn details. A total of 1929 patients’ files were identified by hospital information management by applying preeclampsia codes. Out of 1929 files, 878 had significant gestational hypertension without proteinuria, 365 had preeclampsia, 364 had severe preeclampsia, and 188 had preexisting hypertension with superimposed proteinuria. In this study, 78% of the data were obtained from the hospital electronic system (Cerner) and the remaining 22% from patients’ paper records. We carried out detailed data extraction from 560 files. Initial data analysis revealed that 15.02% of pregnancies were complicated by preeclampsia from May 2014 to May 2016. We analyzed differences between the two disease entities in ethnicity, maternal age, severity of hypertension, mode of delivery, and infant birth weight. We identified promising differences in the risk factors of early onset and late onset preeclampsia. The data from the clinical findings will contribute to increased knowledge about the two disease entities, their etiology, and their similarities and differences. The findings of this study can also be used in predicting health challenges, improving the health care system, setting up guidelines, and providing the best care for women suffering from preeclampsia.Keywords: preeclampsia, incidence, risk factors, maternal
Procedia PDF Downloads 141104 Accelerating Personalization Using Digital Tools to Drive Circular Fashion
Authors: Shamini Dhana, G. Subrahmanya VRK Rao
Abstract:
The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. Upcycling clothing and materials into personalized fashion is increasingly demanded by the next generation. There is a need for a digital tool to accelerate the process towards mass customization. Dhana’s D/Sphere fashion technology platform uses digital tools to accelerate upcycling. In essence, advanced fashion garments can be designed and developed via reuse, repurposing, and recreating activities, using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) an opportunity to develop modern fashion using existing, finished materials and clothing without chemicals or water consumption; (2) the potential for an everyday customer and designer to use the medium of fashion for creative expression; (3) a solution to address the global textile waste generated by pre- and post-consumer fashion; (4) a solution to reduce carbon emissions, water, and energy consumption with the participation of all stakeholders; (5) an opportunity for brands, manufacturers, and retailers to work towards zero-waste designs and an alternative revenue stream. Other benefits of this alternative approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture, deep learning, and hyperheuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted at fashion stakeholders, will lower environmental costs, increase revenues through up-to-date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution for the end customer to be part of circular fashion. The broader impact of this technology will result in a different mindset towards circular fashion, increase the value of the product through multiple life cycles, find alternatives towards zero waste, and reduce the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies that have the responsibility to reduce their environmental impact and contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the output of the $3 trillion fashion and apparel industry ends up in landfills. To this extent, the industry needs such alternative techniques to both address global textile waste and provide an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation using digital tools has the potential to revolutionize the way we look at communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, as well as its positive impacts on society, our environment, and global climate change.Keywords: circular fashion, deep learning, digital technology platform, personalization
Procedia PDF Downloads 66103 Effects of Exposure to a Language on Perception of Non-Native Phonologically Contrastive Duration
Authors: Chuyu Huang, Itsuki Minemi, Kuanlin Chen, Yuki Hirose
Abstract:
It remains unclear how language speakers are able to perceive phonological contrasts that do not exist in their own language. This experiment uses the vowel-length distinction in Japanese, which is phonologically contrastive and co-occurs with tonal change in some cases. For speakers whose first language does not distinguish vowel length, e.g., Mandarin speakers, contrastive duration is usually misperceived. Two alternative hypotheses for how Mandarin speakers would perceive a phonological contrast that does not exist in their language make different predictions. The stress parameter model does not make a clear prediction about the impact of tonal type. Mandarin speakers will likely not be able to perceive vowel length as well as native Japanese speakers do, but their performance might not correlate with tonal type, because the prosody of their language is distinctive, which requires users to encode lexical prosody and notice subtle differences in word prosody. By contrast, cue-based phonetic models predict that Mandarin speakers may rely on pitch differences, a secondary cue, to perceive vowel length. Two groups of Mandarin speakers, naive non-Japanese speakers and beginner learners, were recruited to participate in an AX discrimination task involving two Japanese sound stimuli that contain a phonologically contrastive environment. Participants were asked to indicate whether the two stimuli containing a vowel-length contrast (e.g., maapero vs. mapero) sound the same. The experiment was bifactorial. The first factor contrasted three syllabic positions (syllable position; initial/medial/final), as it is likely to affect perceptual difficulty, as seen in previous studies; the second factor contrasted two pitch types (accent type): one with an accentual change that could be distinguished with the lexical tones in Mandarin (the different condition), and the other with no tonal distinction, differing only in vowel length (the same condition). The overall results showed a significant main effect of accent type in a linear mixed-effects model (β = 1.48, SE = 0.35, p < 0.05), which implies that Mandarin speakers tend to more successfully recognize vowel-length differences when the long vowel counterpart takes on a tone that exists in Mandarin. The interaction between accent type and syllabic position was also significant (β = 2.30, SE = 0.91, p < 0.05), showing that vowel lengths in the different condition are more difficult to recognize in the word-final position relative to the initial position. The second statistical model, which compared naive speakers to beginners, used logistic regression to test the effect of participant group. A significant difference was found between the two groups (β = 1.06, 95% CI = [0.36, 2.03], p < 0.05). This study shows that: (1) Mandarin speakers are likely to use pitch cues to perceive vowel length in a non-native language, which is consistent with cue-based approaches; (2) an exposure effect was observed: the beginner group achieved higher accuracy for long vowel perception, which implies an exposure effect even after a short period of language learning experience.Keywords: cue-based perception, exposure effect, prosodic perception, vowel duration
Procedia PDF Downloads 220102 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling. However, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models. Our model uses queuing theory parameters to relate the transitions between models. It associates MAPE-K-related times: the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep the response time constrained. Business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the less loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When the load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems: the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario. All methodologies will discard requests if the burst is rapid enough. This scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests to observe how approaches that add a different number of instances can handle the load with less business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
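A minimal sketch, under simplifying assumptions rather than the paper's full queuing-theoretic model, of a reactive horizontal auto-scaler in the MAPE-K spirit: at each sampling step it monitors the incoming request rate, plans the number of instances needed to keep utilisation below a saturation target, and executes changes only after a cooldown. All parameter values are placeholders.

import math

def plan_instances(incoming_rps, per_instance_rps, target_utilisation=0.7):
    # Instances needed so that expected utilisation stays below the target.
    return max(1, math.ceil(incoming_rps / (per_instance_rps * target_utilisation)))

def autoscale(load_trace, per_instance_rps=50.0, sample_period_s=10, cooldown_s=60):
    instances, last_change_at = 1, -cooldown_s
    history = []
    for step, incoming_rps in enumerate(load_trace):                # Monitor
        desired = plan_instances(incoming_rps, per_instance_rps)    # Analyse + Plan
        now = step * sample_period_s
        if desired != instances and now - last_change_at >= cooldown_s:
            instances, last_change_at = desired, now                # Execute
        history.append(instances)
    return history

# Burst-load trace similar in shape to the first test scenario described above.
trace = [40, 45, 50, 400, 420, 430, 410, 60, 55, 50]
print(autoscale(trace))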
Procedia PDF Downloads 96101 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of the energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-Type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that make it possible to evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple Neural Network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness functions best predict the energy consumption of a Swiss-Type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-Type Lathe has been used to carry out the experiments. A mechanical part including various Swiss-Type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists in calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and Neural Network fitness functions have the highest correlation coefficients, at 97%.
The fitness function “Material Removal Rate” (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
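A minimal sketch, on synthetic data, of the Lasso-based fitness function idea: learn a mapping from the three machining process parameters to measured spindle power, then check the correlation between predictions and measurements as in the evaluation described above. The assumed power model and parameter ranges are illustrative, not the project's.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 60
depth_of_cut  = rng.uniform(0.1, 2.0, n)    # mm (assumed range)
spindle_speed = rng.uniform(2000, 8000, n)  # rpm (assumed range)
feed_rate     = rng.uniform(0.02, 0.2, n)   # mm/rev (assumed range)
X = np.column_stack([depth_of_cut, spindle_speed, feed_rate])
# Assumed ground truth: power roughly proportional to material removal rate, plus noise.
power = 0.4 * depth_of_cut * feed_rate * spindle_speed + rng.normal(0, 20, n)

X_std = StandardScaler().fit_transform(X)
model = Lasso(alpha=0.1).fit(X_std, power)
pred = model.predict(X_std)
print("correlation with measured power:", round(np.corrcoef(power, pred)[0, 1], 3))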
Procedia PDF Downloads 148100 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis
Authors: Kimberly Samaha
Abstract:
In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a ‘whole-tree’ approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios via the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and is built up in stages to include co-hosts of a greenhouse and a land-based shrimp farm. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location derisks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture or aquaculture. The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical tool for expediting new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.Keywords: bio-economy, investment risk, circular design, economic modelling
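A minimal sketch of the Monte Carlo idea with purely illustrative numbers: sample uncertain revenues and operating costs many times and compare the profit distribution of a stand-alone biomass plant with an integrated configuration whose co-hosts add revenue and stabilise costs, so that risk metrics such as the 5th percentile and the probability of loss can be compared.

import numpy as np

rng = np.random.default_rng(4)
n_draws = 100_000

def profit(revenue_mean, revenue_sd, opex_mean, opex_sd):
    # Annual profit as revenue minus operating cost, both sampled from normal distributions.
    revenue = rng.normal(revenue_mean, revenue_sd, n_draws)
    opex = rng.normal(opex_mean, opex_sd, n_draws)
    return revenue - opex

standalone = profit(12.0, 2.5, 9.0, 1.5)    # USD million / year, assumed values
integrated = profit(16.0, 2.0, 11.0, 1.0)   # co-hosts assumed to add revenue and stabilise opex

for name, p in [("stand-alone plant", standalone), ("integrated bio-hub", integrated)]:
    print(f"{name}: mean {p.mean():.1f} M$, 5th percentile {np.percentile(p, 5):.1f} M$,"
          f" P(loss) {np.mean(p < 0):.1%}")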
Procedia PDF Downloads 10199 Modelling Pest Immigration into Rape Seed Crops under Past and Future Climate Conditions
Authors: M. Eickermann, F. Ronellenfitsch, J. Junk
Abstract:
Oilseed rape (Brassica napus L.) is one of the most important crops throughout Europe, but pressure from pest insects and pathogens can substantially reduce its yield. Pesticide use in this crop is therefore particularly high. In addition, climate change effects can interact with the phenology of the host plant and its pests and can place additional pressure on the yield. Besides the pollen beetle, Meligethes aeneus L., the seed-damaging pest insects cabbage seed weevil (Ceutorhynchus obstrictus Marsham) and brassica pod midge (Dasineura brassicae Winn.) have the greatest economic impact on yield. Females of C. obstrictus infest oilseed rape by depositing single eggs into young pods, and females of D. brassicae use this local damage in the pod for their own oviposition, depositing batches of 20-30 eggs. Without a prior infestation by the cabbage seed weevil, a significant yield reduction by the brassica pod midge can be ruled out. Based on long-term, multi-site field experiments, a comprehensive data-set on pest migration to crops of B. napus has been built up over the last ten years. Five observational test sites, situated in different climatic regions of Luxembourg, were monitored twice a week from February until the end of May. Pest migration was recorded using yellow water pan-traps. Caught insects were identified in the laboratory according to species-specific identification keys. By combining pest observations with the corresponding meteorological observations, it was possible to set up models to predict the migration periods of the seed-damaging pests. This approach is the basis for a computer-based decision support tool to assist the farmer in identifying the appropriate time point of pesticide application. In addition, the derived algorithms of that decision support tool can be combined with climate change projections in order to assess the future potential threat posed by the seed-damaging pest species. Regional climate change effects for Luxembourg have been intensively studied in recent years. Significant changes towards wetter winters and drier summers, as well as a prolongation of the vegetation period mainly caused by higher spring temperatures, have also been reported. We used the COSMO-CLM model to perform a time slice experiment for Luxembourg with a spatial resolution of 1.3 km. Three ten-year time slices were calculated: the reference time span (1991-2000), the near future (2041-2050) and the far future (2091-2100). Our results project a significant shift of pest migration towards an earlier onset in the year. In addition, a prolongation of the possible migration period could be observed. Because D. brassicae depends on prior oviposition activity by C. obstrictus to infest its host plant successfully, the future dependencies of both pest species will be assessed. Based on this approach, the future risk potential of both seed-damaging pests is calculated and their status as pest species is characterized.Keywords: CORDEX projections, decision support tool, Brassica napus, pests
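A minimal sketch, under a strongly simplified assumption rather than the project's calibrated algorithms, of a temperature-driven migration model: migration is taken to begin once accumulated degree-days above a base temperature exceed a threshold, and applying the same rule to a warmer projected temperature series illustrates the shift to an earlier onset. Threshold values and the synthetic temperature series are illustrative only.

import numpy as np

def migration_onset(daily_mean_temp, base_temp=5.0, dd_threshold=120.0):
    # Day of year at which cumulative degree-days above the base temperature first exceed the threshold.
    degree_days = np.cumsum(np.clip(daily_mean_temp - base_temp, 0.0, None))
    if degree_days[-1] < dd_threshold:
        return None
    return int(np.argmax(degree_days >= dd_threshold)) + 1

days = np.arange(1, 151)                                     # 1 January to end of May
reference = 2.0 + 12.0 * np.sin((days - 30) / 150 * np.pi)   # synthetic reference climate
future = reference + 1.8                                     # assumed mean warming

print("reference onset (day of year):", migration_onset(reference))
print("projected onset (day of year):", migration_onset(future))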
Procedia PDF Downloads 38298 Altering the Solid Phase Speciation of Arsenic in Paddy Soil: An Approach to Reduce Rice Grain Arsenic Uptake
Authors: Supriya Majumder, Pabitra Banik
Abstract:
The fate of arsenic (As) in the soil-plant environment is a critical emerging issue with threatening implications for human health. The dynamics of As among the solid-phase components of soil largely determine its potential availability for plant uptake. In the present study, we applied an improved sequential extraction procedure (SEP) to identify the solid-phase speciation of As in paddy soil under variable soil environmental conditions during two consecutive seasons of rice cultivation. We coupled gradients of water management practices with fertilizer amendments to assess changes in the partitioning of As in a field experiment conducted during the monsoon and post-monsoon seasons using two rice cultivars. Water management regimes were varied according to the method of rice cultivation: conventional (waterlogged) versus the System of Rice Intensification, SRI (saturated). Fertilizer amendments comprised the nutrient treatments absolute control, NPK-RD, NPK-RD + calcium silicate, NPK-RD + ferrous sulfate, farmyard manure (FYM), FYM + calcium silicate, FYM + ferrous sulfate, vermicompost (VC), VC + calcium silicate, and VC + ferrous sulfate. After harvest, soil samples were sequentially extracted to estimate the partitioning of As among the following fractions: exchangeable (F1), specifically sorbed (F2), bound to amorphous Fe oxides (F3), bound to crystalline Fe oxides (F4), bound to organic matter (F5), and residual (F6). Results showed that the major proportions of As were found in F3, F4 and F6, whereas F1 exhibited the lowest proportion of total soil As. Among the nutrient-treatment-mediated changes in the As fractions, the application of organic manure and ferrous sulfate significantly restricted the release of As from the exchangeable phase. Meanwhile, the conventional practice produced a much higher release of As from F1 compared to SRI, which may substantially increase the environmental risk. In contrast, the SRI practice was found to retain a significantly higher proportion of As in the F2, F3, and F4 phases, resulting in restricted mobilization of As. This was reflected in rice grain As bioavailability, with reductions in grain As concentration of 33% and 55% under SRI relative to the conventional treatment (p < 0.05) during the monsoon and post-monsoon seasons, respectively. A prediction assay for rice grain As bioavailability based on a linear regression model was also performed. Results demonstrated that rice grain As concentration was positively correlated with the As concentration in F1 and negatively correlated with F2, F3, and F4, with a satisfactory level of variation explained (p < 0.001). Finally, we conclude that F1, F2, F3 and F4 are the major soil As fractions that critically govern the potential availability of As in soil, and we suggest that rice cultivation under the SRI treatment carries a lower risk of As availability in soil. Such detailed information may be useful for adopting management practices for rice grown in contaminated soil, particularly with regard to environmental concerns.Keywords: arsenic, fractionation, paddy soil, potential availability
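A minimal sketch, with synthetic values rather than the field data, of the prediction assay: an ordinary least-squares regression of rice grain As concentration on the extracted soil As fractions, from which the sign of each coefficient and the explained variation (R-squared) can be read off. Concentration ranges and coefficients are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 80
df = pd.DataFrame({
    "F1": rng.uniform(0.1, 1.0, n),   # exchangeable As (mg/kg), assumed range
    "F2": rng.uniform(0.5, 3.0, n),   # specifically sorbed As
    "F3": rng.uniform(2.0, 8.0, n),   # amorphous Fe-oxide bound As
    "F4": rng.uniform(1.0, 6.0, n),   # crystalline Fe-oxide bound As
})
# Assumed relation: grain As rises with F1 and falls with the retained fractions.
df["grain_As"] = (0.30 * df.F1 - 0.04 * df.F2 - 0.02 * df.F3 - 0.02 * df.F4
                  + 0.45 + rng.normal(0, 0.02, n))

fit = smf.ols("grain_As ~ F1 + F2 + F3 + F4", df).fit()
print(fit.params.round(3))
print("R-squared:", round(fit.rsquared, 3))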
Procedia PDF Downloads 125