Search results for: wildfire modeling
599 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity Impact
Authors: Shivdayal Patel, Suhail Ahmad
Abstract:
Safety assurance and failure prediction for the composite components of an offshore structure under low-velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties in material properties and in the impact load. The likelihood of this hazard triggering a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropy, the brittleness of matrix and fiber, and manufacturing defects; the probability of such a scenario occurring therefore stems from the large uncertainties in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon owing to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT) to overcome these problems. The limit state function is established such that the lamina stresses keep the structure safe while g(x) > 0. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. The chain of failure events arising from the different failure modes is considered to estimate the consequences of a failure scenario. The frequencies of occurrence of specific impact hazards then yield the expected risk in terms of economic loss.
Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling
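As an illustration of the reliability step described above, the sketch below pairs a Gaussian-process response surface with Monte Carlo sampling to estimate a probability of failure. The limit state g(x), the input distributions, and all numerical values are hypothetical stand-ins for the paper's VUMAT/FE-based model.
```python
# A minimal sketch of the Gaussian-process response-surface approach to
# estimating a probability of failure. The limit state and distributions
# below are illustrative; in the paper, g(x) comes from FE simulations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def limit_state(x):
    # Hypothetical limit state: strength margin minus an impact-demand term
    strength, velocity = x[:, 0], x[:, 1]
    return strength - 0.004 * velocity**2

# Small design of experiments standing in for expensive FE runs
X_doe = np.column_stack([rng.normal(1.0, 0.1, 40),    # material strength (normalized)
                         rng.normal(12.0, 1.5, 40)])  # impact velocity (m/s)
g_doe = limit_state(X_doe)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_doe, g_doe)

# Monte Carlo on the cheap surrogate: failure when g(x) <= 0
X_mc = np.column_stack([rng.normal(1.0, 0.1, 100_000),
                        rng.normal(12.0, 1.5, 100_000)])
pf = np.mean(gp.predict(X_mc) <= 0.0)
print(f"Estimated probability of failure: {pf:.4e}")
```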
Procedia PDF Downloads 279
598 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model
Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh
Abstract:
A model coupling artificial neural networks with a genetic algorithm is developed to obtain the optimum foundation length and protections of small hydraulic structures. The procedure optimizes an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer, and degree of anisotropy. The optimization was carried out subject to constraints that ensure a structure safe against the uplift pressure force and a downstream protection length sufficient to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1,200 different cases; for each case, the length of protection and the volume of structure required to satisfy the aforementioned safety factors were estimated. An ANN model was developed and verified using the input-output sets of these cases as its database. A MATLAB code was written to perform genetic algorithm optimization coupled with this ANN model through a formulated optimization model. A sensitivity analysis was carried out to select the crossover probability, the mutation probability and level, the population size, the crossover position, and the weight distribution across the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the required population size: the minimum value giving a stable global optimum is 30,000, while the other variables have little effect on the optimum solution.
Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety
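A minimal sketch of the coupled GA-ANN workflow is given below: an ANN surrogate is trained on precomputed cases, and a simple real-coded genetic algorithm then searches the surrogate. The training data, variable bounds, and objective weights are hypothetical placeholders for the 1,200 Geo-Studio cases and the authors' MATLAB formulation.
```python
# A minimal sketch of coupling a genetic algorithm with an ANN surrogate.
# All data, bounds and weights are illustrative stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical cases: [cutoff_up, cutoff_down, foundation_len, protection_len]
X = rng.uniform([1, 1, 5, 2], [6, 6, 25, 12], size=(1200, 4))
cost = X @ np.array([2.0, 2.0, 3.0, 1.5])           # stand-in "structure volume"
exit_grad = 10.0 / X.sum(axis=1)                    # stand-in "exit gradient"
y = cost + 50.0 * np.maximum(exit_grad - 0.8, 0.0)  # penalty for excessive gradient

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=1)
ann.fit(X, y)

# Simple real-coded GA minimizing the surrogate prediction
lo, hi = np.array([1, 1, 5, 2]), np.array([6, 6, 25, 12])
pop = rng.uniform(lo, hi, size=(200, 4))
for gen in range(100):
    fit = ann.predict(pop)
    parents = pop[np.argsort(fit)[:100]]                  # truncation selection
    pairs = rng.integers(0, 100, (100, 2))
    alpha = rng.random((100, 1))
    children = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
    children += rng.normal(0, 0.05, children.shape) * (hi - lo)  # mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmin(ann.predict(pop))]
print("Optimum dimensions (surrogate):", best.round(2))
```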
Procedia PDF Downloads 325
597 Formation of Mg-Silicate Scales and Inhibition of Their Scale Formation at Injection Wells in Geothermal Power Plant
Authors: Samuel Abebe Ebebo
Abstract:
Scale precipitation is a major issue for geothermal power plants because it reduces the rate of geothermal energy production. The different chemical and physical conditions at each geothermal power plant can cause scale to precipitate under a particular set of fluid-rock interactions. Depending on the mineral, scale can occur in the production well, steam separators, heat exchangers, reinjection wells, and everywhere in between. The scale consists mainly of smectite with trace amounts of chlorite, magnetite, quartz, hematite, dolomite, aragonite, and amorphous silica. The smectite scale is one of the most troublesome scales at injection wells in geothermal power plants; X-ray diffraction and chemical composition identify this smectite as stevensite. The characteristics and the amount of scale in each injection well line differ depending on the fluid chemistry, and the smectite scale is widely distributed in pipelines and surface plants. Mineral-water equilibrium calculations showed that the main factors controlling the saturation indices of smectite, and hence precipitation on equipment surfaces, are increased pH and dissolved Mg concentration. This study aims to characterize the scales and geothermal fluids collected from the Onuma geothermal power plant in Akita Prefecture, Japan. Field tests were conducted on October 30-November 3, 2021, at Onuma to determine pH-control methods for preventing magnesium silicate scaling; as an example, the formation of magnesium silicate hydrates (M-S-H) at an MgO-to-SiO2 ratio of 1.0 and a pH of 10 was studied at 25 °C for one day. As a result, M-S-H scale formation could be suppressed, and stevensite formation could also be suppressed, when the pH of the fluid was decreased to less than 8.1, 7.4, and 8 (at 97 °C) in the fluids from O-3Rb and O-6Rb, O-10Rg, and O-12R, respectively. In this context, the scales and fluids collected from injection wells at a geothermal power plant in Japan were analyzed and characterized to understand the formation conditions of Mg-silicate scales through on-site synthesis experiments. From the results of the characterizations and on-site synthesis experiments, a method for inhibiting scale formation is discussed on the basis of geochemical modeling.
Keywords: magnesium silicate, scaling, inhibitor, geothermal power plant
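The geochemical screening described above rests on the saturation index, SI = log10(IAP/K), where SI > 0 indicates supersaturation and possible precipitation. The sketch below evaluates SI for a talc-like Mg-silicate dissolution reaction; the ion activities and the equilibrium constant are illustrative assumptions, not measurements from the Onuma fluids.
```python
# A minimal saturation-index sketch: SI = log10(IAP) - log10(K).
# Stoichiometry and log K are illustrative for a talc-like reaction
# written with H+ as a reactant (negative coefficient).
import math

def saturation_index(ion_activities, stoichiometry, log_k):
    """SI > 0 means the fluid is supersaturated and the mineral may precipitate."""
    log_iap = sum(nu * math.log10(a) for a, nu in zip(ion_activities, stoichiometry))
    return log_iap - log_k

# Activities of Mg2+, dissolved silica and H+; lowering pH raises the H+
# activity and drives SI down, which is the basis of the pH-control tests.
for ph in (9.0, 8.0, 7.4):
    a_h = 10.0**(-ph)
    si = saturation_index([1e-4, 1e-3, a_h], [3, 4, -6], log_k=21.0)
    print(f"pH {ph}: SI = {si:+.2f}")
```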
Procedia PDF Downloads 67
596 An Analysis of the Performances of Various Buoys as the Floats of Wave Energy Converters
Authors: İlkay Özer Erselcan, Abdi Kükner, Gökhan Ceylan
Abstract:
The power generated by eight point-absorber-type wave energy converters, each having a different buoy, is calculated in order to investigate the performance of the buoys. The calculations are carried out by modeling three different sea states observed at two different locations in the Black Sea. The floats analyzed in this study have two basic geometries and four different draft/radius (d/r) ratios: the buoys have the shapes of a semi-ellipsoid and a semi-elliptic paraboloid, and the draft/radius ratios range from 0.25 to 1 in increments of 0.25. The radiation forces acting on the buoys due to their oscillatory motions are evaluated with a 3D panel method using a distribution of 3D pulsating sources in the frequency domain, while the wave forces acting on the buoys, taken as the sum of Froude-Krylov and diffraction forces, are calculated using linear wave theory. Furthermore, the wave energy converters are assumed to be taut-moored to the seabed, so that the secondary body housing the power take-off system oscillates with much smaller amplitudes than the buoy; accordingly, the housing body's motions are assumed to contribute negligibly to power generation, and the only contribution comes from the buoy. The power take-off systems of the wave energy converters are high-pressure oil hydraulic systems that are identical in terms of their characteristic parameters. The results show that the wave energy converters with semi-ellipsoid floats generate more power than those with semi-elliptic paraboloid floats at both locations and in all sea states. It is also found that the generated power does not vary monotonically with the draft/radius ratio of the floats. Although the highest power level is obtained with a semi-ellipsoid float having a draft/radius ratio of 1, floats with a draft/radius ratio of 0.25 delivered more power than the floats with a draft/radius ratio of 1 in some cases.
Keywords: Black Sea, buoys, hydraulic power take-off system, wave energy converters
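A minimal frequency-domain sketch of the power estimate for a heaving point absorber with a linear power take-off is shown below, following the standard linear-theory result the study relies on. The hydrodynamic coefficients, which in the study come from the 3D panel method, are hypothetical values here.
```python
# A minimal sketch of mean absorbed power for a heaving buoy with a linear
# PTO. Added mass, radiation damping and excitation force are hypothetical
# stand-ins for panel-method output.
import numpy as np

rho, g = 1025.0, 9.81           # sea water density, gravity
r, m = 2.0, 8000.0              # buoy radius (m) and mass (kg), illustrative

omega = 2 * np.pi / 6.0         # wave frequency for a 6 s period
A_w = 0.5                       # wave amplitude (m)
A_add = 3000.0                  # added mass (kg), hypothetical
B_rad = 1500.0                  # radiation damping (N*s/m), hypothetical
F_exc = 20000.0 * A_w           # excitation force amplitude (N), hypothetical
K = rho * g * np.pi * r**2      # hydrostatic stiffness of a vertical cylinder
B_pto = 4000.0                  # PTO damping of the hydraulic system (N*s/m)

# Heave response amplitude and mean absorbed power
X = F_exc / np.sqrt((K - (m + A_add) * omega**2)**2 + ((B_rad + B_pto) * omega)**2)
P_mean = 0.5 * B_pto * omega**2 * X**2
print(f"Heave amplitude: {X:.3f} m, mean absorbed power: {P_mean:.0f} W")
```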
Procedia PDF Downloads 352
595 Machine Learning Approach for Predicting Students’ Academic Performance and Study Strategies Based on Their Motivation
Authors: Fidelia A. Orji, Julita Vassileva
Abstract:
This research aims to develop machine learning models for predicting students' academic performance and study strategies, models that could be generalized to all courses in higher education. The key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) used in building the models were chosen on the basis of prior studies, which revealed that these attributes are essential to students' learning process. Previous studies examined the individual effects of each attribute on students' learning progress; however, few studies have investigated their combined effect in predicting student study strategy and academic performance with a view to reducing the dropout rate. To bridge this gap, we used scikit-learn in Python to build five machine learning models (decision tree, k-nearest neighbours, random forest, linear/logistic regression, and support vector machine) for both regression and classification tasks. The models were trained, evaluated, and tested for accuracy using data on 924 university dentistry students collected by Chilean authors through a quantitative research design. A comparative analysis of the models revealed that the tree-based models, such as the random forest (with a prediction accuracy of 94.9%) and the decision tree, show the best results compared to the linear, support vector, and k-nearest-neighbour models. The models built in this research can be used to predict student performance and study strategy so that appropriate interventions can be implemented to improve student learning progress. Thus, incorporating strategies that improve diverse student learning attributes into the design of online educational systems may increase the likelihood of students continuing with their learning tasks as required. Moreover, the results show that the attributes can be modelled together and used to adapt and personalize the learning process.
Keywords: classification models, learning strategy, predictive modeling, regression models, student academic performance, student motivation, supervised machine learning
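A minimal sketch of the classification side of this pipeline, using scikit-learn as the authors do, is shown below. The feature names match the six learning attributes, but the data frame is synthetic; the real study used the records of 924 dentistry students.
```python
# A minimal sketch of the random-forest classification step on synthetic
# Likert-style attribute scores; the target labels are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
features = ["intrinsic", "extrinsic", "autonomy", "relatedness",
            "competence", "self_esteem"]
X = pd.DataFrame(rng.uniform(1, 7, (924, 6)), columns=features)
# Synthetic target standing in for a "study strategy" class label
y = (X["intrinsic"] + X["autonomy"] + rng.normal(0, 1, 924) > 8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print(dict(zip(features, clf.feature_importances_.round(3))))
```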
Procedia PDF Downloads 130
594 Physical Activity Self-Efficacy among Pregnant Women with High Risk for Gestational Diabetes Mellitus: A Cross-Sectional Study
Authors: Xiao Yang, Ji Zhang, Yingli Song, Hui Huang, Jing Zhang, Yan Wang, Rongrong Han, Zhixuan Xiang, Lu Chen, Lingling Gao
Abstract:
Aim and Objectives: To examine physical activity self-efficacy, identify its predictors, and further explore the mechanism of action among the predictors in mainland Chinese pregnant women at high risk for gestational diabetes mellitus (GDM). Background: Physical activity can protect pregnant women from developing GDM, and physical activity self-efficacy is the key predictor of physical activity. Design: A cross-sectional study was conducted from October 2021 to May 2022 in Zhengzhou, China. Methods: 252 eligible pregnant women completed the Pregnancy Physical Activity Self-efficacy Scale, the Social Support for Physical Activity Scale, the Knowledge on Physical Activity Questionnaire, the 7-item Generalized Anxiety Disorder scale, the Edinburgh Postnatal Depression Scale, and a socio-demographic data sheet. Multiple linear regression was applied to explore the predictors of physical activity self-efficacy, and structural equation modeling was used to explore the mechanism of action among them. Results: Chinese pregnant women at high risk for GDM reported a moderate level of physical activity self-efficacy. The best-fit regression analysis revealed that four variables explained 17.5% of the variance in physical activity self-efficacy. Social support for physical activity was the strongest predictor, followed by knowledge of physical activity, intention to do physical activity, and anxiety symptoms. The model analysis indicated that knowledge of physical activity could relieve anxiety and depressive symptoms and thereby increase physical activity self-efficacy. Conclusion: The present study revealed a moderate level of physical activity self-efficacy. Interventions targeting pregnant women at high risk for GDM should address the predictors of physical activity self-efficacy. Relevance to clinical practice: To help pregnant women at high risk for GDM engage in physical activity, healthcare professionals may assess physical activity self-efficacy and intervene as early as the first antenatal visit. Physical activity intervention programs focused on self-efficacy may be conducted in further research.
Keywords: physical activity, gestational diabetes, self-efficacy, predictors
Procedia PDF Downloads 103
593 Surprise Fraudsters Before They Surprise You: A South African Telecommunications Case Study
Authors: Ansoné Human, Nantes Kirsten, Tanja Verster, Willem D. Schutte
Abstract:
Every year the telecommunications industry suffers huge losses due to fraud. Mobile fraud, or more generally telecommunications fraud, is the use of telecommunication products or services to acquire money illegally from a telecommunication company, or the failure to pay that company. A South African telecommunication operator developed two internal fraud scorecards to mitigate the future risk of application fraud events. The scorecards aim to predict the likelihood of an application being fraudulent and to surprise fraudsters before they surprise the telecommunication operator by identifying fraud at the time of application. The scorecards are used in the vetting process to evaluate the fraud risk an applicant would present to the telecommunication operator. Telecommunication providers can use these scorecards to profile customers, as well as to isolate fraudulent and/or high-risk applicants. We provide the complete methodology used in the development of the scorecards. Furthermore, the methodology includes a Determination and Discrimination (DD) ratio for selecting the most influential variables from a group of related variables. Throughout the development of these scorecards, the following was revealed about fraudulent cases and fraudster behaviour within the telecommunications industry: fraudsters typically target high-value handsets; debit-order dates scheduled for the end of the month have the highest fraud probability; fraudsters target specific stores; applicants who acquire an expensive package on a medium income, as well as those who acquire an expensive package on a high income, have higher fraud percentages; if, one month before the application, an account is already two or more months in arrears, the applicant has a high probability of fraud; applicants with the highest average spend on calls have a higher probability of fraud; if the amount collected changes from month to month, the likelihood of fraud is higher; and, lastly, young and middle-aged applicants have a higher probability of being targeted by fraudsters than applicants of other ages.
Keywords: application fraud scorecard, predictive modeling, regression, telecommunications
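A minimal sketch of an application-fraud scorecard based on logistic regression is shown below. The predictors echo the behaviours reported above (handset value, debit-order day, arrears status), but the data-generating process and fitted probabilities are synthetic; the operator's actual scorecard methodology, including the DD ratio, is richer.
```python
# A minimal logistic-regression scorecard sketch on synthetic applications.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
handset_value = rng.uniform(50, 1500, n)     # device price
debit_day = rng.integers(1, 31, n)           # scheduled debit-order day
in_arrears = rng.integers(0, 2, n)           # account in arrears >= 2 months

# Synthetic fraud flag loosely following the reported patterns
logit = (-6 + 0.003 * handset_value
         + 0.05 * (debit_day > 25) * debit_day + 1.2 * in_arrears)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([handset_value, debit_day, in_arrears])
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new application: probability of fraud at the time of application
applicant = np.array([[1200.0, 28, 1]])
print(f"Fraud probability: {model.predict_proba(applicant)[0, 1]:.2%}")
```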
Procedia PDF Downloads 121
592 Mathematics Bridging Theory and Applications for a Data-Driven World
Authors: Zahid Ullah, Atlas Khan
Abstract:
In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.
Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models
Procedia PDF Downloads 77
591 From Text to Data: Sentiment Analysis of Presidential Election Political Forums
Authors: Sergio V Davalos, Alison L. Watkins
Abstract:
User-generated content (UGC) such as website posts has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words); in addition to the number of words per post, the frequency of each term is determined per post and as the sum of occurrences across all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data-analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis, yielding a sentiment score for each post. Based on the sentiment scores of the posts, there are significant differences between the content and sentiment of the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment; in the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as the election drew closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates; in the case of Trump, the negative posts outnumbered Clinton's most numerous posts, which were positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time: initially the political parties were the most referenced, and as the election drew closer the emphasis shifted to the candidates. The SASA method proved to predict sentiment better than four other methods in Sentibench. The research derived sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
Keywords: sentiment analysis, text mining, user generated content, US presidential elections
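A minimal sketch of treating each post as a bag of terms and deriving a per-post sentiment score is shown below. The tiny lexicon is purely illustrative; the study used the SASA model, whose lexicon and scoring are more sophisticated.
```python
# A minimal bag-of-terms sentiment sketch with an illustrative lexicon.
from collections import Counter

LEXICON = {"win": 1, "great": 1, "strong": 1, "lose": -1, "scandal": -1, "weak": -1}

def post_sentiment(post: str) -> int:
    """Sum of lexicon weights over the post's term frequencies."""
    terms = Counter(post.lower().split())
    return sum(weight * terms[term] for term, weight in LEXICON.items())

posts = [
    "Great debate, a strong night for our candidate",
    "Another scandal, they will lose and lose badly",
]
scores = [post_sentiment(p) for p in posts]
print(scores)                                # [2, -3]
print("Cumulative sentiment:", sum(scores))  # tracking this over time reveals
                                             # the drift toward negative sentiment
```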
Procedia PDF Downloads 192
590 Subsidiary Entrepreneurial Orientation, Trust in Headquarters and Performance: The Mediating Role of Autonomy
Authors: Zhang Qingzhong
Abstract:
Although there is an increasing number of research studies on the headquarters-subsidiary relationship, with a focus in this context on subsidiaries' contributory role to multinational corporations (MNCs), subsidiary autonomy and the conditions under which autonomy affects subsidiary performance still constitute a subject of debate in the literature. The objective of this research is to study the relationship between MNC subsidiary autonomy and performance, and the effect of subsidiary entrepreneurial orientation and trust on subsidiary autonomy, in the China environment, a phenomenon that has not yet been studied. The research addresses the following three questions: (i) Is subsidiary autonomy associated with MNC subsidiary performance in the China environment? (ii) How do subsidiary entrepreneurship and its trust in headquarters affect the level of subsidiary autonomy and its relationship with subsidiary performance? (iii) Does subsidiary autonomy mediate the effect of the subsidiary's entrepreneurship and trust in headquarters on subsidiary performance? In the present study, we reviewed the literature and conducted semi-structured interviews with senior executives of multinational corporation (MNC) subsidiaries in China. Building on the insights from the interviews, and drawing on four theories, namely the resource-based view (RBV), resource dependency theory, the integration-responsiveness framework, and social exchange theory, as well as the extant articles on subsidiary autonomy, entrepreneurial orientation, trust, and subsidiary performance, we developed a model and explored the direct and mediating effects of subsidiary autonomy on subsidiary performance within the framework of the MNC. To test the model, we collected and analyzed data from two waves of a cross-industry online survey of 102 MNC subsidiaries in China. We used structural equation modeling to test the measurement model, the direct-effect model, and the conceptual framework with its hypotheses. Our findings confirm that (a) subsidiary autonomy is positively related to subsidiary performance; (b) subsidiary entrepreneurial orientation is positively related to subsidiary autonomy; (c) the subsidiary's trust in headquarters has a positive effect on subsidiary autonomy; (d) subsidiary autonomy mediates the relationship between entrepreneurial orientation and subsidiary performance; and (e) subsidiary autonomy mediates the relationship between trust and subsidiary performance. Our study highlights the important role of subsidiary autonomy in leveraging the resources of subsidiary entrepreneurial orientation and the trust relationship with headquarters to achieve high performance. We discuss the theoretical and managerial implications of the findings and propose directions for future research.
Keywords: subsidiary entrepreneurial orientation, trust, subsidiary autonomy, subsidiary performance
Procedia PDF Downloads 188
589 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network
Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio
Abstract:
The transportation of dangerous goods by lorry throughout Europe must use the roads of the European Road Transport Network. In this network there are several parking areas where lorry drivers can park to rest in accordance with the regulations. According to the "European Agreement concerning the International Carriage of Dangerous Goods by Road", parking areas where lorries transporting dangerous goods can park to rest must satisfy several security stipulations to keep the rest of the road users safe: such lorries must be parked in adapted areas with strict and permanent surveillance measures, and their drivers must satisfy several restrictions on resting and driving time. Under these conditions, one might expect there to be enough parking areas for the transport of this type of goods to comply with the regulations prescribed by the European Union and its member countries; however, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to ensure that lorry drivers can transport dangerous goods while following all the stipulations on security and safety for their stops? The word "optimal" refers to the fact that we give a global solution for the location of parking areas throughout the whole European Road Transport Network while keeping the number of additional areas as low as possible. To do so, we modeled the problem using graph theory, since we are working with a road network. As nodes we considered the location of each already-existing parking area, each loading-and-unloading area, and each road bifurcation; each road connecting two nodes is an edge of the graph, weighted by the distance between the two nodes. By applying a new efficient algorithm, we found the additional nodes of the network representing the new parking areas adapted for dangerous goods, under the requirement that the distance between two parking areas must be less than or equal to 400 km.
Keywords: trans-european transport network, dangerous goods, parking areas, graph-based modeling
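A minimal sketch of the graph formulation is shown below: nodes are parking areas, loading/unloading points, and bifurcations, and edge weights are road distances in kilometres. The toy network and the greedy placement rule are illustrative, not the authors' algorithm; they only show how the 400 km constraint can be checked on a weighted graph.
```python
# A minimal sketch of checking the 400 km spacing constraint along a route
# and greedily proposing new adapted parking areas where it is violated.
import networkx as nx

G = nx.Graph()
edges = [("load_A", "junction1", 180), ("junction1", "park1", 150),
         ("park1", "junction2", 260), ("junction2", "unload_B", 200)]
G.add_weighted_edges_from(edges)
parking = {"park1"}                 # already-existing adapted parking areas

route = nx.shortest_path(G, "load_A", "unload_B", weight="weight")
gap, new_areas = 0, []
for u, v in zip(route, route[1:]):
    gap += G[u][v]["weight"]
    if gap > 400:                   # regulation: adapted rest stop within 400 km
        new_areas.append(u)         # greedy: place a new area at the last node
        gap = G[u][v]["weight"]     # reached before the limit was exceeded
    if v in parking:
        gap = 0
print("Route:", route)
print("New adapted parking areas needed at:", new_areas)  # -> ['junction2']
```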
Procedia PDF Downloads 281
588 3D Codes for Unsteady Interaction Problems of Continuous Mechanics in Euler Variables
Authors: M. Abuziarov
Abstract:
The designed code suite is intended for the numerical simulation of fast dynamic interaction processes between heterogeneous media subject to significant deformation. The main challenge in solving such problems is the construction of the numerical meshes. Currently there are two basic approaches: one uses a Lagrangian or Lagrangian-Eulerian grid tied to the boundaries of the media, while the second uses a fixed Eulerian mesh whose boundary cells cut the boundaries of the media and therefore requires the calculation of the cut volumes. Both approaches require complex grid generators and significant time to prepare the code's input data. In these codes the problems are solved using two grids, a regular fixed grid and a mobile local Eulerian-Lagrangian grid (the ALE approach) that follows the contact and free boundaries, the surfaces of shock waves and phase transitions, and other possible features of the solutions, with mutual interpolation of the integrated parameters. For modeling liquids, gases, and deformable solids alike, a Godunov scheme of increased accuracy is used in Lagrangian-Eulerian variables, the same scheme for the Euler equations as for the Euler-Cauchy equations describing the deformation of the solid. The increased accuracy of the scheme is achieved by using a 3D space-time-dependent solution of the discontinuity problem (a 3D space-time-dependent Riemann problem solver). The same solution is used to calculate the interaction at the liquid-solid surface (the fluid-structure interaction problem). The codes do not require complex 3D mesh generators: the user supplies only the surfaces of the simulated objects as STL files created with engineering graphics tools, which greatly simplifies task preparation and makes the codes convenient for direct use by the designer at the design stage. Results of test solutions and of applications involving the generation and propagation of detonation and shock waves and the loading of structures are presented.
Keywords: fluid structure interaction, Riemann's solver, Euler variables, 3D codes
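As a one-dimensional illustration of the Godunov building block described above, the sketch below solves Burgers' equation with an exact Riemann solver at each cell face. The authors' codes are 3D, of higher accuracy, and coupled to solid mechanics; this sketch only shows the Riemann-solver idea.
```python
# A minimal first-order finite-volume Godunov scheme for Burgers' equation,
# using the exact Riemann solution at each cell interface.
import numpy as np

def riemann_flux(uL, uR):
    """Exact Godunov flux for Burgers' equation f(u) = u^2/2."""
    if uL > uR:                               # shock
        s = 0.5 * (uL + uR)                   # shock speed
        u = uL if s > 0 else uR
    else:                                     # rarefaction
        if uL > 0:
            u = uL
        elif uR < 0:
            u = uR
        else:
            u = 0.0                           # sonic point inside the fan
    return 0.5 * u * u

n_cells, L, t_end = 200, 1.0, 0.25
dx = L / n_cells
u = np.where(np.linspace(0, L, n_cells) < 0.5, 1.0, 0.0)   # step initial data

t = 0.0
while t < t_end:
    dt = min(0.5 * dx / max(np.abs(u).max(), 1e-12), t_end - t)  # CFL condition
    flux = np.array([riemann_flux(u[i], u[i + 1]) for i in range(n_cells - 1)])
    u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])                  # conservative update
    t += dt
print("Shock position ~", 0.5 + 0.5 * t_end)   # analytic shock: x = 0.5 + t/2
```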
Procedia PDF Downloads 439
587 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)
Authors: Tarek Duzan
Abstract:
Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thinking, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent, and the North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production, and most geologists tend to dismiss fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that our evaluation and planning can be done properly and efficiently from day one. The challenging aspect of this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures entirely. Logging images, the available well tests, and limited core studies are our tools at this stage for evaluating, modeling, and predicting possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil, followed by a gas-injection period for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well-test and seismic interpretations and that can readily be used in reservoir simulators.
Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data
Procedia PDF Downloads 94
586 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations
Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.
Abstract:
Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the interval between test-day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the "minpack.lm" package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. Goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of the MSPE (RMSPE), Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM gave an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared with atypical curves (0.90 vs. 0.74), with the highest values for the 28D interval (r=0.95). Likewise, we observed an overestimated peak yield (0.92 vs. 6.6 L) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval for both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were shown by the 7D interval for typical and atypical curves. These results represent a first approach to defining an adequate recording interval for dairy sheep in Latin America and showed a better fit of the Wood model for the 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
Keywords: gamma incomplete, ewes, shape curves, modeling
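A minimal sketch of fitting the Wood model y(t) = a * t^b * exp(-c*t) with a Levenberg-Marquardt least-squares routine, the Python counterpart of the R "minpack.lm" fit used by the authors, is shown below on synthetic weekly records.
```python
# A minimal Wood-model fit on synthetic weekly (7D) test-day records.
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

# Synthetic weekly records over a 20-week lactation
t_days = np.arange(7, 141, 7, dtype=float)
y = wood(t_days, 1.2, 0.25, 0.015) + np.random.default_rng(3).normal(0, 0.05, t_days.size)

(a, b, c), _ = curve_fit(wood, t_days, y, p0=[1.0, 0.2, 0.01], method="lm")
peak_time = b / c                                       # analytic peak of the Wood curve
total = np.trapz(wood(np.arange(1, 141.0), a, b, c))    # estimated total milk yield
print(f"a={a:.3f} b={b:.3f} c={c:.4f}, peak at day {peak_time:.1f}, TMY ~ {total:.1f} L")

# Thinning t_days to 14D/21D/28D intervals before fitting is how the effect
# of the recording interval can be compared.
```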
Procedia PDF Downloads 78
585 The Relationship between Personal, Psycho-Social and Occupational Risk Factors with Low Back Pain Severity in Industrial Workers
Authors: Omid Giahi, Ebrahim Darvishi, Mahdi Akbarzadeh
Abstract:
Introduction: Occupational low back pain (LBP) is one of the most prevalent work-related musculoskeletal disorders, and many risk factors are involved in it. The present study focuses on the relation between personal, psycho-social, and occupational risk factors and LBP severity in industrial workers. Materials and Methods: This research was a case-control study conducted in Kurdistan province. 100 workers (mean age ± SD of 39.9 ± 10.45) with LBP were selected as the case group, and 100 workers (mean age ± SD of 37.2 ± 8.5) without LBP were assigned to the control group. All participants were selected from various industrial units and had similar occupational conditions. The required data, including demographic information (BMI, smoking, alcohol, and family history), occupational factors (posture, mental workload (MWL), force, vibration, and repetition), and psychosocial factors (stress, occupational satisfaction, and security), were collected via consultation with occupational medicine specialists, interviews, the related questionnaires, the NASA-TLX software, and the REBA worksheet. The chi-square test, logistic regression, and structural equation modeling (SEM) were used to analyze the data; IBM SPSS Statistics 24 and Mplus 6 were used for the analysis. Results: 114 (77%) of the individuals were male and 86 (23%) were female. The mean career lengths of the case and control groups were 10.90 ± 5.92 and 9.22 ± 4.24 years, respectively. The statistical analysis revealed a significant correlation between posture, smoking, stress, satisfaction, and MWL and occupational LBP. The odds ratios (95% confidence intervals) derived from a logistic regression model were 2.7 (1.27-2.24), 2.5 (2.26-5.17), and 3.22 (2.47-3.24) for stress, MWL, and posture, respectively. Also, the SEM analysis of the personal, psycho-social, and occupational factors with LBP revealed a significant correlation. Conclusion: All three broad categories of risk factors simultaneously increase the risk of occupational LBP in the workplace, but posture, stress, and MWL play a major role in LBP severity. Therefore, prevention strategies are required for workers in jobs with high risk factors for LBP to decrease the risk of occupational LBP.
Keywords: industrial workers, occupational low back pain, occupational risk factors, psychosocial factors
Procedia PDF Downloads 258
584 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques
Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan
Abstract:
Proper well planning is vital to the success of any drilling program: it prevents and overcomes drilling problems and minimizes operating costs. Since the hydraulic system plays an active role during drilling operations, it can accelerate the drilling effort and lower the overall well cost; conversely, an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables through applied drilling and hydraulic equations and mathematical modeling to achieve maximum drilling efficiency at minimum drilling cost. The practical software adopted in this paper defines drilling optimization models around four optimum keys, namely Opti-flow, Opti-clean, Opti-slip, and Opti-nozzle, which can help achieve high drilling efficiency at lower cost. The data used in this research come from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are: formation type, geopressures, hole geometry, bottom-hole assembly, and mud rheology. The analysis shows that, for all wells, the proposed program provides higher accuracy than the company's current approach in terms of hole-cleaning efficiency and cost breakdown, taking the actual data as the reference base. Finally, it is recommended to use the established optimization software at the drilling design stage to obtain correct drilling parameters that provide high drilling efficiency and good borehole cleaning, together with all the other hydraulic parameters that help minimize hole problems and control drilling operation costs.
Keywords: optimum keys, namely opti-flow, opti-clean, opti-slip and opti-nozzle
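A minimal sketch of one hole-cleaning check of the kind an Opti-clean-style calculation performs, comparing annular velocity with an assumed cuttings slip velocity, is shown below. All well data and the slip-velocity value are illustrative placeholders, not Waha field values or the software's actual correlations.
```python
# A minimal annular-velocity / cuttings-transport check on illustrative data.
import math

def annular_velocity(flow_rate_lpm, hole_d_mm, pipe_d_mm):
    """Average upward mud velocity in the annulus (m/min)."""
    area_m2 = math.pi / 4 * ((hole_d_mm / 1000)**2 - (pipe_d_mm / 1000)**2)
    return (flow_rate_lpm / 1000) / area_m2        # L/min -> m^3/min

q = 2000.0                                         # pump rate, L/min
v_ann = annular_velocity(q, hole_d_mm=215.9, pipe_d_mm=127.0)  # 8 1/2" x 5"
v_slip = 20.0                                      # assumed cuttings slip velocity, m/min
transport_ratio = (v_ann - v_slip) / v_ann         # fraction of cuttings moving uphole
print(f"Annular velocity: {v_ann:.1f} m/min, transport ratio: {transport_ratio:.2f}")
if transport_ratio < 0.5:
    print("Hole cleaning likely inadequate: raise flow rate or adjust mud rheology.")
```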
Procedia PDF Downloads 320
583 Material Concepts and Processing Methods for Electrical Insulation
Authors: R. Sekula
Abstract:
Epoxy composites are broadly used as electrical insulation for high-voltage applications, since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product depend strongly on a proper manufacturing process with minimized material failures such as excessive shrinkage, voids, and cracks. Therefore, the choice of materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, is essential for the final quality of the produced components. In this paper, an approach for three-dimensional modeling of all molding stages, namely filling, curing, and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics that have been successfully implemented to simulate the reactive, exothermic nature of the system. A dedicated simulation procedure for stress and shrinkage calculations, together with simulation results, is also presented in the paper. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting properties. Finally, with tough environmental regulations in mind, an approach for product redesign is presented, in addition to the current process and design aspects, focusing on the replacement of the epoxy material with a thermoplastic one. Such a "design-for-recycling" method is one of the new directions associated with the development of new material and processing concepts for electrical products and brings many additional research challenges. To illustrate the presented methodology, one of the successful products is presented.
Keywords: curing, epoxy insulation, numerical simulations, recycling
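A minimal sketch of the curing-kinetics building block of such a reactive molding simulation is shown below: the autocatalytic Kamal model integrated over time at a fixed mold temperature. The parameter values are illustrative assumptions; in the dedicated tool they are fitted to the epoxy system and coupled to the 3D CFD solution.
```python
# A minimal Kamal cure-kinetics integration at constant mold temperature.
# All kinetic parameters and the gel conversion are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314

def kamal_rate(t, alpha, T):
    """d(alpha)/dt = (k1 + k2*alpha^m) * (1 - alpha)^n with Arrhenius k1, k2."""
    k1 = 1e5 * np.exp(-60e3 / (R * T))
    k2 = 1e6 * np.exp(-65e3 / (R * T))
    m, n = 0.5, 1.5
    return (k1 + k2 * alpha[0]**m) * (1 - alpha[0])**n

T_mold = 420.0                                   # K
sol = solve_ivp(kamal_rate, (0, 3600), [1e-6], args=(T_mold,), max_step=5.0)
alpha = sol.y[0]
t_gel = sol.t[np.searchsorted(alpha, 0.6)]       # assumed gel conversion of 0.6
print(f"Degree of cure after 1 h: {alpha[-1]:.2f}, gelation at ~{t_gel/60:.1f} min")
```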
Procedia PDF Downloads 279
582 Analysis and the Fair Distribution Modeling of Urban Facilities in Kabul City
Authors: Ansari Mohammad Reza, Hiroko Ono, Fakhrullah Sarwari
Abstract:
Our world is fast heading toward being a predominantly urban planet. This is a double-edged reality, as frightening as it is interesting. Moreover, current predictions, together with the fact that about 90 percent of the coming urbanization is going to be absorbed by the towns and cities of the developing countries of Asia and Africa, give us reason to expect a far more tragic ending to this story than a happy one. Likewise, in a situation where most of these countries are still severely struggling to answer their very first questions of urbanization (e.g., how to provide the essential structure for their cities, define the regulations, or even design a proper pattern for how the cities should expand), it is not strange to claim that most of the coming urbanization of the world is going to happen informally. This reality not only casts doubt on the character, landscape, and image of the cities of the future, but at the same time raises a series of other essential questions: how will facilities be distributed in these cities, and how fair will this pattern of distribution be? Kabul, the capital of Afghanistan, is a city of the developing world whose urbanization process has been under way since 2001 and which currently ranks as the fifth fastest-growing city in the world, with a considerable slum ratio of 0.7, meaning that about 70 percent of its population lives in informal areas. It is therefore a very good case study for investigating how the informal development of a city can lead to an unfair and unbalanced distribution of its facilities. In this study, we first propose an ideal model for the fair distribution of facilities in Kabul city, in which all citizens have the same equal chance of access to the facilities, and then evaluate how fairly the facilities are currently distributed. We did so through a comparative analysis of the existing facility rates in the formal and informal areas of the city against the proposed fair ideal model.
Keywords: Afghanistan, facility distribution, formal settlements, informal settlements, Kabul
Procedia PDF Downloads 121
581 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational fluid dynamics (CFD) blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid-flow features commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study comprises six test cases with different volumetric flow rates, ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds stress model (RSM)) and LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in terms of efficiency. Computations were performed on the JUQUEEN BG/Q architecture with the highly parallel flow solver Code SATURNE, typically using 32,768 or more processors in parallel; visualisations were performed with PARAVIEW. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and by comparison with other databases. The results show that an RSM is an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisations of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
Procedia PDF Downloads 382
580 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri
Authors: Shishay Kidanu, Abdullah Alhaj
Abstract:
Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development and potentially threatening lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility are crucial for effective mitigation through strategic changes in land-use planning and practices. This study uses geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The frequency ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the analytic hierarchy process (AHP) and the weighted linear combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Using the Jenks natural breaks classifier, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the area under the curve (AUC) and sinkhole density index (SDI) methods, demonstrates a robust correlation with the sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy, and the SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions of the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri
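A minimal sketch of the frequency-ratio and weighted-linear-combination steps is shown below. The toy rasters, class boundaries, and the factor weight are illustrative; in the study, nine factors are processed in GIS and the AHP supplies the factor weights.
```python
# A minimal frequency-ratio calculation on a toy factor raster, followed by
# the weighted-linear-combination step for a susceptibility index.
import numpy as np

def frequency_ratio(factor_classes, sinkhole_mask):
    """FR per class = (% of sinkholes in class) / (% of area in class)."""
    fr = {}
    total_cells = factor_classes.size
    total_sinks = sinkhole_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_sinks = sinkhole_mask[in_class].sum() / total_sinks
        pct_area = in_class.sum() / total_cells
        fr[c] = pct_sinks / pct_area
    return fr

rng = np.random.default_rng(5)
slope_class = rng.integers(1, 4, (100, 100))                # toy slope raster, 3 classes
sinkholes = rng.random((100, 100)) < 0.02 * (slope_class == 1)  # flatter = more sinks

fr = frequency_ratio(slope_class, sinkholes)
fr_raster = np.vectorize(fr.get)(slope_class)               # reclassify by FR weight

w_slope = 0.3                                               # AHP-derived weight (assumed)
ssi = w_slope * fr_raster                                   # + w_i * other factor rasters
print({int(c): round(v, 2) for c, v in fr.items()})
```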
Procedia PDF Downloads 74
579 Distant Speech Recognition Using Laser Doppler Vibrometer
Authors: Yunbin Deng
Abstract:
Most existing applications of automatic speech recognition rely on cooperative subjects a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to a few feet; as such, most deployed applications of standoff speech recognition are limited to indoor use at short range. Moreover, these applications require an air passage between the subject and the sensor to achieve a reasonable signal-to-noise ratio. This study reports long-range (50 feet) automatic speech recognition experiments using a laser Doppler vibrometer (LDV) sensor and shows that the LDV sensor modality can extend the speech-acquisition standoff distance far beyond microphone arrays, to hundreds of feet. In addition, LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security, and counter-terrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, were collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slat, and a concrete wall: common materials the application could encounter in daily life. These data were compared with their microphone counterparts to show the impact of the various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches were used to conduct continuous speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using a time-delay neural network, bi-directional long short-term memory, and model fusion show great promise for using LDV for long-range speech recognition. To the author's best knowledge, this is the first time an LDV has been reported for a long-distance speech recognition application.
Keywords: covert speech acquisition, distant speech recognition, DSR, laser Doppler vibrometer, LDV, speech intelligence surveillance and reconnaissance, ISR
Procedia PDF Downloads 180
578 Evaluation of the Effect of Lactose Derived Monosaccharide on Galactooligosaccharides Production by β-Galactosidase
Authors: Yenny Paola Morales Cortés, Fabián Rico Rodríguez, Juan Carlos Serrato Bermúdez, Carlos Arturo Martínez Riascos
Abstract:
The numerous benefits of galactooligosaccharides (GOS) as prebiotics have motivated the study of enzymatic processes for their production. These processes are particularly complex owing to several factors that hinder high productivity, such as enzyme type, reaction-medium pH, substrate concentrations, and the presence of inhibitors, among others. In the present work, the production of galactooligosaccharides (with degrees of polymerization two, three, and four) from lactose was studied. The study includes the formulation of a mathematical model that predicts the production of GOS from lactose using the enzyme β-galactosidase. The effect of pH on the reaction was studied using phosphate buffer at three pH values (6.0, 6.5, and 7.0). At pH 6.0 the enzymatic activity was insignificant, while at pH 7.0 it was approximately 27 times greater than at 6.5. The latter result differs from previously reported results; pH 7.0 was therefore chosen as the working pH. The enzyme concentration was then analyzed, showing that its effect depends on the pH, and the concentration was set at 0.272 mM for the following studies. Afterwards, experiments were performed varying the lactose concentration to evaluate its effects on the process and to generate the data for fitting the model parameters. The mathematical model considers the lactose hydrolysis and transgalactosylation reactions, together with their inverse reactions, for the production of disaccharides and trisaccharides; the production of tetrasaccharides was negligible and was therefore not included in the model. The reaction was monitored by HPLC, and the quantitative analysis of the experimental data was carried out in the Matlab programming language, using solvers for the integration of systems of differential equations (ode15s) and for nonlinear optimization (fminunc). The results confirm that the transgalactosylation and hydrolysis reactions are reversible, and inhibition of GOS production by glucose and galactose is observed. Regarding the production process, the results show that high initial lactose concentrations are necessary, since they favor the transgalactosylation reaction, while low concentrations favor the hydrolysis reactions.
Keywords: β-galactosidase, galactooligosaccharides, inhibition, lactose, Matlab, modeling
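A minimal Python analogue of the Matlab ode15s + fminunc workflow described above is sketched below: a simplified two-reaction network is integrated, and its rate constants are fitted to HPLC-style concentration data. The reaction scheme and all numbers are illustrative; the authors' model also includes the inverse reactions and product-inhibition terms.
```python
# A minimal ODE-integration + parameter-fitting sketch for the lactose
# reaction network, with synthetic data standing in for HPLC measurements.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def rates(t, y, k_h, k_t):
    lac, glc, gal, gos = y
    hydrolysis = k_h * lac           # lactose -> glucose + galactose
    transgal = k_t * lac * gal       # lactose + galactose -> GOS (reverse terms omitted)
    return [-hydrolysis - transgal, hydrolysis, hydrolysis - transgal, transgal]

def simulate(k, t_eval, y0):
    sol = solve_ivp(rates, (0, t_eval[-1]), y0, t_eval=t_eval, args=tuple(k))
    return sol.y

t_obs = np.linspace(0, 120, 13)                  # minutes
y0 = [400.0, 0.0, 0.0, 0.0]                      # initial lactose, mM
y_obs = simulate([0.02, 1e-4], t_obs, y0)
y_obs = y_obs + np.random.default_rng(2).normal(0, 2.0, y_obs.shape)

def loss(log_k):
    return np.sum((simulate(np.exp(log_k), t_obs, y0) - y_obs) ** 2)

res = minimize(loss, x0=np.log([0.01, 5e-5]), method="Nelder-Mead")
print("Fitted k_hydrolysis, k_transgal:", np.exp(res.x).round(5))
```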
Procedia PDF Downloads 358
577 The Importance of including All Data in a Linear Model for the Analysis of RNAseq Data
Authors: Roxane A. Legaie, Kjiana E. Schwab, Caroline E. Gargett
Abstract:
Studies looking at changes in gene expression from RNAseq data often make use of linear models. It is also common practice to focus on a subset of the data for a comparison of interest, leaving aside the samples not involved in that particular comparison. This work shows the importance of including all observations in the modeling process to better estimate variance parameters, even when the samples included are not directly used in the comparison under test. The human endometrium is a dynamic tissue that undergoes cycles of growth and regression with each menstrual cycle. The mesenchymal stem cells (MSCs) present in the endometrium are likely responsible for this remarkable regenerative capacity. However, recent studies suggest that MSCs also play a role in the pathogenesis of endometriosis, one of the most common medical conditions affecting the lower abdomen in women, in which endometrial tissue grows outside the womb. In this study, we compared gene expression profiles from RNAseq between MSCs and non-stem-cell counterparts ('non-MSC') obtained from women with ('E') or without ('noE') endometriosis. Raw read counts were used for differential expression analysis with a linear model in the limma-voom R package, including either all samples in the study or only the samples belonging to the subset of interest (e.g., for the comparison 'E vs. noE in MSC cells', including only the MSC samples from E and noE patients and not the non-MSC ones). Using the full dataset, we identified about 100 differentially expressed (DE) genes between E and noE samples among the MSC samples (adjusted p-value < 0.05 and |logFC| > 1), while only 9 DE genes were identified when using only the subset of the data (MSC samples only). Important genes known to be involved in endometriosis, such as KLF9 and RND3, were missed in the latter case. For the MSC vs. non-MSC comparison, the linear model including all samples identified 260 genes for the noE samples (including the stem cell marker SUSD2), while the subset analysis did not identify any DE genes. For the E samples, 12 genes were identified with the first approach and only 1 with the subset approach: although the stem cell marker RGS5 was found in both cases, the subset test missed important genes involved in stem cell differentiation, such as NOTCH3, and other potentially related genes to be used for further investigation and pathway analysis.
Keywords: differential expression, endometriosis, linear model, RNAseq
Procedia PDF Downloads 432
576 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”
Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen
Abstract:
Exoplanet atmospheric parameter retrieval is a complex, computationally intensive inverse-modeling problem in which an exoplanet's atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data, and runtimes are directly proportional to the number of parameters under consideration. These power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance; the use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the speed and accuracy of previous models. We demonstrate the efficacy of artificial intelligence in quickly and reliably predicting atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to run on low-power application-specific integrated circuits. Applying edge computing to atmospheric retrievals allows real- or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge-computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval
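A highly simplified conditional-GAN skeleton is sketched below to illustrate the generator/discriminator structure behind a spectrum-conditioned retrieval network. The fully connected layers, sizes, training data, and loss setup are placeholders and do not reproduce ARcGAN's convolutional architecture.
```python
# A minimal conditional-GAN skeleton on synthetic (spectrum, parameter) pairs.
import torch
import torch.nn as nn

SPEC_DIM, PARAM_DIM, NOISE_DIM = 64, 5, 16   # spectrum bins, params, latent size

# Generator: (spectrum, noise) -> atmospheric parameters
G = nn.Sequential(nn.Linear(SPEC_DIM + NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, PARAM_DIM))
# Discriminator: (spectrum, parameters) -> real/fake logit
D = nn.Sequential(nn.Linear(SPEC_DIM + PARAM_DIM, 128), nn.ReLU(),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Synthetic pairs standing in for forward-model training data
spectra, params = torch.randn(256, SPEC_DIM), torch.randn(256, PARAM_DIM)

for step in range(200):
    noise = torch.randn(256, NOISE_DIM)
    fake = G(torch.cat([spectra, noise], dim=1))

    # Discriminator step: real pairs vs. generated pairs
    d_real = D(torch.cat([spectra, params], dim=1))
    d_fake = D(torch.cat([spectra, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the updated discriminator
    d_fake = D(torch.cat([spectra, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Retrieval is then a single cheap forward pass, which is the speed advantage
with torch.no_grad():
    retrieved = G(torch.cat([spectra[:1], torch.randn(1, NOISE_DIM)], dim=1))
print(retrieved)
```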
575 A System Dynamics Approach for Assessing Policy Impacts on Closed-Loop Supply Chain Efficiency: A Case Study on Electric Vehicle Batteries
Authors: Guannan Ren, Thomas Mazzuchi, Shahram Sarkani
Abstract:
Electric vehicle battery recycling has emerged as a critical process in the transition toward sustainable transportation. As the demand for electric vehicles continues to rise, so does the need to address the end-of-life management of their batteries. Electric vehicle battery recycling benefits resource recovery and supply chain stability by reclaiming valuable metals like lithium, cobalt, nickel, and graphite. The reclaimed materials can then be reintroduced into the battery manufacturing process, reducing the reliance on raw material extraction and the environmental impacts of waste. Current battery recycling rates are insufficient to meet the growing demand for raw materials. While significant progress has been made in electric vehicle battery recycling, many areas still leave room for improvement. Standardization of battery designs, expanded collection and recycling infrastructure, and improved efficiency in recycling processes are essential for scaling up recycling efforts and maximizing material recovery. This work delves into key factors, such as regulatory frameworks, economic incentives, and technological processes, that influence the cost-effectiveness and efficiency of battery recycling systems. A system dynamics model is created that considers variables such as battery production rates, demand and price fluctuations, recycling infrastructure capacity, and the effectiveness of recycling processes, and studies how these variables interconnect to form feedback loops that affect overall supply chain efficiency. Such a model can also help simulate the effects of stricter regulations on battery disposal, incentives for recycling, or investments in research and development for battery designs and advanced recycling technologies. By using the developed model, policymakers, industry stakeholders, and researchers may gain insights into the effects of applying different policies or process updates on electric vehicle battery recycling rates.
Keywords: environmental engineering, modeling and simulation, circular economy, sustainability, transportation science, policy
Procedia PDF Downloads 93
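The abstract does not publish its model equations; as a flavor of the stock-and-flow structure it describes, here is a minimal Python sketch with entirely hypothetical parameter values, where a policy lever (a recycling incentive) raises the recycling rate and reclaimed material substitutes for virgin material in production.

```python
# Toy stock-and-flow model (Euler integration): stocks are batteries in
# use and retired batteries; recycled material offsets virgin demand.
# All parameters are illustrative assumptions, not the paper's data.
def simulate(years=20, dt=0.25, recycling_incentive=0.0):
    in_use, retired, cum_virgin = 1000.0, 0.0, 0.0  # stocks (pack-equivalents)
    production = 200.0   # packs/year entering service
    lifetime = 10.0      # average service life in years
    base_rate = 0.3      # fraction of retired stock recycled per year
    for _ in range(int(years / dt)):
        retirement = in_use / lifetime
        rate = min(1.0, base_rate + recycling_incentive)  # policy lever
        recycling = retired * rate
        # Closed loop: reclaimed material substitutes for virgin material.
        virgin = max(0.0, production - recycling)
        in_use += (production - retirement) * dt
        retired += (retirement - recycling) * dt
        cum_virgin += virgin * dt
    return cum_virgin

print("virgin material demand, baseline:", round(simulate()))
print("with recycling incentive:", round(simulate(recycling_incentive=0.3)))
```

Comparing runs with and without the incentive is the kind of what-if policy experiment the abstract proposes for regulators and industry stakeholders.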
574 Development and Validation of Cylindrical Linear Oscillating Generator
Authors: Sungin Jeong
Abstract:
This paper presents a cylindrical linear oscillating generator for hybrid electric vehicle applications. The focus of the study is to suggest an optimal model and design rules for a cylindrical linear oscillating generator with permanent magnets in the back-iron translator. For initial modeling, the cylindrical topology is described with an equivalent magnetic circuit that accounts for leakage elements. This topology with permanent magnets in the back-iron translator is characterized by the number of phases and the stroke displacement. For a more accurate analysis of an oscillating machine, the thrust of a single-phase system and a three-phase system is compared while the translator moves one pole pitch forward and backward. Through this analysis and comparison, a single-phase system of cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account by finite element analysis. The losses are also examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnets, and iron loss in the electrical steel. Thermal performance and mechanical robustness are essential considerations, because the high temperatures generated in each region of the generator affect the overall efficiency and the insulation of the machine. In addition, an electric machine with linear oscillating movement requires a support system that can resist the dynamic forces and moving masses. Accordingly, a fatigue analysis of the shaft is carried out using the kinetic equations, and the thermal characteristics are analyzed at the operating frequency in each region. The results of this study give important design rules for linear oscillating machines, enabling more accurate machine design and more accurate prediction of machine performance.
Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator
Procedia PDF Downloads 195
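For readers unfamiliar with the equivalent-magnetic-circuit step, the short Python sketch below illustrates the idea under invented geometry and material values (none of which come from the paper): flux paths are modeled as reluctances, a leakage path sits in parallel with the air gap, and the useful gap flux follows a flux-divider rule.

```python
# Equivalent-magnetic-circuit sketch with a leakage element. All geometry
# and material values are illustrative assumptions, not the paper's design.
MU0 = 4e-7 * 3.141592653589793  # permeability of free space (H/m)

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a uniform flux path: R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

R_iron = reluctance(0.20, 4e-4, mu_r=4000)  # steel path (back iron + translator)
R_gap = reluctance(1e-3, 4e-4)              # 1 mm air gap
R_leak = reluctance(5e-3, 1e-4)             # leakage path bypassing the gap

# Leakage in parallel with the gap, the combination in series with the iron.
R_par = (R_gap * R_leak) / (R_gap + R_leak)
R_total = R_iron + R_par

mmf = 1200.0                                 # PM + winding MMF (ampere-turns)
flux_total = mmf / R_total                   # total flux leaving the source (Wb)
flux_gap = flux_total * R_leak / (R_gap + R_leak)  # flux-divider: useful gap flux
print(f"gap flux: {flux_gap:.2e} Wb of {flux_total:.2e} Wb total")
```

In the paper's workflow, a circuit of this kind gives the fast initial sizing, and finite element analysis then refines the design once saturation matters.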
573 Assessing Impacts of Climate Variability and Change on Water Productivity and Nutrient Use Efficiency of Maize in the Semi-arid Central Rift Valley of Ethiopia
Authors: Fitih Ademe, Kibebew Kibret, Sheleme Beyene, Mezgebu Getnet, Gashaw Meteke
Abstract:
Changes in precipitation, temperature, and atmospheric CO2 concentration are expected to alter agricultural productivity patterns worldwide. Soil moisture and nutrient availability are the two key edaphic factors that determine crop yield, and their interactive effects are sensitive to climatic changes. The study assessed the potential impacts of climate change on maize yield and the corresponding water productivity and nutrient use efficiency under climate change scenarios for the Central Rift Valley (CRV) of Ethiopia by mid-century (2041-2070) and end-century (2071-2100). Projected impacts were evaluated using climate scenarios generated from four General Circulation Models (GCMs) dynamically downscaled by the Swedish RCA4 Regional Climate Model (RCM), in combination with two Representative Concentration Pathways (RCP4.5 and RCP8.5). The Decision Support System for Agrotechnology Transfer cropping system model (DSSAT-CSM) was used to simulate yield, water use, and nutrient use for the study periods. Results indicate that rainfed maize yield might decrease on average by 16.5 and 23% by the 2050s and 2080s, respectively, due to climate change. Water productivity is expected to decline on average by 2.2 and 12% in the CRV by mid- and end-century with respect to the baseline. Nutrient uptake and the corresponding nutrient use efficiency (NUE) might also be negatively affected by climate change. Phosphorus uptake will probably decrease in the CRV on average by 14.5 to 18% by the 2050s, while N uptake may not change significantly at Melkassa. Nitrogen and P use efficiency indicators showed decreases of 8.5 to 10.5% and 9.3 to 10.5%, respectively, by the 2050s relative to the baseline average. The simulation results further indicated that a combination of increased water availability and optimum nutrient application might increase both water productivity and nutrient use efficiency in the changed climate, which can ensure modest production in the future. Potential options that can improve water availability and nutrient uptake should be identified for the study locations using a crop modeling approach.
Keywords: crop model, climate change scenario, nutrient uptake, nutrient use efficiency, water productivity
Procedia PDF Downloads 86
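Definitions of the two headline metrics vary across studies; as a point of reference, the Python sketch below uses two common formulations (water productivity as grain yield per unit of seasonal evapotranspiration, and nutrient use efficiency as grain yield per unit of nutrient uptake) with entirely hypothetical numbers, not the paper's simulation outputs.

```python
# Common formulations of the efficiency metrics (illustrative only).
def water_productivity(yield_kg_ha, et_mm):
    """kg of grain per m^3 of water; 1 mm of water over 1 ha = 10 m^3."""
    return yield_kg_ha / (et_mm * 10.0)

def nutrient_use_efficiency(yield_kg_ha, uptake_kg_ha):
    """kg of grain per kg of nutrient taken up by the crop."""
    return yield_kg_ha / uptake_kg_ha

# Hypothetical baseline vs mid-century values for one site:
print(water_productivity(4500, 450))        # baseline: 1.0 kg per m^3
print(water_productivity(3760, 430))        # ~16.5% lower yield, similar ET
print(nutrient_use_efficiency(4500, 100))   # 45 kg grain per kg N uptake
```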
572 Assessment of Students Skills in Error Detection in SQL Classes using Rubric Framework - An Empirical Study
Authors: Dirson Santos De Campos, Deller James Ferreira, Anderson Cavalcante Gonçalves, Uyara Ferreira Silva
Abstract:
Rubrics provide learning research with evaluation criteria and expected performance standards linked to defined student activities, learning goals, and pedagogical objectives. Although rubrics are used in education at all levels, academic literature on rubrics as a tool to support research in SQL education is quite rare. A large class of SQL queries is syntactically correct, yet certainly not all of them are semantically correct. Detecting and correcting errors is a recurring problem in SQL education. In this paper, we use the Rubric Abstract Framework (RAF), which consists of steps that allow us to map information to measure student performance, guided by didactic objectives defined by the teacher and contextualized by rubric-based domain modeling. An empirical study was conducted that demonstrates how rubrics can mitigate student difficulties in finding logical errors and ease the teacher's workload in SQL education. Detecting and correcting logical errors is an important skill for students, and researchers have proposed several ways to improve SQL education because these skills are crucial in software engineering and computer science. The RAF was instantiated in an empirical study carried out during the COVID-19 pandemic in a database course. The pandemic shifted education from face-to-face to remote, without in-person classes. The lab activities were conducted remotely, which hinders the teaching-learning process, in particular, for this research, the verification of evidence or statements of students' knowledge, skills, and abilities (KSAs). A wide range of research in academia and industry involves databases. The innovation proposed in this paper is an approach in which rubrics are used to map logical errors in query formulation, with the gains obtained by students verified empirically. This research approach can be used in the post-pandemic period in both classroom and distance learning.
Keywords: rubric, logical error, structured query language (SQL), empirical study, SQL education
Procedia PDF Downloads 191
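To make the syntactic-vs-semantic distinction concrete, here is a small self-contained Python/SQLite sketch of the kind of logical error the abstract targets: a student query that parses and runs but answers the wrong question. The schema, data, and grading check are hypothetical, not taken from the study.

```python
# A query can be syntactically valid yet semantically wrong: a classic
# WHERE-vs-HAVING confusion. Comparing result sets against a reference
# query flags the logical error automatically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1,'ana',50), (2,'ana',70), (3,'bo',20);
""")

# Task: "customers whose combined order total exceeds 100".
correct = "SELECT customer FROM orders GROUP BY customer HAVING SUM(total) > 100"
# Student answer: runs without error, but filters individual rows
# instead of aggregated groups, so it returns the wrong set.
student = "SELECT DISTINCT customer FROM orders WHERE total > 100"

expected = set(conn.execute(correct).fetchall())
got = set(conn.execute(student).fetchall())
print("logical error detected" if got != expected else "ok")
```

A rubric criterion can then reference this mismatch directly (e.g. "aggregation filtered at the row level"), which is how mapping logical errors to rubric items eases the grading workload.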
571 Epigenetic Modifying Potential of Dietary Spices: Link to Cure Complex Diseases
Authors: Jeena Gupta
Abstract:
In today’s world of pharmaceutical products, one should not forget the healing properties of inexpensive food materials, especially spices. They are known to possess hidden pharmaceutical ingredients that impart anti-microbial, anti-oxidant, anti-inflammatory, and anti-carcinogenic qualities. Furthermore, aberrant epigenetic regulatory mechanisms such as DNA methylation, histone modifications, and altered microRNA expression patterns, which regulate gene expression without changing the DNA sequence, contribute significantly to the development of various diseases. Changing lifestyles and diets exert their effect by influencing these epigenetic mechanisms, which are thus the target of dietary phytochemicals. Bioactive components of plants have been in use for ages, but their potential to reverse epigenetic alterations and prevent disease is yet to be explored. Spices are rich repositories of bioactive constituents, which give them their unique aroma and taste. Some spices, like curcuma and garlic, have been well evaluated for their epigenetic regulatory potential, but for others it is largely unknown. We have evaluated the biological activity of phytoactive components of fennel, cardamom, and fenugreek by in silico molecular modeling, in vitro, and in vivo studies. Ligand-based similarity studies were conducted to identify structurally similar compounds in order to understand their biological behavior. The database searching was done using Fenchone from fennel, Sabinene from cardamom, and Protodioscin from fenugreek as query molecules in different small-molecule databases. Moreover, the results of the database searching showed that these compounds have potential binding with different targets found in the Protein Data Bank. In addition to their potential as epigenetic modifiers, in vitro studies demonstrated the antimicrobial, antifungal, antioxidant, and cytoprotective effects of Fenchone, Sabinene, and Protodioscin. To the best of our knowledge, such studies facilitate target fishing and help chart a roadmap for the drug design and discovery process toward the identification of novel therapeutics.
Keywords: epigenetics, spices, phytochemicals, fenchone
Procedia PDF Downloads 158
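Ligand-based similarity searching of the kind described is typically done by fingerprinting a query molecule and ranking database molecules by Tanimoto similarity. The Python/RDKit sketch below is an assumed illustration of that step; the SMILES are common stand-ins (aspirin vs salicylic acid), not the paper's query compounds (Fenchone, Sabinene, Protodioscin), and RDKit is our choice of toolkit, not necessarily the authors'.

```python
# Ligand-based similarity sketch: Morgan fingerprints + Tanimoto ranking.
# The molecules here are illustrative stand-ins, not the study's queries.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin (stand-in query)
database = {
    "salicylic acid": "O=C(O)c1ccccc1O",
    "benzene": "c1ccccc1",
}

fp_query = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for name, smiles in database.items():
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(fp_query, fp)
    print(f"{name}: Tanimoto = {sim:.2f}")  # higher = more similar scaffold
```

Top-ranked database hits with known protein targets are what enable the "target fishing" the abstract refers to.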
570 Thermodynamic Analysis of Surface Seawater under Ocean Warming: An Integrated Approach Combining Experimental Measurements, Theoretical Modeling, Machine Learning Techniques, and Molecular Dynamics Simulation for Climate Change Assessment
Authors: Nishaben Desai Dholakiya, Anirban Roy, Ranjan Dey
Abstract:
Understanding ocean thermodynamics has become increasingly critical as Earth's oceans serve as the primary planetary heat regulator, absorbing approximately 93% of the excess heat energy from anthropogenic greenhouse gas emissions. This investigation presents a comprehensive analysis of Arabian Sea surface seawater thermodynamics, focusing specifically on heat capacity (Cp) and the thermal expansion coefficient (α), parameters fundamental to global heat distribution patterns. Through high-precision experimental measurements of ultrasonic velocity and density across varying temperature (293.15-318.15 K) and salinity (0.5-35 ppt) conditions, the study characterizes critical thermophysical parameters, including specific heat capacity, thermal expansion, and the isobaric and isothermal compressibility coefficients, in natural seawater systems. The study employs advanced machine learning frameworks (Random Forest, Gradient Boosting, Stacked Ensemble Machine Learning (SEML), and AdaBoost), with SEML achieving exceptional accuracy (R² > 0.99) in heat capacity predictions. The findings reveal significant temperature-dependent molecular restructuring: enhanced thermal energy disrupts hydrogen-bonded networks and ion-water interactions, manifesting as a decrease in heat capacity with increasing temperature (negative ∂Cp/∂T). This mechanism creates a positive feedback loop in which reduced heat absorption capacity potentially accelerates oceanic warming cycles. These quantitative insights into seawater thermodynamics provide crucial parametric inputs for climate models and evidence-based environmental policy formulation, particularly addressing the critical knowledge gap in the thermal expansion behavior of seawater under varying temperature-salinity conditions.
Keywords: climate change, arabian sea, thermodynamics, machine learning
Procedia PDF Downloads 17
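The abstract's SEML framework stacks the named base learners; one standard way to build such a stack is scikit-learn's StackingRegressor. The Python sketch below is a generic, assumed reconstruction of that pattern on synthetic temperature-salinity data spanning the stated experimental ranges; the feature set, meta-learner, and toy target are our inventions, not the authors' pipeline.

```python
# Stacked ensemble sketch: Random Forest, Gradient Boosting, and AdaBoost
# base learners combined by a ridge meta-learner, trained on synthetic
# (temperature, salinity) -> Cp-like data. Illustrative only.
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
T = rng.uniform(293.15, 318.15, 500)  # temperature (K), abstract's range
S = rng.uniform(0.5, 35.0, 500)       # salinity (ppt), abstract's range
X = np.column_stack([T, S])
# Toy target mimicking Cp falling with temperature and salinity (not real data).
y = 4200 - 1.5 * (T - 293.15) - 5.0 * S + rng.normal(0, 2, 500)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("ada", AdaBoostRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),  # meta-learner blending the base predictions
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack.fit(X_tr, y_tr)
print("held-out R^2:", round(stack.score(X_te, y_te), 3))
```

The meta-learner sees only the base models' out-of-fold predictions, which is what lets a stack outperform any single constituent, consistent with the abstract's report that SEML beat the individual learners.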