Search results for: mathematical modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5288

728 An Analysis of the Performances of Various Buoys as the Floats of Wave Energy Converters

Authors: İlkay Özer Erselcan, Abdi Kükner, Gökhan Ceylan

Abstract:

The power generated by eight point-absorber-type wave energy converters, each having a different buoy, is calculated in order to investigate the performance of the buoys in this study. The calculations are carried out by modeling three different sea states observed in two different locations in the Black Sea. The floats analyzed in this study have two basic geometries and four different draft/radius (d/r) ratios. The buoys possess the shapes of a semi-ellipsoid and a semi-elliptic paraboloid. Additionally, the draft/radius ratios range from 0.25 to 1 in increments of 0.25. The radiation forces acting on the buoys due to the oscillatory motions of these bodies are evaluated by employing a 3D panel method along with a distribution of 3D pulsating sources in the frequency domain. The wave forces acting on the buoys, taken as the sum of Froude-Krylov forces and diffraction forces, are calculated using linear wave theory. Furthermore, the wave energy converters are assumed to be taut-moored to the seabed so that the secondary body, which houses the power take-off system, oscillates with much smaller amplitudes than the buoy. As a result, it is assumed that the motions of the housing body make no significant contribution to power generation and that the only contribution comes from the buoy. The power take-off systems of the wave energy converters are high-pressure oil hydraulic systems that are identical in terms of their characteristic parameters. The results show that the power generated by wave energy converters with semi-ellipsoid floats is higher than that of converters with semi-elliptic paraboloid floats in both locations and in all sea states. It is also determined that the power generated by the wave energy converters does not vary monotonically with the draft/radius ratio of the floats, i.e., it does not simply decrease or increase with changing draft/radius ratios. Although the highest power level is obtained with a semi-ellipsoid float with a draft/radius ratio of 1, floats with a draft/radius ratio of 0.25 delivered higher power than the floats with a draft/radius ratio of 1 in some cases.
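
As an illustration of the frequency-domain power calculation described above, the following minimal Python sketch evaluates the average power absorbed by a single heaving buoy with a linear PTO damper. All numerical values (mass, added mass, radiation damping, excitation force, PTO coefficient) are illustrative placeholders, not the hydrodynamic coefficients the study obtains from its 3D panel method.

```python
import numpy as np

rho, g = 1025.0, 9.81            # sea water density [kg/m^3], gravity [m/s^2]
r = 3.0                          # buoy radius [m]
m = 45.0e3                       # buoy mass [kg]
A_w = np.pi * r**2               # waterplane area [m^2]
C = rho * g * A_w                # hydrostatic restoring coefficient [N/m]

omega = np.linspace(0.3, 2.0, 200)       # wave frequencies [rad/s]
A_rad = 0.5 * m * np.ones_like(omega)    # added mass (placeholder values)
B_rad = 2.0e4 * np.ones_like(omega)      # radiation damping (placeholder values)
F_exc = 1.0e5 * np.ones_like(omega)      # excitation force amplitude (placeholder values)
B_pto = 5.0e4                            # PTO damping coefficient [N*s/m]

# Complex heave amplitude from the single-DOF equation of motion:
# [-omega^2 (m + A) + i*omega*(B + B_pto) + C] * xi = F_exc
xi = F_exc / (-omega**2 * (m + A_rad) + 1j * omega * (B_rad + B_pto) + C)

# Time-averaged power absorbed by the PTO for each regular wave frequency
P = 0.5 * B_pto * omega**2 * np.abs(xi)**2
print(f"Peak absorbed power: {P.max()/1e3:.1f} kW at omega = {omega[P.argmax()]:.2f} rad/s")
```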

Keywords: Black Sea, buoys, hydraulic power take-off system, wave energy converters

Procedia PDF Downloads 350
727 Machine Learning Approach for Predicting Students’ Academic Performance and Study Strategies Based on Their Motivation

Authors: Fidelia A. Orji, Julita Vassileva

Abstract:

This research aims to develop machine learning models for students' academic performance and study strategy prediction, which could be generalized to all courses in higher education. Key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) used in building the models are chosen based on prior studies, which revealed that the attributes are essential in students' learning process. Previous studies revealed the individual effects of each of these attributes on students' learning progress. However, few studies have investigated the combined effect of the attributes in predicting student study strategy and academic performance to reduce the dropout rate. To bridge this gap, we used Scikit-learn in Python to build five machine learning models (Decision Tree, K-Nearest Neighbour, Random Forest, Linear/Logistic Regression, and Support Vector Machine) for both regression and classification tasks to perform our analysis. The models were trained, evaluated, and tested for accuracy using data from 924 university dentistry students collected by Chilean authors through a quantitative research design. A comparative analysis of the models revealed that the tree-based models, such as the random forest (with a prediction accuracy of 94.9%) and the decision tree, show the best results compared to the linear, support vector, and k-nearest neighbour models. The models built in this research can be used to predict student performance and study strategy so that appropriate interventions can be implemented to improve student learning progress. Thus, incorporating strategies that could improve diverse student learning attributes in the design of online educational systems may increase the likelihood of students continuing with their learning tasks as required. Moreover, the results show that the attributes can be modelled together and used to adapt/personalize the learning process.
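
The following minimal sketch shows how the five model families named in the abstract can be compared with Scikit-learn, the library the authors report using. The data here are synthetic stand-ins generated on the fly; the feature names follow the motivation attributes listed above but are not the Chilean dataset.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in data for 924 students; real features would be questionnaire scores.
features = ["intrinsic", "extrinsic", "autonomy", "relatedness", "competence", "self_esteem"]
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(924, len(features))), columns=features)
df["study_strategy"] = (df["intrinsic"] + df["competence"] + rng.normal(size=924) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["study_strategy"], test_size=0.2, random_state=42)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```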

Keywords: classification models, learning strategy, predictive modeling, regression models, student academic performance, student motivation, supervised machine learning

Procedia PDF Downloads 127
726 Physical Activity Self-Efficacy among Pregnant Women with High Risk for Gestational Diabetes Mellitus: A Cross-Sectional Study

Authors: Xiao Yang, Ji Zhang, Yingli Song, Hui Huang, Jing Zhang, Yan Wang, Rongrong Han, Zhixuan Xiang, Lu Chen, Lingling Gao

Abstract:

Aim and Objectives: To examine physical activity self-efficacy, identify its predictors, and further explore the mechanism of action among the predictors in mainland Chinese pregnant women at high risk for gestational diabetes mellitus (GDM). Background: Physical activity could protect pregnant women from developing GDM. Physical activity self-efficacy was the key predictor of physical activity. Design: A cross-sectional study was conducted from October 2021 to May 2022 in Zhengzhou, China. Methods: 252 eligible pregnant women completed the Pregnancy Physical Activity Self-efficacy Scale, the Social Support for Physical Activity Scale, the Knowledge on Physical Activity Questionnaire, the 7-item Generalized Anxiety Disorder scale, the Edinburgh Postnatal Depression Scale, and a socio-demographic data sheet. Multiple linear regression was applied to explore the predictors of physical activity self-efficacy. Structural equation modeling was used to explore the mechanism of action among the predictors. Results: Chinese pregnant women at high risk for GDM reported a moderate level of physical activity self-efficacy. The best-fit regression analysis revealed that four variables explained 17.5% of the variance in physical activity self-efficacy. Social support for physical activity was the strongest predictor, followed by knowledge of physical activity, intention to do physical activity, and anxiety symptoms. The model analysis indicated that knowledge of physical activity could relieve anxiety and depressive symptoms and thereby increase physical activity self-efficacy. Conclusion: The present study revealed a moderate level of physical activity self-efficacy. Interventions targeting pregnant women at high risk for GDM need to address the predictors of physical activity self-efficacy. Relevance to clinical practice: To help pregnant women at high risk for GDM engage in physical activity, healthcare professionals may find it useful to assess physical activity self-efficacy and intervene as early as possible, starting at the first antenatal visit. Physical activity intervention programs focused on self-efficacy may be conducted in further research.
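
A minimal sketch of the multiple linear regression step used to identify predictors of self-efficacy is shown below, using statsmodels. The data are synthetic and the variable names only mirror the constructs in the abstract; this is not the study's questionnaire dataset or its structural equation model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data for 252 women; real values would be questionnaire scores.
rng = np.random.default_rng(0)
n = 252
df = pd.DataFrame({
    "social_support": rng.normal(size=n),
    "pa_knowledge": rng.normal(size=n),
    "intention": rng.normal(size=n),
    "anxiety": rng.normal(size=n),
})
df["self_efficacy"] = (0.4 * df["social_support"] + 0.3 * df["pa_knowledge"]
                       + 0.2 * df["intention"] - 0.2 * df["anxiety"]
                       + rng.normal(scale=1.0, size=n))

X = sm.add_constant(df[["social_support", "pa_knowledge", "intention", "anxiety"]])
model = sm.OLS(df["self_efficacy"], X).fit()
print(model.summary())   # coefficients, p-values, and R^2 (variance explained)
```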

Keywords: physical activity, gestational diabetes, self-efficacy, predictors

Procedia PDF Downloads 99
725 A Contemporary Advertising Strategy on Social Networking Sites

Authors: M. S. Aparna, Pushparaj Shetty D.

Abstract:

Nowadays, social networking sites have become so popular that producers and sellers regard these sites as one of the best options for targeting the right audience to market their products. Several tools are available to monitor or analyze social networks. Our task is to identify the right community web pages, analyze the behavior of their members by using these tools, and formulate an appropriate strategy to market the products or services so as to achieve the set goals. Advertising becomes more effective when information about the product or service comes from a known source. The strategy exploits the strong buying influence that referral marketing exerts on the audience. Our methodology proceeds with a critical budget analysis and promotes viral influence propagation. In this context, we encompass the vital elements of budget evaluation, such as the number of optimal seed nodes (primary influential users activated at the onset), an estimated coverage spread of nodes, and the maximum influence propagation distance from an initial seed to an end node. Our proposed Buyer Prediction mathematical model arises from the need to perform complex analysis when the probability density estimates of reliable factors are not known or are difficult to calculate. Order statistics and the Buyer Prediction mapping function guarantee the selection of optimal influential users at each level. We employ an efficient tactic of leveraging community pages and user behavior to determine product enthusiasts on social networks. Our approach is promising and should be an elementary choice when there is little or no prior knowledge of the distribution of potential buyers on social networks. In this strategy, product news propagates to influential users on or surrounding networks. By applying the same technique, a user can search for friends who can give better advice or referrals if a product interests them.
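
The sketch below illustrates budget-constrained seed selection for viral influence propagation using a standard independent cascade simulation on a networkx graph with a greedy heuristic. It is not the authors' Buyer Prediction model or order-statistics mapping; the graph, activation probability, and budget are all illustrative assumptions.

```python
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, rng=None):
    """Simulate one independent-cascade diffusion; return the set of activated nodes."""
    rng = rng or random.Random()
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new_frontier = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return active

def expected_spread(G, seeds, runs=50):
    """Monte Carlo estimate of the expected coverage spread of a seed set."""
    return sum(len(independent_cascade(G, seeds, rng=random.Random(i)))
               for i in range(runs)) / runs

def greedy_seeds(G, budget):
    """Greedily pick seed users under a budget (number of initially activated users)."""
    seeds = []
    for _ in range(budget):
        best = max((n for n in G if n not in seeds),
                   key=lambda n: expected_spread(G, seeds + [n]))
        seeds.append(best)
    return seeds

G = nx.barabasi_albert_graph(300, 3, seed=1)   # stand-in for a community network
seeds = greedy_seeds(G, budget=3)
print("Seed users:", seeds, "expected coverage:", expected_spread(G, seeds))
```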

Keywords: viral marketing, social network analysis, community web pages, buyer prediction, influence propagation, budget constraints

Procedia PDF Downloads 261
724 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data

Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali

Abstract:

The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into training, test, and validation sets in several different proportions, and kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictive ability of the developed model on the basis of performance evaluation metrics (R², RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
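
As a minimal sketch of the SVM regression workflow described above (kernel choice, hyperparameter tuning, and R², RMSE, and MAE evaluation), the code below uses scikit-learn with synthetic data. The paper reports KNIME and RStudio workflows; this Python version only illustrates the modeling step, and the feature names and coefficients are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# Synthetic stand-in for one year of daily sensor-node records.
features = ["Rs", "T", "P", "RH", "u2", "R", "DP", "ST", "dSM"]
rng = np.random.default_rng(0)
X_all = pd.DataFrame(rng.normal(size=(365, len(features))), columns=features)
y_all = 2.0 + 0.8 * X_all["Rs"] + 0.5 * X_all["T"] - 0.3 * X_all["RH"] + rng.normal(scale=0.2, size=365)

X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=0.3, random_state=42)

pipe = make_pipeline(StandardScaler(), SVR())
grid = GridSearchCV(pipe, {
    "svr__kernel": ["rbf", "poly"],
    "svr__C": [1, 10, 100],
    "svr__gamma": ["scale", 0.1, 0.01],
}, cv=5, scoring="r2")
grid.fit(X_train, y_train)

pred = grid.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("MAE:", mean_absolute_error(y_test, pred))
```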

Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors

Procedia PDF Downloads 68
723 Surprise Fraudsters Before They Surprise You: A South African Telecommunications Case Study

Authors: Ansoné Human, Nantes Kirsten, Tanja Verster, Willem D. Schutte

Abstract:

Every year the telecommunications industry suffers huge losses due to fraud. Mobile fraud, or more generally telecommunications fraud, is the use of telecommunication products or services to acquire money illegally from a telecommunication company or to avoid paying it. A South African telecommunication operator developed two internal fraud scorecards to mitigate future risks of application fraud events. The scorecards aim to predict the likelihood of an application being fraudulent and to surprise fraudsters before they surprise the telecommunication operator by identifying fraud at the time of application. The scorecards are utilised in the vetting process to evaluate the applicant in terms of the fraud risk the applicant would present to the telecommunication operator. Telecommunication providers can utilise these scorecards to profile customers, as well as to isolate fraudulent and/or high-risk applicants. We provide the complete methodology utilised in the development of the scorecards. Furthermore, a Determination and Discrimination (DD) ratio is provided in the methodology to select the most influential variables from a group of related variables. Throughout the development of these scorecards, the following was revealed regarding fraudulent cases and fraudster behaviour within the telecommunications industry: Fraudsters typically target high-value handsets. Furthermore, debit order dates scheduled for the end of the month have the highest fraud probability. The fraudsters target specific stores. Applicants who acquire an expensive package and receive a medium income, as well as applicants who obtain an expensive package and receive a high income, have higher fraud percentages. If, one month prior to application, an account is already in arrears (two months or more), the applicant has a high probability of fraud. Applicants with the highest average spend on calls have a higher probability of fraud. If the amount collected changes from month to month, the likelihood of fraud is higher. Lastly, young and middle-aged applicants have a higher probability of being targeted by fraudsters than applicants of other ages.
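
The sketch below shows one common way an application fraud scorecard can be built: a logistic regression whose log-odds output is rescaled to scorecard points with a points-to-double-odds (PDO) scaling. It is only a generic illustration, not the operator's scorecard, and it omits the paper's DD-ratio variable selection; the data and feature names are synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic application data; real scorecards use operator-specific variables.
rng = np.random.default_rng(42)
n = 1000
X = pd.DataFrame({
    "handset_value": rng.normal(size=n),
    "arrears_months": rng.integers(0, 4, size=n),
    "avg_call_spend": rng.normal(size=n),
})
logit = 1.2 * X["handset_value"] + 0.8 * X["arrears_months"] + 0.5 * X["avg_call_spend"] - 2.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # 1 = fraudulent application

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Points-to-double-odds scaling: score = offset - factor * log(odds of fraud),
# so higher scores mean lower fraud risk. Base values are illustrative.
pdo, base_score, base_odds = 20.0, 600.0, 50.0
factor = pdo / np.log(2)
offset = base_score - factor * np.log(base_odds)

log_odds = clf.decision_function(X)      # linear predictor = log(p_fraud / (1 - p_fraud))
scores = offset - factor * log_odds
print(X.assign(fraud=y, score=scores.round()).head())
```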

Keywords: application fraud scorecard, predictive modeling, regression, telecommunications

Procedia PDF Downloads 119
722 Fuzzy Logic-Based Approach to Predict Fault in Transformer Oil Based on Health Index Using Dissolved Gas Analysis

Authors: Kharisma Utomo Mulyodinoto, Suwarno, Ahmed Abu-Siada

Abstract:

Transformer insulating oil is a key component that can be utilized to detect incipient faults within operating transformers without taking them out of service. Dissolved gas-in-oil analysis has been widely accepted as a powerful technique to detect such incipient faults. While the measurement of dissolved gases within transformer oil samples has been standardized over the past two decades, analysis of the results is not always straightforward, as it depends on personnel expertise more than on mathematical formulas. In analyzing such data, the generation rate of each dissolved gas is of more concern than the absolute value of the gas. As such, the history of dissolved gases within a particular transformer should be archived for future comparison. Lack of such history may lead to misinterpretation of the obtained results. The IEEE C57.104-2008 standard classifies the health condition of the transformer into four conditions based on the absolute value of individual dissolved gases along with the total dissolved combustible gas (TDCG) within the transformer oil. While the technique is easy to implement, it is considered a very conservative technique and is not widely accepted as a reliable interpretation tool. Moreover, measured gases for the same oil sample can fall within the limits of different conditions, and hence misinterpretation of the data is expected. To overcome this limitation, this paper introduces a fuzzy logic approach to predict the health condition of the transformer oil based on the IEEE C57.104-2008 standard along with the Roger ratio and IEC ratio-based methods. DGA results of 31 oil samples, chosen from 469 transformer oil samples of normal transformers and transformers with pre-known fault types collected from the Indonesian electrical utility company PT. PLN (Persero), covering different voltage ratings (500/150 kV, 150/20 kV, and 70/20 kV), different capacities (500 MVA, 60 MVA, 50 MVA, 30 MVA, 20 MVA, 15 MVA, and 10 MVA), and different lifespans, are used to test and establish the fuzzy logic model. Results show that the proposed approach has good accuracy and can be considered a platform toward the standardization of the dissolved gas interpretation process.
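
To make the fuzzy logic idea concrete, the numpy sketch below fuzzifies a single gas concentration with triangular membership functions, applies a small Mamdani-style rule set, and defuzzifies a health index by the centroid method. The breakpoints and rules are illustrative placeholders only; they are not the IEEE C57.104-2008 condition limits or the paper's fitted membership functions.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with breakpoints (a, b, c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzify one gas (e.g., H2 in ppm); breakpoints are placeholders, not standard limits.
h2 = 180.0
mu_low  = trimf(h2, -50.0, 0.0, 150.0)
mu_med  = trimf(h2, 100.0, 300.0, 700.0)
mu_high = trimf(h2, 500.0, 1800.0, 5000.0)

# Rules mapping gas level to a health index (0 = poor, 100 = good), aggregated over a
# discretised output universe and defuzzified by centroid.
hi = np.linspace(0.0, 100.0, 501)
out_bad  = np.minimum(mu_high, trimf(hi, -40.0, 0.0, 40.0))
out_fair = np.minimum(mu_med,  trimf(hi, 20.0, 50.0, 80.0))
out_good = np.minimum(mu_low,  trimf(hi, 60.0, 100.0, 140.0))
aggregate = np.maximum.reduce([out_bad, out_fair, out_good])
health_index = np.sum(hi * aggregate) / np.sum(aggregate)
print(f"Fuzzy health index for H2 = {h2} ppm: {health_index:.1f}")
```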

Keywords: dissolved gas analysis, fuzzy logic, health index, IEEE C57.104-2008, IEC ratio method, Roger ratio method

Procedia PDF Downloads 155
721 From Text to Data: Sentiment Analysis of Presidential Election Political Forums

Authors: Sergio V Davalos, Alison L. Watkins

Abstract:

User-generated content (UGC), such as a website post, has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in user-generated content can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and as the sum of occurrences across all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis. The application of SASA resulted in a sentiment score for each post. Based on the sentiment scores for the posts, there are significant differences between the content and sentiment of the two sets for the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time. For both elections, as the election drew closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates. In the case of Trump, the number of negative posts exceeded Clinton's highest number of posts, which were positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time. Initially, the political parties were the most referenced, and as the election drew closer the emphasis shifted to the candidates. The SASA method predicted sentiment better than four other methods in SentiBench. The research resulted in deriving sentiment data from text. In combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
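
The sketch below illustrates per-post sentiment scoring and a running cumulative sentiment over time, of the kind the analysis above relies on. Since SASA itself is not publicly packaged here, NLTK's VADER analyzer is used as a stand-in lexicon-based scorer, and the forum posts are invented examples.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Hypothetical forum posts ordered by time; the study used 2012/2016 election forum posts.
posts = [
    "Great debate performance tonight, very convincing arguments.",
    "The candidates keep dodging the real questions.",
    "This campaign has become a complete disaster.",
]

cumulative = 0.0
for i, post in enumerate(posts, 1):
    score = sia.polarity_scores(post)["compound"]   # compound score in [-1, 1]
    cumulative += score
    print(f"post {i}: compound={score:+.3f}  cumulative={cumulative:+.3f}")
```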

Keywords: sentiment analysis, text mining, user generated content, US presidential elections

Procedia PDF Downloads 190
720 Subsidiary Entrepreneurial Orientation, Trust in Headquarters and Performance: The Mediating Role of Autonomy

Authors: Zhang Qingzhong

Abstract:

Although there is an increasing number of research studies on the headquarters-subsidiary relationship, and within this context a focus on subsidiaries' contributory role to multinational corporations (MNCs), subsidiary autonomy and the conditions under which autonomy exerts an effect on subsidiary performance still constitute a subject of debate in the literature. The objective of this research is to study the relationship between MNC subsidiary autonomy and performance and the effect of subsidiary entrepreneurial orientation and trust on subsidiary autonomy in the Chinese environment, a phenomenon that has not yet been studied. The research addresses the following three questions: (i) Is subsidiary autonomy associated with MNC subsidiary performance in the Chinese environment? (ii) How do subsidiary entrepreneurship and its trust in headquarters affect the level of subsidiary autonomy and its relationship with subsidiary performance? (iii) Does subsidiary autonomy have a mediating effect on subsidiary performance with respect to the subsidiary's entrepreneurship and trust in headquarters? In the present study, we reviewed the literature and conducted semi-structured interviews with senior executives of multinational corporation (MNC) subsidiaries in China. Building on our insights from the interviews and taking the perspectives of four theories, namely the resource-based view (RBV), resource dependency theory, the integration-responsiveness framework, and social exchange theory, as well as the extant articles on subsidiary autonomy, entrepreneurial orientation, trust, and subsidiary performance, we developed a model and explored the direct and mediating effects of subsidiary autonomy on subsidiary performance within the framework of the MNC. To test the model, we collected and analyzed data from two waves of a cross-industry online survey of 102 MNC subsidiaries in China. We used structural equation modeling to test the measurement model, the direct-effect model, and the conceptual framework with its hypotheses. Our findings confirm that (a) subsidiary autonomy is positively related to subsidiary performance; (b) subsidiary entrepreneurial orientation is positively related to subsidiary autonomy; (c) the subsidiary's trust in headquarters has a positive effect on subsidiary autonomy; (d) subsidiary autonomy mediates the relationship between entrepreneurial orientation and subsidiary performance; (e) subsidiary autonomy mediates the relationship between trust and subsidiary performance. Our study highlights the important role of subsidiary autonomy in leveraging the resource of subsidiary entrepreneurial orientation and its trust relationship with headquarters to achieve high performance. We discuss the theoretical and managerial implications of the findings and propose directions for future research.
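
As a simplified illustration of testing a mediation effect such as hypothesis (d) above, the sketch below runs the classic three-regression mediation check with statsmodels on synthetic data. The study itself estimated a full structural equation model on survey data; this is only a schematic alternative with made-up values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data for 102 subsidiaries; variable names follow the abstract.
rng = np.random.default_rng(7)
n = 102
eo = rng.normal(size=n)                                   # entrepreneurial orientation
autonomy = 0.5 * eo + rng.normal(scale=0.8, size=n)       # proposed mediator
performance = 0.6 * autonomy + 0.1 * eo + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"eo": eo, "autonomy": autonomy, "performance": performance})

total    = smf.ols("performance ~ eo", df).fit()             # total effect (c path)
a_path   = smf.ols("autonomy ~ eo", df).fit()                # a path
b_direct = smf.ols("performance ~ eo + autonomy", df).fit()  # b path and direct effect (c')
indirect = a_path.params["eo"] * b_direct.params["autonomy"]
print(f"total={total.params['eo']:.3f}  direct={b_direct.params['eo']:.3f}  indirect={indirect:.3f}")
```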

Keywords: subsidiary entrepreneurial orientation, trust, subsidiary autonomy, subsidiary performance

Procedia PDF Downloads 186
719 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network

Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio

Abstract:

The transportation of dangerous goods by lorries throughout Europe must be done using the roads that make up the European Road Transport Network. In this network, there are several parking areas where lorry drivers can park to rest according to the regulations. According to the "European Agreement concerning the International Carriage of Dangerous Goods by Road", parking areas where lorries transporting dangerous goods can park to rest must follow several security stipulations to keep the rest of the road users safe. In this respect, these lorries must be parked in adapted areas with strict and permanent surveillance measures. Moreover, drivers must satisfy several restrictions on resting and driving times. Given these facts, one might expect that there are enough parking areas for the transport of this type of goods to obey the regulations prescribed by the European Union and its member countries. However, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to ensure that lorry drivers can transport dangerous goods while following all the stipulations about security and safety for their stops? The sense of the word "optimal" is that we give a global solution for the location of parking areas throughout the whole European Road Transport Network, keeping the number of additional areas as low as possible. To do so, we have modeled the problem using graph theory, since we are working with a road network. As nodes, we have considered the locations of each already-existing parking area, each loading-and-unloading area, and each road bifurcation. Each road connecting two nodes is considered an edge of the graph, whose weight corresponds to the distance between the two nodes. By applying a new efficient algorithm, we have found the additional nodes of the network representing the new parking areas adapted for dangerous goods, under the constraint that the distance between two parking areas must be less than or equal to 400 km.
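
The authors' algorithm is not reproduced here, but the sketch below shows the graph-based formulation on a toy weighted networkx graph, with a simple greedy heuristic that keeps adding parking nodes until every node lies within 200 km of some parking area (so that two consecutive stops are at most 400 km apart). Edges, distances, and the heuristic itself are illustrative assumptions.

```python
import networkx as nx

# Toy weighted road graph (edge weights in km); nodes stand for parking areas,
# loading/unloading points, or road bifurcations, as in the paper's model.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 180), ("B", "C", 250), ("C", "D", 300),
    ("D", "E", 220), ("B", "E", 390), ("E", "F", 160),
])
parking = {"A"}          # already-existing parking areas
MAX_GAP = 400.0          # maximum allowed distance between consecutive stops [km]

def farthest_from_parking(G, parking):
    """Return (node, distance) for the node farthest from any parking area."""
    dist = nx.multi_source_dijkstra_path_length(G, parking, weight="weight")
    return max(dist.items(), key=lambda kv: kv[1])

# Greedy heuristic: add a parking area at the worst-covered node until every
# node is within half the allowed gap of some parking area.
while True:
    node, d = farthest_from_parking(G, parking)
    if d <= MAX_GAP / 2:
        break
    parking.add(node)

print("Parking areas after augmentation:", sorted(parking))
```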

Keywords: trans-european transport network, dangerous goods, parking areas, graph-based modeling

Procedia PDF Downloads 280
718 3D Codes for Unsteady Interaction Problems of Continuous Mechanics in Euler Variables

Authors: M. Abuziarov

Abstract:

The designed code package is intended for the numerical simulation of fast dynamic processes involving the interaction of heterogeneous media subject to significant deformation. The main challenges in solving such problems are associated with the construction of the numerical meshes. Currently, there are two basic approaches to this problem. The first uses a Lagrangian or Lagrangian-Eulerian grid attached to the boundaries of the media; the second uses a fixed Eulerian mesh whose boundary cells cut the boundaries of the media and therefore requires the calculation of these cut volumes. Both approaches require complex grid generators and significant time to prepare the code's input data for simulation. In these codes, the problems are solved using two grids: a regular fixed grid and a mobile local Eulerian-Lagrangian grid (ALE approach) that follows the contact and free boundaries, the surfaces of shock waves and phase transitions, and other possible features of the solutions, with mutual interpolation of the integrated parameters. For modeling both fluids (liquids and gases) and deformable solids, a Godunov scheme of increased accuracy is used in Lagrangian-Eulerian variables, the same for the Euler equations and for the Euler-Cauchy equations describing the deformation of the solid. The increased accuracy of the scheme is achieved by using a three-dimensional, space-time-dependent solution of the discontinuity problem (a 3D space-time-dependent Riemann problem solver). The same solution is used to calculate the interaction at the liquid-solid interface (the fluid-structure interaction problem). The codes do not require complex 3D mesh generators; only the surfaces of the simulated objects, given as STL files created by means of engineering graphics, are supplied by the user, which greatly simplifies task preparation and makes the codes convenient for direct use by the designer at the design stage. The results of test solutions and of applications related to the generation and propagation of detonation and shock waves and to the loading of structures are presented.
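
To illustrate the Godunov idea of evaluating fluxes through a Riemann problem at each cell face, the sketch below implements a plain one-dimensional Godunov-type finite-volume scheme for the Euler equations with an approximate HLL Riemann solver on a Sod shock tube. This is a deliberately minimal stand-in: the codes described above use a 3D, space-time-dependent exact Riemann solver on ALE grids, which is far beyond this example.

```python
import numpy as np

gamma = 1.4
N, L, t_end, cfl = 400, 1.0, 0.2, 0.5
dx = L / N

def primitive_to_conserved(rho, u, p):
    E = p / (gamma - 1.0) + 0.5 * rho * u**2
    return np.array([rho, rho * u, E])

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """Approximate HLL Riemann flux at a cell face."""
    rhoL, uL = UL[0], UL[1] / UL[0]
    rhoR, uR = UR[0], UR[1] / UR[0]
    pL = (gamma - 1.0) * (UL[2] - 0.5 * rhoL * uL**2)
    pR = (gamma - 1.0) * (UR[2] - 0.5 * rhoR * uR**2)
    cL, cR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)
    sL, sR = min(uL - cL, uR - cR), max(uL + cL, uR + cR)
    if sL >= 0.0:
        return flux(UL)
    if sR <= 0.0:
        return flux(UR)
    return (sR * flux(UL) - sL * flux(UR) + sL * sR * (UR - UL)) / (sR - sL)

# Sod shock tube: high pressure/density on the left, low on the right.
U = np.zeros((N, 3))
U[: N // 2] = primitive_to_conserved(1.0, 0.0, 1.0)
U[N // 2 :] = primitive_to_conserved(0.125, 0.0, 0.1)

t = 0.0
while t < t_end - 1e-12:
    rho, u = U[:, 0], U[:, 1] / U[:, 0]
    p = (gamma - 1.0) * (U[:, 2] - 0.5 * rho * u**2)
    dt = min(cfl * dx / np.max(np.abs(u) + np.sqrt(gamma * p / rho)), t_end - t)
    F = np.array([hll_flux(U[i], U[i + 1]) for i in range(N - 1)])
    U[1:-1] -= dt / dx * (F[1:] - F[:-1])   # boundary cells kept fixed (waves stay interior)
    t += dt

print("density range after the shock tube run:", U[:, 0].min(), U[:, 0].max())
```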

Keywords: fluid structure interaction, Riemann's solver, Euler variables, 3D codes

Procedia PDF Downloads 437
717 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change how we think about, plan, and produce a specific field. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that our evaluation and planning can be done properly and efficiently from day one. The challenge in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, the available well tests, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil followed by a gas-injection period for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with this plan. New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 92
716 The Relationship between Personal, Psycho-Social and Occupational Risk Factors with Low Back Pain Severity in Industrial Workers

Authors: Omid Giahi, Ebrahim Darvishi, Mahdi Akbarzadeh

Abstract:

Introduction: Occupational low back pain (LBP) is one of the most prevalent work-related musculoskeletal disorders, and many risk factors are involved in it. The present study focuses on the relation between personal, psycho-social, and occupational risk factors and LBP severity in industrial workers. Materials and Methods: This research was a case-control study conducted in Kurdistan province. 100 workers (mean age ± SD of 39.9 ± 10.45) with LBP were selected as the case group, and 100 workers (mean age ± SD of 37.2 ± 8.5) without LBP were assigned to the control group. All participants were selected from various industrial units, and they had similar occupational conditions. The required data, including demographic information (BMI, smoking, alcohol, and family history), occupational factors (posture, mental workload (MWL), force, vibration, and repetition), and psychosocial factors (stress, occupational satisfaction, and security), were collected via consultation with occupational medicine specialists, interviews, the related questionnaires, the NASA-TLX software, and the REBA worksheet. The chi-square test, logistic regression, and structural equation modeling (SEM) were used to analyze the data, using IBM SPSS Statistics 24 and Mplus 6. Results: 114 (77%) of the individuals were male and 86 (23%) were female. The mean career lengths of the case group and control group were 10.90 ± 5.92 and 9.22 ± 4.24, respectively. The statistical analysis of the data revealed a significant correlation between posture, smoking, stress, satisfaction, and MWL and occupational LBP. The odds ratios (95% confidence intervals) derived from a logistic regression model were 2.7 (1.27-2.24), 2.5 (2.26-5.17), and 3.22 (2.47-3.24) for stress, MWL, and posture, respectively. The SEM analysis of the personal, psycho-social, and occupational factors with LBP also revealed a significant correlation. Conclusion: All three broad categories of risk factors simultaneously increase the risk of occupational LBP in the workplace, but posture, stress, and MWL play a major role in LBP severity. Therefore, prevention strategies for persons in jobs with high risks for LBP are required to decrease the risk of occupational LBP.
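
The sketch below shows how odds ratios with 95% confidence intervals of the kind reported above can be derived from a logistic regression with statsmodels. The data are synthetic and the variable names only mirror the abstract; this is not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic case-control style data (1 = LBP case, 0 = control); values are illustrative.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "stress": rng.normal(size=n),
    "mwl": rng.normal(size=n),
    "posture_score": rng.normal(size=n),
})
logit_p = 0.8 * df["stress"] + 0.7 * df["mwl"] + 0.9 * df["posture_score"]
df["lbp"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(df[["stress", "mwl", "posture_score"]])
res = sm.Logit(df["lbp"], X).fit(disp=False)

odds_ratios = pd.DataFrame({
    "OR": np.exp(res.params),
    "CI 2.5%": np.exp(res.conf_int()[0]),
    "CI 97.5%": np.exp(res.conf_int()[1]),
})
print(odds_ratios.round(2))
```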

Keywords: industrial workers occupational, low back pain, occupational risk factors, psychosocial factors

Procedia PDF Downloads 257
715 Material Concepts and Processing Methods for Electrical Insulation

Authors: R. Sekula

Abstract:

Epoxy composites are broadly used as electrical insulation in high-voltage applications, since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product depend strongly on a proper manufacturing process that minimizes material failures such as excessive shrinkage, voids, and cracks. Therefore, the use of proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, is essential for the final quality of the produced components. In this paper, an approach to three-dimensional modeling of all molding stages, namely filling, curing, and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics that have been successfully implemented to simulate the reactive, exothermic nature of the system. A dedicated simulation procedure for stress and shrinkage calculations, as well as simulation results, is also presented in the paper. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting material properties. Finally, with tough environmental regulations in mind, in addition to current process and design aspects, an approach to product re-design is presented, focusing on the replacement of the epoxy material with a thermoplastic one. Such a "design-for-recycling" method is one of the new directions associated with the development of new material and processing concepts for electrical products and brings many additional research challenges. One of the successful products is presented to illustrate the methodology.
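
As an example of the kind of reaction-kinetics model such reactive molding simulations embed, the sketch below integrates a Kamal-type autocatalytic cure equation with SciPy under an isothermal mold temperature. The kinetic constants are illustrative placeholders, not the fitted epoxy parameters used in the paper's CFD tool.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Kamal-type autocatalytic cure kinetics: dα/dt = (k1 + k2·α^m)·(1 − α)^n,
# with Arrhenius rate constants k_i = A_i·exp(−E_i / (R·T)).
R = 8.314            # gas constant [J/(mol·K)]
A1, E1 = 1.0e5, 6.0e4
A2, E2 = 5.0e5, 5.5e4
m, n = 0.5, 1.5
T = 400.0            # isothermal mold temperature [K]

def dalpha_dt(t, alpha):
    alpha = np.clip(alpha, 0.0, 1.0)            # keep the degree of cure in [0, 1]
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

sol = solve_ivp(dalpha_dt, (0.0, 3600.0), [1e-4], dense_output=True, max_step=10.0)
for t in (600, 1800, 3600):
    print(f"t = {t:4d} s  degree of cure = {sol.sol(t)[0]:.3f}")
```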

Keywords: curing, epoxy insulation, numerical simulations, recycling

Procedia PDF Downloads 277
714 Analysis and the Fair Distribution Modeling of Urban Facilities in Kabul City

Authors: Ansari Mohammad Reza, Hiroko Ono, Fakhrullah Sarwari

Abstract:

Our world is fast heading toward being a predominantly urban planet. This can be a double-edged reality that is as frightening as it is interesting. Moreover, a look at current predictions, together with the fact that about 90 percent of the coming urbanization is going to be absorbed by the towns and cities of the developing countries of Asia and Africa, gives us reason to expect a much more tragic ending to this story than a happy one. Likewise, in a situation in which most of these countries are still severely struggling to answer their very first questions of urbanization—e.g., how to provide the essential structure for their cities, define the regulations, or even design the proper pattern by which the cities should expand—it is not unreasonable to claim that most of the coming urbanization of the world is going to happen informally. This reality not only casts doubt on the character, landscape, and overall picture of the cities of the future, but at the same time raises a set of other essential questions: how will facilities be distributed in these cities, and how fair will this pattern of distribution be? Kabul, the capital of Afghanistan, is a city in the developing world whose urbanization process has been under way since 2001; it is currently the fifth fastest-growing city in the world and has a considerable slum ratio of 0.7, meaning that about 70 percent of its population lives in informal areas. It can therefore serve as a very good case study for investigating how the informal development of a city can lead to an unfair and unbalanced distribution of its facilities. Likewise, in this study we first propose an ideal model for the fair distribution of facilities in Kabul city—in which all citizens have the same equal chance of access to the facilities—and then evaluate how fairly the facilities are currently distributed in the city. We do this through a comparative analysis between the existing facility rates in the formal and informal areas of the city and those proposed in the fair ideal model.

Keywords: Afghanistan, facility distribution, formal settlements, informal settlements, Kabul

Procedia PDF Downloads 118
713 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consisted of a simplified centrifugal blood pump model that contains fluid flow features commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, different pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds stress model (RSM)) and LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were used to partition an unstructured mesh of 76 million elements and were compared in terms of efficiency. Computations were performed on the JUQUEEN BG/Q architecture applying the highly parallel flow solver Code_Saturne and typically using 32,768 or more processors in parallel. Visualisations were performed by means of ParaView. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The results showed that an RSM represents an appropriate choice for modeling high-Reynolds-number flow cases. In particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 381
712 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between attribute classes of these factors and sinkhole events, deriving class weights to indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
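
The numpy sketch below makes the Frequency Ratio and Weighted Linear Combination steps concrete for two hypothetical factors on a tiny raster. The class counts, FR values, and AHP weights are invented placeholders, not the Greene County data.

```python
import numpy as np

# Frequency Ratio (FR) for one influencing factor (e.g., slope classes):
# FR = (% of sinkholes in class) / (% of study area in class).
sinkholes_in_class = np.array([12, 30, 40, 18])      # hypothetical sinkhole counts per class
area_in_class      = np.array([400, 500, 300, 200])  # hypothetical class areas (cell counts)
fr_slope = (sinkholes_in_class / sinkholes_in_class.sum()) / (area_in_class / area_in_class.sum())

# A second, equally hypothetical factor (e.g., distance-to-fault classes).
fr_fault = np.array([2.1, 1.3, 0.7, 0.4])

# AHP-derived factor weights (placeholders; the study derives them from pairwise comparisons).
w_slope, w_fault = 0.6, 0.4

# Weighted Linear Combination on a tiny raster: each cell stores the class index (0-3)
# of each factor, and the SSI is the weighted sum of the corresponding FR values.
slope_class = np.array([[0, 1, 2], [3, 2, 1]])
fault_class = np.array([[1, 1, 3], [0, 2, 2]])
ssi = w_slope * fr_slope[slope_class] + w_fault * fr_fault[fault_class]
print(np.round(ssi, 2))
```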

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 73
711 Distant Speech Recognition Using Laser Doppler Vibrometer

Authors: Yunbin Deng

Abstract:

Most existing applications of automatic speech recognition rely on cooperative subjects at a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to only a few feet. As such, most deployed applications of standoff speech recognition are limited to indoor use at short range. Moreover, these applications require an air path between the subject and the sensor to achieve a reasonable signal-to-noise ratio. This study reports long-range (50 feet) automatic speech recognition experiments using a Laser Doppler Vibrometer (LDV) sensor. This study shows that the LDV sensor modality can extend the speech acquisition standoff distance far beyond microphone arrays to hundreds of feet. In addition, LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security, and counter-terrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, are collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slat, and a concrete wall. These are common materials the application could encounter in daily life. These data were compared with their microphone counterparts to show the impact of various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches are used to conduct continuous speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using time-delay neural networks, bi-directional long short-term memory, and model fusion show great promise for using LDV for long-range speech recognition. To the author's best knowledge, this is the first time an LDV has been reported for a long-distance speech recognition application.
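
The sketch below defines a bidirectional-LSTM acoustic model that maps frames of acoustic features to per-frame phoneme posteriors, the kind of component mentioned above. It is a generic PyTorch illustration with assumed dimensions (40 features, 48 phonemes), not the study's fused TDNN/BLSTM system.

```python
import torch
import torch.nn as nn

class BLSTMPhonemeNet(nn.Module):
    """Frame-level phoneme classifier over acoustic features (e.g., 40-dim filterbanks)."""
    def __init__(self, n_feats=40, hidden=256, n_phonemes=48):
        super().__init__()
        self.blstm = nn.LSTM(input_size=n_feats, hidden_size=hidden, num_layers=3,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_phonemes)   # 2x hidden because of the two directions

    def forward(self, x):                  # x: (batch, time, n_feats)
        h, _ = self.blstm(x)               # h: (batch, time, 2*hidden)
        return self.out(h)                 # per-frame phoneme logits

model = BLSTMPhonemeNet()
frames = torch.randn(8, 300, 40)           # dummy batch: 8 utterances, 300 frames each
logits = model(frames)
print(logits.shape)                         # torch.Size([8, 300, 48])
```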

Keywords: covert speech acquisition, distant speech recognition, DSR, laser Doppler vibrometer, LDV, speech intelligence surveillance and reconnaissance, ISR

Procedia PDF Downloads 177
710 The Importance of including All Data in a Linear Model for the Analysis of RNAseq Data

Authors: Roxane A. Legaie, Kjiana E. Schwab, Caroline E. Gargett

Abstract:

Studies looking at changes in gene expression from RNAseq data often make use of linear models. It is also common practice to focus on a subset of the data for a comparison of interest, leaving aside the samples not involved in this particular comparison. This work shows the importance of including all observations in the modeling process to better estimate variance parameters, even when the samples included are not directly used in the comparison under test. The human endometrium is a dynamic tissue, which undergoes cycles of growth and regression with each menstrual cycle. The mesenchymal stem cells (MSCs) present in the endometrium are likely responsible for this remarkable regenerative capacity. However, recent studies suggest that MSCs also play a role in the pathogenesis of endometriosis, one of the most common medical conditions affecting the lower abdomen in women, in which endometrial tissue grows outside the womb. In this study, we compared RNAseq gene expression profiles between MSCs and non-stem-cell counterparts ('non-MSC') obtained from women with ('E') or without ('noE') endometriosis. Raw read counts were used for differential expression analysis using a linear model with the limma-voom R package, including either all samples in the study or only the samples belonging to the subset of interest (e.g., for the comparison 'E vs noE in MSC cells', including only MSC samples from E and noE patients but not the non-MSC ones). Using the full dataset, we identified about 100 differentially expressed (DE) genes between E and noE samples within the MSC samples (adj. p-val < 0.05 and |logFC| > 1), while only 9 DE genes were identified when using only the subset of data (MSC samples only). Important genes known to be involved in endometriosis, such as KLF9 and RND3, were missed in the latter case. When looking at the MSC vs non-MSC comparison, the linear model including all samples identified 260 genes for noE samples (including the stem cell marker SUSD2), while the subset analysis did not identify any DE genes. When looking at E samples, 12 genes were identified with the first approach and only 1 with the subset approach. Although the stem cell marker RGS5 was found in both cases, the subset test missed important genes involved in stem cell differentiation, such as NOTCH3, as well as other potentially related genes that could be used for further investigation and pathway analysis.
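
The study's analysis was done with the limma-voom R package; the small Python sketch below only illustrates the underlying statistical point for a single gene: fitting the linear model on all four groups leaves more residual degrees of freedom for estimating the error variance than fitting on the two-group subset, even though the comparison of interest uses the same group means. The toy data are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy log-expression values for one gene: 4 groups of 5 samples (MSC/non-MSC x E/noE).
rng = np.random.default_rng(3)
groups = ["MSC_E", "MSC_noE", "nonMSC_E", "nonMSC_noE"]
df = pd.DataFrame({
    "group": np.repeat(groups, 5),
    "expr": np.concatenate([rng.normal(loc=mu, scale=1.0, size=5) for mu in (5.0, 6.0, 4.0, 4.2)]),
})

full = smf.ols("expr ~ C(group)", df).fit()                                   # all 20 samples
subset = smf.ols("expr ~ C(group)", df[df.group.str.startswith("MSC")]).fit() # MSC samples only

print("full model:   df_resid =", int(full.df_resid), " sigma^2 =", round(full.scale, 3))
print("subset model: df_resid =", int(subset.df_resid), " sigma^2 =", round(subset.scale, 3))
```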

Keywords: differential expression, endometriosis, linear model, RNAseq

Procedia PDF Downloads 430
709 Metal Extraction into Ionic Liquids and Hydrophobic Deep Eutectic Mixtures

Authors: E. E. Tereshatov, M. Yu. Boltoeva, V. Mazan, M. F. Volia, C. M. Folden III

Abstract:

Room temperature ionic liquids (RTILs) are a class of liquid organic salts with melting points below 20 °C that are considered to be environmentally friendly 'designer' solvents. Pure hydrophobic ILs are known to extract metallic species from aqueous solutions. The closest analogues of ionic liquids are deep eutectic solvents (DESs), which are eutectic mixtures of at least two compounds with a melting point lower than that of each individual component. DESs are acknowledged to be attractive for organic synthesis and metal processing. Thus, these non-volatile and less toxic compounds are of interest for critical metal extraction. The US Department of Energy and the European Commission consider indium a key metal. Its chemical homologue, thallium, is also an important material for some applications and for environmental safety. The aim of this work is to systematically investigate In and Tl extraction from aqueous solutions into pure fluorinated ILs and hydrophobic DESs. The dependence of the Tl extraction efficiency on the structure and composition of the ionic liquid ions, the metal oxidation state, and the initial metal and aqueous acid concentrations has been studied. The extraction efficiency of the TlX_z^(3−z) anionic species (where X = Cl⁻ and/or Br⁻) is greater for ionic liquids with more hydrophobic cations. Unexpectedly high distribution ratios (> 10³) of Tl(III) were determined even when applying a pure ionic liquid as the receiving phase. An improved mathematical model based on ion exchange and ion pair formation mechanisms has been developed to describe the co-extraction of two different anionic species, and the relative contributions of each mechanism have been determined. The first evidence of indium extraction into new quaternary ammonium- and menthol-based hydrophobic DESs from hydrochloric and oxalic acid solutions, with distribution ratios up to 10³, will be provided. The data obtained allow us to interpret the mechanism of thallium and indium extraction into IL and DES media. Understanding the chemical behavior of Tl and In in these new media is imperative for the further improvement of the separation and purification of these elements.

Keywords: deep eutectic solvents, indium, ionic liquids, thallium

Procedia PDF Downloads 239
708 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive inverse modeling problem in which an exoplanet's atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the speed and accuracy of the previous model. We demonstrate the efficacy of artificial intelligence in quickly and reliably predicting atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
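
The PyTorch sketch below shows the general shape of a conditional GAN for this task: a generator conditioned on an observed spectrum produces an atmospheric parameter vector, and a discriminator scores (parameters, spectrum) pairs. It uses plain dense layers and assumed dimensions for brevity; the actual ARcGAN is deep convolutional and its architecture is not reproduced here.

```python
import torch
import torch.nn as nn

SPEC_BINS, N_PARAMS, LATENT = 128, 5, 32   # illustrative sizes, not the ARcGAN design

class Generator(nn.Module):
    """Maps (noise, observed spectrum) -> atmospheric parameter vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + SPEC_BINS, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, N_PARAMS),
        )
    def forward(self, z, spectrum):
        return self.net(torch.cat([z, spectrum], dim=1))

class Discriminator(nn.Module):
    """Scores whether a (parameters, spectrum) pair looks like a real retrieval."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PARAMS + SPEC_BINS, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, params, spectrum):
        return self.net(torch.cat([params, spectrum], dim=1))

G, D = Generator(), Discriminator()
spectrum = torch.randn(16, SPEC_BINS)          # dummy batch of observed spectra
z = torch.randn(16, LATENT)
fake_params = G(z, spectrum)
print(fake_params.shape, D(fake_params, spectrum).shape)   # (16, 5) and (16, 1)
```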

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 168
707 A System Dynamics Approach for Assessing Policy Impacts on Closed-Loop Supply Chain Efficiency: A Case Study on Electric Vehicle Batteries

Authors: Guannan Ren, Thomas Mazzuchi, Shahram Sarkani

Abstract:

Electric vehicle battery recycling has emerged as a critical process in the transition toward sustainable transportation. As the demand for electric vehicles continues to rise, so does the need to address the end-of-life management of their batteries. Electric vehicle battery recycling benefits resource recovery and supply chain stability by reclaiming valuable materials such as lithium, cobalt, nickel, and graphite. The reclaimed materials can then be reintroduced into the battery manufacturing process, reducing the reliance on raw material extraction and the environmental impacts of waste. Current battery recycling rates are insufficient to meet the growing demand for raw materials. While significant progress has been made in electric vehicle battery recycling, many areas can still be improved. Standardization of battery designs, increased collection and recycling infrastructure, and improved efficiency of recycling processes are essential for scaling up recycling efforts and maximizing material recovery. This work delves into key factors, such as regulatory frameworks, economic incentives, and technological processes, that influence the cost-effectiveness and efficiency of battery recycling systems. A system dynamics model that considers variables such as battery production rates, demand and price fluctuations, recycling infrastructure capacity, and the effectiveness of recycling processes is created to study how these variables are interconnected, forming feedback loops that affect the overall supply chain efficiency. Such a model can also help simulate the effects of stricter regulations on battery disposal, incentives for recycling, or investments in research and development for battery designs and advanced recycling technologies. By using the developed model, policymakers, industry stakeholders, and researchers may gain insights into the effects of applying different policies or process updates on electric vehicle battery recycling rates.
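
A stripped-down stock-and-flow sketch of the system dynamics idea is shown below: stocks for batteries in use, end-of-life batteries, and recovered material, with a policy lever on the recycling rate, integrated with a simple Euler loop. All rates and initial values are illustrative, and the feedback of recovered material into production is omitted to keep the sketch short; it is not the authors' model.

```python
import numpy as np

def simulate(recycling_rate, years=20, dt=0.25):
    """Toy closed-loop stock-and-flow model; all parameters are illustrative."""
    in_use, eol, recycled = 1.0, 0.0, 0.0    # stocks (arbitrary units of battery packs / material)
    production = 0.5                         # new batteries produced per year
    lifetime = 10.0                          # average battery lifetime [years]
    for _ in np.arange(0.0, years, dt):
        retirement = in_use / lifetime       # flow: in-use -> end-of-life
        recycling = recycling_rate * eol     # flow: end-of-life -> recovered material
        in_use   += dt * (production - retirement)
        eol      += dt * (retirement - recycling)
        recycled += dt * recycling           # cumulative recovered material
    return recycled

for rate in (0.1, 0.3, 0.6):                 # policy scenarios: weak vs strong recycling incentives
    print(f"recycling rate {rate:.1f} -> recovered material after 20 years: {simulate(rate):.2f}")
```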

Keywords: environmental engineering, modeling and simulation, circular economy, sustainability, transportation science, policy

Procedia PDF Downloads 90
706 Development and Validation of Cylindrical Linear Oscillating Generator

Authors: Sungin Jeong

Abstract:

This paper presents a linear oscillating generator of cylindrical type for hybrid electric vehicle applications. The focus of the study is to suggest the optimal model and design rule for a cylindrical linear oscillating generator with permanent magnets in the back-iron translator. The cylindrical topology is initially modeled using an equivalent magnetic circuit that takes leakage elements into account. This topology with permanent magnets in the back-iron translator is described by the number of phases and the stroke displacement. For a more accurate analysis of an oscillating machine, the thrust of the single-phase and three-phase systems is compared while moving one pole pitch forward and backward. Through this analysis and comparison, a single-phase system of cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account through finite element analysis. In addition, the losses are examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnets, and iron loss in the specific electrical steel material. Consideration of thermal performance and mechanical robustness is essential, because the high temperatures generated by the losses in each region of the generator affect the overall efficiency and the insulation of the machine. Moreover, an electric machine with linear oscillating motion requires a support system that can withstand dynamic forces and mechanical masses. As a result, the fatigue analysis of the shaft is carried out using the kinetic equations. The thermal characteristics are also analyzed as a function of the operating frequency in each region. The results of this study provide a very important design rule for linear oscillating machines, enabling more accurate machine design and more accurate prediction of machine performance.
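
The sketch below illustrates the equivalent-magnetic-circuit idea used for the initial modeling step: series reluctances for the iron path and air gap, a leakage path in parallel with the gap, and the permanent magnet represented as an MMF source with internal reluctance. All dimensions and material values are illustrative placeholders, not the paper's generator geometry.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi

def reluctance(length, area, mu_r=1.0):
    """Magnetic reluctance of a path segment [A-turns/Wb]."""
    return length / (MU0 * mu_r * area)

# Illustrative geometry/material values only.
A_gap  = 4.0e-4                                   # gap cross-section [m^2]
R_gap  = reluctance(1.0e-3, A_gap)                # air gap: 1 mm
R_core = reluctance(0.12, A_gap, mu_r=4000.0)     # back-iron path
R_leak = reluctance(8.0e-3, 1.0e-4)               # simple leakage path in parallel with the gap

# Permanent magnet modeled as an MMF source with internal reluctance (recoil mu_r ~ 1).
Br, l_pm, A_pm = 1.2, 5.0e-3, 4.0e-4              # remanence [T], length [m], area [m^2]
F_pm = Br * l_pm / MU0                            # equivalent MMF [A-turns]
R_pm = reluctance(l_pm, A_pm)

# Series/parallel combination, then air-gap flux via a flux divider.
R_gap_leak = 1.0 / (1.0 / R_gap + 1.0 / R_leak)
phi_total = F_pm / (R_pm + R_core + R_gap_leak)
phi_gap = phi_total * R_leak / (R_gap + R_leak)
print(f"air-gap flux density ~ {phi_gap / A_gap:.2f} T")
```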

Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator

Procedia PDF Downloads 194
705 Disaster Management Supported by Unmanned Aerial Systems

Authors: Agoston Restas

Abstract:

Introduction: This paper describes a number of initiatives and gives practical examples of the recent use of Unmanned Aerial Systems (UAS) to support disaster management. Since the operation of manned aircraft at disasters is usually not only expensive but often impossible as well, in many cases managers fail to make use of aerial activity. UAS can be an alternative and, moreover, a cost-effective solution for supporting disaster management. Methods: This article uses a thematic division of UAS applications based on two key elements: one is the time flow of managing disasters, the other is their tactical requirements. Logically, UAS can be used for pre-disaster activity, activity immediately after the occurrence of a disaster, and activity after the primary disaster response. The paper addresses different disasters, such as dangerous material releases, floods, earthquakes, forest fires, and human-induced disasters. The research used functional analysis, practical experiments, mathematical formulas, economic analysis, and expert estimation. The author gathered international examples and also drew on his own experience in this field. Results and discussion: An earthquake is a rapidly escalating disaster in which, many times, there is no other way to carry out a rapid damage assessment than aerial reconnaissance. For special rescue teams, UAS can greatly help in quickly locating places where enough space remains for victims to survive. Floods are typical slow-onset disasters. In contrast, managing floods is a very complex and difficult task that requires continuous monitoring of dykes and of flooded and threatened areas. UAS can help managers considerably in keeping an area under observation. Forest fires are disasters for which the tactical application of UAS is already well developed; it can be used for fire detection, intervention monitoring, and post-fire monitoring. In the case of a nuclear accident or hazardous material leakage, UAS is also a very effective tool, and can be the only one, for supporting disaster management. The paper also shows some efforts to use UAS to avoid human-induced disasters in low-income countries as part of health cooperation.

Keywords: disaster management, floods, forest fires, Unmanned Aerial Systems

Procedia PDF Downloads 236
704 Assessing Impacts of Climate Variability and Change on Water Productivity and Nutrient Use Efficiency of Maize in the Semi-arid Central Rift Valley of Ethiopia

Authors: Fitih Ademe, Kibebew Kibret, Sheleme Beyene, Mezgebu Getnet, Gashaw Meteke

Abstract:

Changes in precipitation, temperature and atmospheric CO2 concentration are expected to alter agricultural productivity patterns worldwide. Soil moisture and nutrient availability are the two key edaphic factors that interactively determine crop yield and are sensitive to climatic changes. The study assessed the potential impacts of climate change on maize yield and the corresponding water productivity and nutrient use efficiency under climate change scenarios for the Central Rift Valley (CRV) of Ethiopia by mid-century (2041-2070) and end of century (2071-2100). Projected impacts were evaluated using climate scenarios generated from four General Circulation Models (GCMs) dynamically downscaled by the Swedish RCA4 Regional Climate Model (RCM) in combination with two Representative Concentration Pathways (RCP4.5 and RCP8.5). The Decision Support System for Agrotechnology Transfer cropping system model (DSSAT-CSM) was used to simulate yield, water use, and nutrient use for the study periods. Results indicate that rainfed maize yield might decrease on average by 16.5 and 23% by the 2050s and 2080s, respectively, due to climate change. Water productivity is expected to decline on average by 2.2 and 12% in the CRV by mid and end century with respect to the baseline. Nutrient uptake and the corresponding nutrient use efficiency (NUE) might also be negatively affected by climate change. Phosphorus uptake will probably decrease in the CRV by 14.5 to 18% on average by the 2050s, while N uptake may not change significantly at Melkassa. Nitrogen and P use efficiency indicators showed decreases of 8.5 to 10.5% and 9.3 to 10.5%, respectively, by the 2050s relative to the baseline average. The simulation results further indicated that a combination of increased water availability and optimum nutrient application might increase both water productivity and nutrient use efficiency under the changed climate, which can ensure modest production in the future. Potential options that can improve water availability and nutrient uptake should be identified for the study locations using a crop modeling approach.
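
As an illustration of the derived indicators (not the authors' code), the following Python sketch computes water productivity (grain yield per unit of water evapotranspired) and a simple agronomic nitrogen use efficiency from simulated outputs, together with the percent change relative to a baseline. The variable names and numeric values are hypothetical.

```python
def water_productivity(yield_kg_ha, evapotranspiration_mm):
    """Water productivity in kg of grain per m^3 of water.
    1 mm of ET over 1 ha corresponds to 10 m^3 of water."""
    return yield_kg_ha / (evapotranspiration_mm * 10.0)

def nitrogen_use_efficiency(yield_kg_ha, n_applied_kg_ha):
    """Simple agronomic NUE: kg of grain per kg of N applied."""
    return yield_kg_ha / n_applied_kg_ha

def percent_change(scenario_value, baseline_value):
    return 100.0 * (scenario_value - baseline_value) / baseline_value

# Hypothetical baseline vs. mid-century scenario values for one site
baseline = {"yield": 4200.0, "et": 450.0, "n": 64.0}
scenario_2050s = {"yield": 3500.0, "et": 430.0, "n": 64.0}

wp_base = water_productivity(baseline["yield"], baseline["et"])
wp_2050 = water_productivity(scenario_2050s["yield"], scenario_2050s["et"])
nue_base = nitrogen_use_efficiency(baseline["yield"], baseline["n"])
nue_2050 = nitrogen_use_efficiency(scenario_2050s["yield"], scenario_2050s["n"])

print(f"Water productivity change: {percent_change(wp_2050, wp_base):.1f}%")
print(f"NUE change: {percent_change(nue_2050, nue_base):.1f}%")
```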

Keywords: crop model, climate change scenario, nutrient uptake, nutrient use efficiency, water productivity

Procedia PDF Downloads 84
703 Assessment of Students Skills in Error Detection in SQL Classes using Rubric Framework - An Empirical Study

Authors: Dirson Santos De Campos, Deller James Ferreira, Anderson Cavalcante Gonçalves, Uyara Ferreira Silva

Abstract:

Rubrics in learning research provide evaluation criteria and expected performance standards linked to defined student activities and pedagogical objectives. Although rubrics are used in education at all levels, the academic literature on rubrics as a tool to support research in SQL education is quite scarce. A large class of SQL queries is syntactically correct, but certainly not all of them are semantically correct. Detecting and correcting errors is a recurring problem in SQL education. In this paper, we use the Rubric Abstract Framework (RAF), which consists of steps that allow us to map the information needed to measure student performance, guided by didactic objectives defined by the teacher and contextualized by domain modeling through the rubric. An empirical study was conducted that demonstrates how rubrics can mitigate students' difficulties in finding logical errors and ease the teacher's workload in SQL education. Detecting and correcting logical errors is an important skill for students. Researchers have proposed several ways to improve SQL education, because the skills involved in understanding this paradigm are crucial in software engineering and computer science. The RAF was instantiated in an empirical study carried out in a database course during the COVID-19 pandemic. The pandemic transformed face-to-face education into remote education, without in-person classes. The lab activities were conducted remotely, which hinders the teaching-learning process and, in particular for this research, the verification of evidence of students' knowledge, skills, and abilities (KSAs). A wide range of work in academia and industry involves databases. The innovation proposed in this paper is the approach in which the results of using rubrics to map logical errors in query formulation are analyzed, with the gains obtained by students empirically verified. The research approach can be used in the post-pandemic period in both classroom and distance learning.
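
To make the distinction between syntactic and logical correctness concrete, here is a minimal sketch, not the authors' RAF implementation, of rubric-style scoring of logical errors in query formulation. The criterion names, weights, and the embedded SQL example are hypothetical.

```python
# Syntactically valid query with a logical error: the join predicate between
# enrollments and courses is missing, so the query returns a Cartesian
# product with the courses table instead of matching rows.
student_query = """
SELECT s.name, c.title
FROM students s, enrollments e, courses c
WHERE s.id = e.student_id;
"""

# Rubric: each criterion has a weight; a submission loses the weight of every
# criterion it violates.
rubric = {
    "complete_join_conditions": 0.4,    # all tables joined on the right keys
    "correct_filter_logic": 0.3,        # WHERE conditions match the task
    "correct_grouping_aggregation": 0.2,
    "no_redundant_tables": 0.1,
}

def score(violated_criteria):
    """Return a 0..1 rubric score given the criteria the submission violates."""
    lost = sum(rubric[c] for c in violated_criteria)
    return max(0.0, 1.0 - lost)

# The grader (or an automated checker) flags the missing join as a violation
# of the first criterion.
print(score(["complete_join_conditions"]))   # -> 0.6
```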

Keywords: rubric, logical error, structured query language (SQL), empirical study, SQL education

Procedia PDF Downloads 188
702 Study of Properties of Concretes Made of Local Building Materials and Containing Admixtures, and Their Further Introduction in Construction Operations and Road Building

Authors: Iuri Salukvadze

Abstract:

The development of the Georgian economy largely depends on the effective use of its potential as a transit country. Georgia's value as part of the Europe-Asia corridor has increased, which raises the interest of western and eastern countries in Georgia as a country lying on the transit axis and implies the creation and development of transit infrastructure in Georgia. It is important to use compacted concrete with admixtures in the modern road construction industry. Even in the 21st century, concrete remains the main structural building material; therefore, innovative, economical, and environmentally friendly technologies are needed. The Georgian construction market requires new-generation concrete and the adaptation of nanotechnologies to local realities, which will make it possible to create multifunctional, highly effective nanotechnological materials. It is highly important to investigate their physical and mechanical properties. Studying compacted concrete with admixtures is necessary for its future use in road construction and for increasing the durability of roads in Georgia. The aim of the research is to study the physical and mechanical properties of compacted concrete with admixtures based on local materials. Any experimental study requires, on the one hand, a large number of experiments to achieve high accuracy and, on the other hand, an optimal number of experiments to minimize cost and time. To solve this problem in practice, statistical and mathematical methods of experiment planning can be used. For the study of material properties, we will assume that the measurement results follow a normal distribution, according to which the scatter of the obtained results is caused by the error of the method and the inhomogeneity of the specimens. As a result of the study, we will obtain durable compacted concrete with admixtures for motor roads, which will improve the road infrastructure and yield savings in both road construction and operation.
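
The following Python sketch illustrates, under the normal-distribution assumption stated above, how the scatter of strength measurements might be analyzed (normality test, confidence interval, coefficient of variation). It is not the authors' procedure, and the strength values are placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical 28-day compressive strength measurements (MPa) for one
# concrete mix with an admixture; the values are placeholders.
strength = np.array([42.1, 44.3, 41.8, 43.6, 45.0, 42.9, 43.2, 44.1])

# Test the normal-distribution assumption used to attribute scatter to
# method error and specimen inhomogeneity.
w_stat, p_value = stats.shapiro(strength)

mean = strength.mean()
std = strength.std(ddof=1)            # sample standard deviation
n = strength.size
# 95% confidence interval for the mean, based on the t-distribution
ci = stats.t.interval(0.95, df=n - 1, loc=mean, scale=std / np.sqrt(n))

print(f"Shapiro-Wilk p-value: {p_value:.3f} (p > 0.05 -> normality not rejected)")
print(f"Mean strength: {mean:.1f} MPa, 95% CI: ({ci[0]:.1f}, {ci[1]:.1f}) MPa")
print(f"Coefficient of variation: {100 * std / mean:.1f}%")
```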

Keywords: construction, seismic protection systems, soil, motor roads, concrete

Procedia PDF Downloads 243
701 Epigenetic Modifying Potential of Dietary Spices: Link to Cure Complex Diseases

Authors: Jeena Gupta

Abstract:

In today's world of pharmaceutical products, one should not forget the healing properties of inexpensive food materials, especially spices. They are known to contain hidden pharmaceutical ingredients that give them antimicrobial, antioxidant, anti-inflammatory, and anticarcinogenic properties. Furthermore, aberrant epigenetic regulatory mechanisms, such as DNA methylation, histone modifications, or altered microRNA expression patterns, which regulate gene expression without changing the DNA sequence, contribute significantly to the development of various diseases. Changing lifestyles and diets exert their effects by influencing these epigenetic mechanisms, which are thus targets of dietary phytochemicals. Bioactive plant components have been used for ages, but their potential to reverse epigenetic alterations and prevent disease remains to be explored. Spices are rich repositories of bioactive constituents, which give them their unique aroma and taste. Some spices, such as curcuma and garlic, have been well evaluated for their epigenetic regulatory potential, but for others it remains largely unknown. We have evaluated the biological activity of the phytoactive components of fennel, cardamom, and fenugreek through in silico molecular modeling and in vitro and in vivo studies. Ligand-based similarity studies were conducted to identify structurally similar compounds and understand their biological behavior. Database searching was performed using fenchone from fennel, sabinene from cardamom, and protodioscin from fenugreek as query molecules against different small-molecule databases. The results of the database searches showed that these compounds can potentially bind to different targets found in the Protein Data Bank. In addition to their role as epigenetic modifiers, in vitro studies demonstrated the antimicrobial, antifungal, antioxidant, and cytoprotective effects of fenchone, sabinene, and protodioscin. To the best of our knowledge, such studies facilitate target fishing and provide a roadmap for the drug design and discovery process in the identification of novel therapeutics.
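
As a hedged illustration of the ligand-based similarity step described above (not the authors' workflow), the following Python sketch uses the open-source RDKit toolkit to compute Tanimoto similarity between a query molecule and candidate structures via Morgan fingerprints. The SMILES strings and the similarity threshold are simple placeholders, not the actual fenchone, sabinene, or protodioscin structures.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

query_smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"           # placeholder query molecule
candidates = {
    "candidate_1": "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O",  # placeholder database entries
    "candidate_2": "C1=CC=C(C=C1)C(=O)O",
}

def fingerprint(smiles):
    """Morgan (circular) fingerprint as a 2048-bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

query_fp = fingerprint(query_smiles)
for name, smi in candidates.items():
    sim = DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi))
    # Keep candidates above an (arbitrary) similarity threshold
    flag = "hit" if sim >= 0.3 else "skip"
    print(f"{name}: Tanimoto = {sim:.2f} ({flag})")
```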

Keywords: epigenetics, spices, phytochemicals, fenchone

Procedia PDF Downloads 156
700 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements. Since this is impossible, points are measured at regular intervals to characterize the surface, and a DTM of the Earth is generated from them. To date, classical measurement techniques and photogrammetry have been widely used for DTM construction. At present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications because of its advantages; LiDAR technology creates a 3D point cloud by acquiring a large number of points. More recently, with developments in image mapping methods, the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by a random algorithm, representing 75, 50, 25, and 5% of the original data set. Using the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can reduce image-based point cloud data sets to the 50% density level while still maintaining DTM quality.
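
Below is a minimal Python sketch of the random reduction step and a DTM accuracy check, not the authors' workflow. scipy's griddata is used as a simple stand-in for the Kriging interpolation reported in the paper, and the input file name and 1 m grid spacing are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)

# points: N x 3 array of (x, y, z) from the image-based point cloud
points = np.loadtxt("point_cloud.xyz")          # hypothetical input file

# Regular grid over the area of interest (assumed 1 m spacing)
gx, gy = np.meshgrid(
    np.arange(points[:, 0].min(), points[:, 0].max(), 1.0),
    np.arange(points[:, 1].min(), points[:, 1].max(), 1.0),
)

# Reference DTM from the full (100%) data set
ref_dtm = griddata(points[:, :2], points[:, 2], (gx, gy), method="linear")

for fraction in (0.75, 0.50, 0.25, 0.05):
    # Random reduction to the given fraction of the original points
    idx = rng.choice(points.shape[0], int(fraction * points.shape[0]), replace=False)
    subset = points[idx]
    dtm = griddata(subset[:, :2], subset[:, 2], (gx, gy), method="linear")
    diff = dtm - ref_dtm
    rmse = np.sqrt(np.nanmean(diff ** 2))       # ignore cells outside the hull
    print(f"{int(fraction * 100):3d}% of points -> RMSE = {rmse:.3f} m")
```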

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 153
699 Indirect Intergranular Slip Transfer Modeling Through Continuum Dislocation Dynamics

Authors: A. Kalaei, A. H. W. Ngan

Abstract:

In this study, a mesoscopic continuum dislocation dynamics (CDD) approach is applied to simulate intergranular slip transfer. The CDD scheme applies an efficient kinematics equation to model the evolution of the "all-dislocation density," which is the line length of dislocations of each character per unit volume. Since tracking every dislocation line can limit the simulation of slip transfer at large scales with many participating dislocations, a coarse-grained, extensive description of dislocations in terms of their density is used to resolve the effect of the collective motion of dislocation lines. For dynamic closure, namely to obtain the dislocation velocity from a velocity law involving the effective glide stress, the mutual elastic interaction of dislocations is calculated using Mura's equation after removing the singularity at the dislocation core. The developed scheme for slip transfer can therefore resolve the effects of elastic interaction and dislocation pile-up, which are important physics omitted in coarser models such as crystal plasticity finite element methods (CPFEMs). In addition, the length and time scales of the simulation are considerably larger than those of molecular dynamics (MD) and discrete dislocation dynamics (DDD) models. The present work successfully simulates how, as dislocation density piles up in front of a grain boundary, the elastic stress on the other side increases, leading to dislocation nucleation and stress relaxation once the local glide stress exceeds the operating stress of dislocation sources seeded on the other side of the boundary. More importantly, the simulation verifies a phenomenological misorientation factor often used by experimentalists, namely that the ease of slip transfer increases with the product of the cosines of the angles between the slip-plane normals and between the slip directions on either side of the grain boundary. Furthermore, to investigate the effects of the critical stress-intensity factor of the grain boundary, dislocation density sources are seeded at different distances from the grain boundary, and the critical applied stress required to trigger slip transfer is studied.
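
A minimal sketch of the misorientation factor mentioned above, assuming it is the usual geometric compatibility parameter m' = cos(psi) * cos(kappa), where psi is the angle between the slip-plane normals and kappa the angle between the slip directions of the incoming and outgoing systems. The slip systems in the example are illustrative choices, not the configurations simulated in the paper.

```python
import numpy as np

def unit(v):
    """Normalize a lattice vector."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def transfer_factor(n_in, d_in, n_out, d_out):
    """Geometric compatibility factor m' = cos(psi) * cos(kappa): the product of
    the cosines of the angles between slip-plane normals and between slip
    directions on either side of the grain boundary."""
    cos_psi = abs(np.dot(unit(n_in), unit(n_out)))
    cos_kappa = abs(np.dot(unit(d_in), unit(d_out)))
    return cos_psi * cos_kappa

# Example: two FCC-type slip systems (Miller indices chosen for illustration)
m_prime = transfer_factor(n_in=[1, 1, 1], d_in=[1, -1, 0],
                          n_out=[1, 1, -1], d_out=[1, 0, 1])
print(f"m' = {m_prime:.3f}   # values close to 1 indicate easy slip transfer")
```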

Keywords: grain boundary, dislocation dynamics, slip transfer, elastic stress

Procedia PDF Downloads 122