Search results for: iterative methods
14162 Eco-Index for Assessing Ecological Disturbances at Downstream of a Hydropower Project
Authors: Chandra Upadhyaya, Arup Kumar Sarma
Abstract:
In the north-eastern part of India, several hydropower projects are being proposed, and execution of some of them has already been initiated. There are controversies surrounding these constructions. The impact of these dams on the downstream part of the rivers needs to be assessed so that the ecosystem and people living downstream are protected by redesigning the projects if it becomes necessary. This may reduce the stresses on the affected ecosystem and the people living downstream. At present, many index-based ecological methods exist to assess the impact on ecology. However, none of these methods is capable of assessing the effect of dam-induced diurnal variation of flow in the downstream reach. An environmental flow methodology based on a hydrological index is needed that can address the effect of dam-induced diurnal flow variation, play an important role in riverine ecosystem management, and provide a qualitative idea about changes in the habitat of aquatic and riparian species. Keywords: ecosystem, environmental flow assessment, entropy, IHA, TNC
Procedia PDF Downloads 384
14161 Contraceptives: Experiences of Agency and Coercion of Young People Living in Colombia
Authors: Paola Montenegro, Maria de los Angeles Balaguera Villa
Abstract:
Contraceptive methods play a fundamental role in preventing unwanted pregnancies and protecting users from sexually transmitted infections (STIs). Despite being known to almost the entire population of reproductive age living in Colombia, there are barriers, practices and complex notions about contraceptives that affect their desired mass use and effectiveness. This work aims to analyse some of the perceptions and practices discussed with young people (13-28 years old) living in Colombia regarding the use of contraceptives in their daily lives, their preferences, needs and perceived side effects. This research also examines the perceived paradox in autonomy that young people experience regarding contraceptive use: on the one hand, its use (or lack of it) is interpreted as an act of self-determination and a primary example of reproductive agency; on the other hand, it was frequently associated with coercion and limited autonomy derived from the gaps in reliable information available to young people, the difficulty of accessing certain preferred methods, and sometimes the coercion exercised by doctors, partners and/or family members. The data and analysis discussed in this work stem from a research project whose objective was to provide information about the needs and preferences in sexual and reproductive health of young people living in Colombia in relation to a possible telehealth service that could close the gap in access to quality care and safe information. Through a mixed methods approach, this study collected 5,736 responses to a virtual survey disseminated nationwide in Colombia and 47 in-person interviews (24 of them with people who were assigned female at birth and 21 with local key stakeholders in the abortion ecosystem). Quantitative data were analyzed using Stata SE Version 16.0, and qualitative analysis was completed in NVivo using thematic analysis. Key findings on contraception use among young people living in Colombia reveal that 85.8% of participants had used a contraceptive method in the last two years, and that the most commonly used methods were condoms, contraceptive pills, the morning-after pill and the withdrawal method. The remaining 14.2% of respondents who declared not to have used contraceptives in the last two years expressed that the main four barriers to access were: "Lack of knowledge about contraceptive methods and where to obtain information and/or access them (13.9%)", "Have had sex with people who have vaginas (10.2%)", "Cost of contraceptive method (8.4%)" and "Difficulties in obtaining medical authorisations (7.6%)". These barriers coincided with the ones used to explain the non-use of contraceptives among young people, which reveals that limitations in information, cost, and quality care represent structural issues that need to be addressed in programmes, services, and public policy. Finally, interviews showed that young people perceive contraceptive use and non-use as an example of reaffirming reproductive agency, and that limitations to this can be explained by widespread incomplete knowledge about how methods work and the prevalence of other social representations of contraception associated with trust, fidelity, and partner preferences, which in the end limit young people’s autonomy. Keywords: contraception, family planning, premarital fertility, unplanned pregnancy
Procedia PDF Downloads 76
14160 Quantifying Processes of Relating Skills in Learning: The Map of Dialogical Inquiry
Authors: Eunice Gan Ghee Wu, Marcus Goh Tian Xi, Alicia Chua Si Wen, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
The Map of Dialogical Inquiry provides a conceptual basis of learning processes. According to the Map, dialogical inquiry motivates complex thinking, dialogue, reflection, and learner agency. For instance, classrooms that incorporated dialogical inquiry enabled learners to construct more meaning in their learning, to engage in self-reflection, and to challenge their ideas with different perspectives. While the Map contributes to the psychology of learning, its qualitative approach makes it hard to track and compare learning processes over time for both teachers and learners. Qualitative approach typically relies on open-ended responses, which can be time-consuming and resource-intensive. With these concerns, the present research aimed to develop and validate a quantifiable measure for the Map. Specifically, the Map of Dialogical Inquiry reflects the eight different learning processes and perspectives employed during a learner’s experience. With a focus on interpersonal and emotional learning processes, the purpose of the present study is to construct and validate a scale to measure the “Relating” aspect of learning. According to the Map, the Relating aspect of learning contains four conceptual components: using intuition and empathy, seeking personal meaning, building relationships and meaning with others, and likes stories and metaphors. All components have been shown to benefit learning in past research. This research began with a literature review with the goal of identifying relevant scales in the literature. These scales were used as a basis for item development, guided by the four conceptual dimensions in the “Relating” aspect of learning, resulting in a pool of 47 preliminary items. Then, all items were administered to 200 American participants via an online survey along with other scales of learning. Dimensionality, reliability, and validity of the “Relating” scale was assessed. Data were submitted to a confirmatory factor analysis (CFA), revealing four distinct components and items. Items with lower factor loadings were removed in an iterative manner, resulting in 34 items in the final scale. CFA also revealed that the “Relating” scale was a four-factor model, following its four distinct components as described in the Map of Dialogical Inquiry. In sum, this research was able to develop a quantitative scale for the “Relating” aspect of the Map of Dialogical Inquiry. By representing learning as numbers, users, such as educators and learners, can better track, evaluate, and compare learning processes over time in an efficient manner. More broadly, this scale may also be used as a learning tool in lifelong learning.Keywords: lifelong learning, scale development, dialogical inquiry, relating, social and emotional learning, socio-affective intuition, empathy, narrative identity, perspective taking, self-disclosure
Procedia PDF Downloads 142
14159 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.Keywords: electric motor, fault detection, frequency features, temporal features
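As a rough illustration of the temporal/frequency feature extraction and SVM classification pipeline described above (a sketch under assumed parameters, not code from the study), the snippet below computes a few common vibration features and cross-validates an SVM; the sampling rate, frequency bands and hyperparameters are placeholders.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def vibration_features(window, fs=12000, bands=((0, 1000), (1000, 3000), (3000, 6000))):
    """Temporal + frequency features for one vibration window (assumed sampling rate fs)."""
    rms = np.sqrt(np.mean(window ** 2))
    crest = np.max(np.abs(window)) / rms
    kurt = kurtosis(window)
    spec = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    band_energy = [np.sum(spec[(freqs >= lo) & (freqs < hi)] ** 2) for lo, hi in bands]
    return np.array([rms, crest, kurt, *band_energy])

def evaluate(windows, labels):
    """windows: (n_samples, window_length) vibration segments; labels: 0 = healthy, 1 = faulty."""
    X = np.vstack([vibration_features(w) for w in windows])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return cross_val_score(clf, X, labels, cv=5).mean()
```

A neural-network classifier could be swapped in for the SVC without changing the feature-extraction step.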
Procedia PDF Downloads 47
14158 Virtual and Augmented Reality Based Heritage Gamification: Basilica of Smyrna in Turkey
Authors: Tugba Saricaoglu
Abstract:
This study discusses the potential representation and interpretation of the Basilica of Smyrna through gamification. Representation can be defined as a key that acts as a converter, providing an interpretation of something according to the person who perceives it. Representation of cultural heritage is a hypothetical and factual approach in terms of its sustainable conservation. Today, both site interpreters and the public of cultural heritage have varying perspectives due to their different demographic, social, and even cultural backgrounds. Additionally, gamification offers a diversity of methods, such as video games, to improve the user's perspective on non-game platforms, contexts, and issues. Hence, cultural heritage and video games were chosen for analysis. Moreover, there are basically different ways of representing cultural heritage, such as digital, physical, and virtual methods, in terms of conservation. Virtual reality (VR) and augmented reality (AR) technologies are two of the contemporary digital methods of heritage conservation. In this study, the 3D-documented ruins of the Basilica will be presented with virtual and augmented reality based technology as a theoretical gamification sample. Also, this paper will focus on two sub-topics: first, the evaluation of video-game platforms applied to cultural heritage sites, and second, the potential of cultural heritage to be represented in video game platforms. The former will cover the analysis of some case(s) with regard to the concepts and representational aspects of cultural heritage. The latter will include the investigation of cultural heritage sites which carry such a potential and their sustainable conservation. Consequently, after mutual collection of information from cultural heritage and video game platforms, a perspective will be provided on the interpretation and representation of cultural heritage, exemplified on the Basilica of Smyrna using VR and AR based technologies. Keywords: Basilica of Smyrna, cultural heritage, digital heritage, gamification
Procedia PDF Downloads 466
14157 A Survey of Feature Selection and Feature Extraction Techniques in Machine Learning
Authors: Samina Khalid, Shamila Nasreen
Abstract:
Dimensionality reduction as a preprocessing step to machine learning is effective in removing irrelevant and redundant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase in the dimensionality of data poses a severe challenge to many existing feature selection and feature extraction methods with respect to efficiency and effectiveness. In the field of machine learning and pattern recognition, dimensionality reduction is an important area in which many approaches have been proposed. In this paper, some widely used feature selection and feature extraction techniques are analyzed with respect to how effectively they can be used to achieve high performance of learning algorithms and ultimately improve the predictive accuracy of a classifier. A brief analysis of dimensionality reduction techniques is presented with the purpose of investigating the strengths and weaknesses of some widely used dimensionality reduction methods. Keywords: age related macular degeneration, feature selection, feature subset selection, feature extraction/transformation, FSA’s, relief, correlation based method, PCA, ICA
Procedia PDF Downloads 496
14156 Radiochemical Purity of 68Ga-BCA-Peptides: Separation of All 68Ga Species with a Single iTLC Strip
Authors: Anton A. Larenkov, Alesya Ya Maruk
Abstract:
In the present study, a highly effective single-strip iTLC method for the determination of the radiochemical purity (RCP) of 68Ga-BCA-peptides was developed (with no double development, change of eluents or other additional manipulation). In this method, iTLC-SG strips and the commonly used eluent aqueous TFA (3-5% (v/v)) are used. The method allows each of the key radiochemical forms of 68Ga (colloidal, bound, ionic) to be determined separately, with the peak separation being no less than 4 σ: Rf = 0.0-0.1 for 68Ga-colloid; Rf = 0.5-0.6 for 68Ga-BCA-peptides; Rf = 0.9-1.0 for ionic 68Ga. The method is simple and fast: for a developing length of 75 mm, only 4-6 min is required (versus 18-20 min for the pharmacopoeial method). The method has been tested on various compounds (including 68Ga-DOTA-TOC, 68Ga-DOTA-TATE, 68Ga-NODAGA-RGD2, etc.). The cross-validation work for every specific form of 68Ga showed good correlation between the developed method and the control (pharmacopoeial) methods. The method can become a convenient and much more informative replacement for pharmacopoeial methods, including HPLC. Keywords: DOTA-TATE, 68Ga, quality control, radiochemical purity, radiopharmaceuticals, TLC
Procedia PDF Downloads 290
14155 A Study on Selection Issues of an Integrated Service Provider Using Analytical Hierarchy Process
Authors: M. Pramila Devi, J. Praveena
Abstract:
In today’s industrial scenario, the expectations and demands of customers are reaching great heights. In order to satisfy customer requirements, users are increasingly turning towards fourth party logistics (4PL) service providers to manage their total supply chain operations. In the present research, the criteria for the selection of integrated service providers are first identified, and an integrated model based on their inter-relationships is developed with the help of shippers, giving an idea of which factors are to be considered, and how they inter-relate, when selecting an integrated service provider. Later, methods for deriving priority weights, viz. the Analytical Hierarchy Process (AHP), are employed for 4PL service provider selection. The priorities of the 4PL alternatives derived using these methods are critically analyzed and compared for effective selection. The use of the model indicates that the computed quantitative evaluation can be applied to improve the precision of the selection. Keywords: analytical hierarchy process, fourth party logistics, priority weight, criteria selection
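For readers unfamiliar with how AHP priority weights are obtained, the sketch below shows the standard principal-eigenvector calculation together with a consistency check; the three-criterion comparison matrix is purely hypothetical and not taken from this study.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # random consistency indices

def ahp_weights(A):
    """Priority weights from a pairwise comparison matrix A (principal eigenvector method)."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)                 # consistency index
    cr = ci / RI[n] if RI.get(n, 0) > 0 else 0.0      # consistency ratio (should stay below ~0.1)
    return w, cr

# Hypothetical 3-criterion comparison (e.g. cost vs. service quality vs. IT capability)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
```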
Procedia PDF Downloads 432
14154 Determining Full Stage Creep Properties from Miniature Specimen Creep Test
Authors: W. Sun, W. Wen, J. Lu, A. A. Becker
Abstract:
In this work, methods for determining creep properties which can be used to represent the full life until failure from miniature specimen creep tests based on analytical solutions are presented. Examples used to demonstrate the application of the methods include a miniature rectangular thin beam specimen creep test under three-point bending and a miniature two-material tensile specimen creep test subjected to a steady load. Mathematical expressions for deflection and creep strain rate of the two specimens were presented for the Kachanov-Rabotnov creep damage model. On this basis, an inverse procedure was developed which has potential applications for deriving the full life creep damage constitutive properties from a very small volume of material, in particular, for various microstructure constitutive regions, e.g. within heat-affected zones of power plant pipe weldments. Further work on validation and improvement of the method is addressed.Keywords: creep damage property, miniature specimen, inverse approach, finite element modeling
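For context, one commonly used uniaxial form of the Kachanov-Rabotnov creep damage model referred to above is shown below (the notation may differ from the authors'):

```latex
\dot{\varepsilon}^{c} = A \left( \frac{\sigma}{1-\omega} \right)^{n}, \qquad
\dot{\omega} = \frac{M \, \sigma^{\chi}}{(1-\omega)^{\phi}}
```

where ω ∈ [0, 1) is the damage variable and A, n, M, χ, φ are the material constants that an inverse procedure of the kind described would seek to identify from the miniature specimen data.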
Procedia PDF Downloads 231
14153 Slosh Investigations on a Spacecraft Propellant Tank for Control Stability Studies
Authors: Sarath Chandran Nair S, Srinivas Kodati, Vasudevan R, Asraff A. K
Abstract:
Spacecraft generally employ liquid propulsion for their attitude and orbital maneuvers or for raising the spacecraft from geo-transfer orbit to geosynchronous orbit. Liquid propulsion systems use either mono-propellants or bi-propellants for generating thrust. These propellants are generally stored in either spherical tanks or cylindrical tanks with spherical end domes. The propellant tanks are provided with a propellant acquisition system/propellant management device along with vanes and their conical mounting structure to ensure propellant availability at the outlet for thrust generation even under a low/zero-gravity environment. Slosh refers to the free-surface oscillations in partially filled containers under external disturbances. In a spacecraft, these can be due to control forces and to varying acceleration. Knowledge of slosh and of the effect of tank internals on it is essential for understanding its stability through control stability studies. Slosh is mathematically represented by a pendulum-mass model, which requires parameters such as slosh frequency, damping, slosh mass and its location, etc. This paper enumerates various numerical and experimental methods used for evaluating the slosh parameters required for representing slosh. Numerical methods such as finite element methods based on linear velocity potential theory and computational fluid dynamics based on the Reynolds-Averaged Navier-Stokes equations are used for the detailed evaluation of slosh behavior in one of the spacecraft propellant tanks used in an Indian space mission. Experimental studies carried out on a scaled-down model are also discussed. Slosh parameters evaluated by the different methods matched very well, and their dispersion bands were finalized based on the experimental studies. It is observed that the presence of internals such as propellant management devices, including the conical support structure, alters the slosh parameters. These internals also offer one order higher damping compared to viscous/smooth-wall damping, which is advantageous for slosh stability. These slosh parameters are provided for establishing slosh margins through control stability studies and for finalizing the spacecraft control system design. Keywords: control stability, propellant tanks, slosh, spacecraft, slosh spacecraft
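As background to the pendulum-mass representation mentioned above, the first-mode lateral slosh frequency of liquid in an upright circular cylindrical tank, a common textbook simplification rather than the authors' actual tank geometry, is often written as

```latex
\omega_{1}^{2} = \frac{g\,\xi_{1}}{R}\,\tanh\!\left(\frac{\xi_{1} h}{R}\right), \qquad \xi_{1} \approx 1.841
```

where R is the tank radius, h the liquid depth and g the axial acceleration; the equivalent pendulum length then follows as L1 = g/ω1².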
Procedia PDF Downloads 245
14152 Exploring Multi-Feature Based Action Recognition Using Multi-Dimensional Dynamic Time Warping
Authors: Guoliang Lu, Changhou Lu, Xueyong Li
Abstract:
In action recognition, previous studies have demonstrated the effectiveness of using multiple features to improve the recognition performance. We focus on two practical issues: i) most studies use a direct way of concatenating/accumulating multiple features to evaluate the similarity between two actions; this direct combination can be too strong an assumption, since each kind of feature can have different dimensions, quantities, etc.; ii) in many studies, the employed classification methods lack a flexible and effective mechanism for adding new feature(s) to the classification. In this paper, we explore a unified scheme based on the recently proposed multi-dimensional dynamic time warping (MD-DTW). Experiments demonstrate the scheme's effectiveness in combining multiple features and its flexibility in adding new feature(s) to increase the recognition performance. In addition, the explored scheme also provides an open architecture for using new advanced classification methods in the future to enhance action recognition. Keywords: action recognition, multi features, dynamic time warping, feature combination
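To make the MD-DTW idea concrete, here is a minimal dynamic-programming sketch that aligns two multivariate feature sequences using a per-frame Euclidean distance; it is an illustrative implementation, not the authors' code, and it ignores refinements such as warping-window constraints or per-feature weighting.

```python
import numpy as np

def md_dtw(X, Y):
    """Multi-dimensional DTW distance between sequences X (n, d) and Y (m, d)."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])   # frame distance over all feature dimensions
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates, labels):
    """Nearest-neighbour action label: template with the smallest MD-DTW distance."""
    return labels[int(np.argmin([md_dtw(query, t) for t in templates]))]
```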
Procedia PDF Downloads 437
14151 An Energy Integration Study While Utilizing Heat of Flue Gas: Sponge Iron Process
Authors: Venkata Ramanaiah, Shabina Khanam
Abstract:
Enormous potential for saving energy is available in coal-based sponge iron plants as these are associated with the high percentage of energy wastage per unit sponge iron production. An energy integration option is proposed, in the present paper, to a coal based sponge iron plant of 100 tonnes per day production capacity, being operated in India using SL/RN (Stelco-Lurgi/Republic Steel-National Lead) process. It consists of the rotary kiln, rotary cooler, dust settling chamber, after burning chamber, evaporating cooler, electrostatic precipitator (ESP), wet scrapper and chimney as important equipment. Principles of process integration are used in the proposed option. It accounts for preheating kiln inlet streams like kiln feed and slinger coal up to 170ᴼC using waste gas exiting ESP. Further, kiln outlet stream is cooled from 1020ᴼC to 110ᴼC using kiln air. The working areas in the plant where energy is being lost and can be conserved are identified. Detailed material and energy balances are carried out around the sponge iron plant, and a modified model is developed, to find coal requirement of proposed option, based on hot utility, heat of reactions, kiln feed and air preheating, radiation losses, dolomite decomposition, the heat required to vaporize the coal volatiles, etc. As coal is used as utility and process stream, an iterative approach is used in solution methodology to compute coal consumption. Further, water consumption, operating cost, capital investment, waste gas generation, profit, and payback period of the modification are computed. Along with these, operational aspects of the proposed design are also discussed. To recover and integrate waste heat available in the plant, three gas-solid heat exchangers and four insulated ducts with one FD fan for each are installed additionally. Thus, the proposed option requires total capital investment of $0.84 million. Preheating of kiln feed, slinger coal and kiln air streams reduce coal consumption by 24.63% which in turn reduces waste gas generation by 25.2% in comparison to the existing process. Moreover, 96% reduction in water is also observed, which is the added advantage of the modification. Consequently, total profit is found as $2.06 million/year with payback period of 4.97 months only. The energy efficient factor (EEF), which is the % of the maximum energy that can be saved through design, is found to be 56.7%. Results of the proposed option are also compared with literature and found in good agreement.Keywords: coal consumption, energy conservation, process integration, sponge iron plant
Procedia PDF Downloads 144
14150 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is very difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data, final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical processes, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at the county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives. The mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches. Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
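A minimal sketch of the data-driven baseline described here (5-fold cross-validation of a Random Forest with RMSEP/MAEP reporting) might look as follows; the feature matrix of climatic records and the yield vector are placeholders, the hyperparameters are illustrative rather than those of the paper, and the errors are expressed as a percentage of mean yield, which is one plausible reading of the reported percentages.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

def evaluate_rf(X_climate, y_yield):
    """5-fold CV of a Random Forest yield model; returns RMSEP and MAEP (% of mean yield)."""
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    pred = cross_val_predict(model, X_climate, y_yield, cv=cv)
    rmsep = np.sqrt(np.mean((pred - y_yield) ** 2)) / np.mean(y_yield) * 100
    maep = np.mean(np.abs(pred - y_yield)) / np.mean(y_yield) * 100
    return rmsep, maep
```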
Procedia PDF Downloads 231
14149 Best Option for Countercyclical Capital Buffer Implementation: Scenarios for Baltic States
Authors: Ģirts Brasliņš, Ilja Arefjevs, Nadežda Tarakanova
Abstract:
The objective of the countercyclical capital buffer is to encourage banks to build up buffers in good times that can be drawn down in bad times. The aim of the report is to assess such decisions by banks on the basis of three approaches: the aggregate credit-to-GDP ratio, credit growth, and banking sector profits. The approaches are implemented for Estonia, Latvia and Lithuania for the time period 2000-2012. The report compares the three approaches and analyses their relevance to the Baltic states by testing the correlation between growth in the studied variables and growth of the corresponding gaps. The methods used in the empirical part of the report are econometric analysis as well as economic analysis, development indicators, relative and absolute indicators and other methods. The research outcome is a cross-Baltic comparison of alternative approaches to establishing or releasing a countercyclical capital buffer by banks and their implications for each Baltic country. Keywords: Basel III, countercyclical capital buffer, banks, credit growth, Baltic states
Procedia PDF Downloads 396
14148 A Guide for Using Viscoelasticity in ANSYS
Authors: A. Fettahoglu
Abstract:
The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of unusual geometry and domain can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely the generalized Maxwell model and the Prony series, which are the two methods used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Afterwards, a guide is given to ease the use of the viscoelasticity tools in ANSYS. Keywords: ANSYS, generalized Maxwell model, finite element method, Prony series, viscoelasticity, viscoelastic material curve fitting
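For reference, the Prony-series form of the relaxation modulus on which the generalized Maxwell representation is based can be written as below; the notation is generic, and the relative moduli and relaxation times are the user-supplied material data typically obtained by curve fitting.

```latex
G(t) = G_{\infty} + \sum_{i=1}^{N} G_{i}\, e^{-t/\tau_{i}}
     = G_{0}\left[ \alpha_{\infty} + \sum_{i=1}^{N} \alpha_{i}\, e^{-t/\tau_{i}} \right],
\qquad \alpha_{i} = \frac{G_{i}}{G_{0}}
```

where G0 is the instantaneous shear modulus and the pairs (αi, τi) are the Prony coefficients.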
Procedia PDF Downloads 603
14147 Research and Application of the Three-Dimensional Visualization Geological Modeling of Mine
Authors: Bin Wang, Yong Xu, Honggang Qu, Rongmei Liu, Zhenji Gao
Abstract:
Today's mining industry is advancing gradually in a digital and visual direction. Three-dimensional visualization geological modeling of a mine is the digital characterization of a mineral deposit and is one of the key technologies of the digital mine. Three-dimensional geological modeling is a technology that combines geological spatial information management, geological interpretation, geological spatial analysis and prediction, geostatistical analysis, entity content analysis and graphic visualization in a three-dimensional environment with computer technology, and is used in geological analysis. In this paper, a three-dimensional geological model of an iron mine is constructed using Surpac, the weighting difference between two estimation methods, the inverse distance power method and ordinary kriging, is studied, and the ore body volume and reserves are simulated and calculated using these two methods. Compared with the actual mine reserves, the result is relatively accurate, providing a scientific basis for mine resource assessment, reserve calculation, mining design and so on. Keywords: three-dimensional geological modeling, geological database, geostatistics, block model
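To illustrate the inverse-distance-power grade estimation that is compared against ordinary kriging in this paper, a simple sketch for a single block centroid is given below; the power exponent, search radius and sample data are hypothetical, not values from the study.

```python
import numpy as np

def idw_grade(block_xyz, sample_xyz, sample_grades, power=2.0, search_radius=50.0):
    """Inverse-distance-power estimate of a block grade from nearby drillhole samples."""
    d = np.linalg.norm(sample_xyz - block_xyz, axis=1)
    near = d < search_radius
    if not np.any(near):
        return np.nan                      # no samples within the search neighbourhood
    d = np.maximum(d[near], 1e-6)          # avoid division by zero for coincident samples
    w = 1.0 / d ** power
    return float(np.sum(w * sample_grades[near]) / np.sum(w))
```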
Procedia PDF Downloads 70
14146 Introduction of Microbial Symbiosis in Genus of Tridacna and Kiwaidae with Insights into Aquaculture
Authors: Jincao Guo
Abstract:
Aquaculture plays a significant role in the diet of people in many regions. However, problems such as bioaccumulation have arisen with the rapidly growing industry due to a lack of control in the feeding process, which brings uncertainty to the quality of the products. This paper tackles the problem by introducing the symbiosis of the giant clam (Tridacna) with photosynthetic algae and of the yeti crab (Kiwaidae) with chemosynthetic bacteria in molecular and developmental detail. By combining the knowledge gained from the two models and past studies, innovative ideas, such as using mass selection methods to domesticate and farm those symbiotic species, as well as improvements to current farming methods, such as introducing algae feeding, are discussed. Further studies are needed, but experiments are worth conducting since this increases the variety of choices for consumers and can potentially improve the quality and efficiency of aquaculture. Keywords: the giant clam Tridacna, yeti crab Kiwaidae, autotroph microbes, microbial symbiosis, aquaculture, bivalves, crustaceans, mollusk, photosynthesis, chemosynthesis
Procedia PDF Downloads 75
14145 Applications of AI, Machine Learning, and Deep Learning in Cyber Security
Authors: Hailyie Tekleselase
Abstract:
Deep learning is increasingly used as a building block of security systems. However, neural networks are hard to interpret and typically opaque to the practitioner. This paper presents a detailed survey of computing methods in cyber security and analyzes the prospects of enhancing cyber security capabilities by means of accelerating the intelligence of security systems. There are many AI-based applications used in industrial scenarios such as the Internet of Things (IoT), smart grids, and edge computing. Machine learning technologies require a training process, which introduces protection problems in the training data and algorithms. We present machine learning techniques currently applied to the detection of intrusion, malware, and spam. Our conclusions are based on an extensive review of the literature as well as on experiments performed on real enterprise systems and network traffic. We conclude that these problems can be solved successfully only when methods of artificial intelligence are used alongside human experts or operators. Keywords: artificial intelligence, machine learning, deep learning, cyber security, big data
Procedia PDF Downloads 126
14144 Aerodynamic Design an UAV with Application on the Spraying Agricola with Method of Genetic Algorithm Optimization
Authors: Saul A. Torres Z., Eduardo Liceaga C., Alfredo Arias M.
Abstract:
Agriculture is one of the world's main sources of economic activity and of meeting global needs, so the care of crops is extremely important for owners and workers; one of the major causes of loss of product is pest infestation by different types of organisms. We seek to develop a UAV for agricultural spraying at a maximum altitude of 5000 meters above sea level, with a payload of 100 liters of fumigant. For developing the aerodynamic design of the aircraft, computational tools such as the "Vortex Lattice Athena" software, "MATLAB", "ANSYS FLUENT" and the "XFoil" package, among others, are used. Structured programming and an exhaustive analysis of optimization and search methods are also employed. The results have a very low margin of error, and the multi-objective problems can be helpful for future developments. The program has 10 functions developed in MATLAB; these functions are related to each other to enable the development of the design, and all of them are controlled by the principal code "Master.m". Keywords: aerodynamic design, optimization, genetic algorithm, multi-objective problem, stability, vortex
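As a generic illustration of the genetic-algorithm optimization loop mentioned above (not the authors' MATLAB implementation), the sketch below evolves a vector of design variables against a placeholder aerodynamic objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Placeholder objective: in the real problem this would call the aerodynamic model
    # (e.g. lift-to-drag ratio for a candidate wing geometry); here it is a dummy quadratic.
    return -np.sum((x - 0.5) ** 2)

def genetic_algorithm(n_vars=6, pop_size=40, generations=100, mut_rate=0.1):
    pop = rng.random((pop_size, n_vars))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_vars)                            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(n_vars) < mut_rate                     # mutation
            child[mask] = rng.random(mask.sum())
            children.append(child)
        pop = np.array(children)
    return pop[np.argmax([fitness(ind) for ind in pop])]
```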
Procedia PDF Downloads 532
14143 Evaluation of Different Liquid Scintillation Counting Methods for 222Rn Determination in Waters
Authors: Jovana Nikolov, Natasa Todorovic, Ivana Stojkovic
Abstract:
Monitoring of 222Rn in drinking or surface waters, as well as in groundwater has been performed in connection with geological, hydrogeological and hydrological surveys and health hazard studies. Liquid scintillation counting (LSC) is often preferred analytical method for 222Rn measurements in waters because it allows multiple-sample automatic analysis. LSC method implies mixing of water samples with organic scintillation cocktail, which triggers radon diffusion from the aqueous into organic phase for which it has a much greater affinity, eliminating possibility of radon emanation in that manner. Two direct LSC methods that assume different sample composition have been presented, optimized and evaluated in this study. One-phase method assumed direct mixing of 10 ml sample with 10 ml of emulsifying cocktail (Ultima Gold AB scintillation cocktail is used). Two-phase method involved usage of water-immiscible cocktails (in this study High Efficiency Mineral Oil Scintillator, Opti-Fluor O and Ultima Gold F are used). Calibration samples were prepared with aqueous 226Ra standard in glass 20 ml vials and counted on ultra-low background spectrometer Quantulus 1220TM equipped with PSA (Pulse Shape Analysis) circuit which discriminates alpha/beta spectra. Since calibration procedure is carried out with 226Ra standard, which has both alpha and beta progenies, it is clear that PSA discriminator has vital importance in order to provide reliable and precise spectra separation. Consequentially, calibration procedure was done through investigation of PSA discriminator level influence on 222Rn efficiency detection, using 226Ra calibration standard in wide range of activity concentrations. Evaluation of presented methods was based on obtained efficiency detections and achieved Minimal Detectable Activity (MDA). Comparison of presented methods, accuracy and precision as well as different scintillation cocktail’s performance was considered from results of measurements of 226Ra spiked water samples with known activity and environmental samples.Keywords: 222Rn in water, Quantulus1220TM, scintillation cocktail, PSA parameter
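For context, the minimum detectable activity used when evaluating LSC methods is commonly estimated with the Currie-type expression below; this is a standard textbook form, not necessarily the exact formula used by the authors.

```latex
\mathrm{MDA} = \frac{2.71 + 4.65\sqrt{N_{B}}}{\varepsilon \cdot t \cdot V}
```

where N_B is the background count accumulated over the counting time t, ε is the counting efficiency and V is the sample volume.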
Procedia PDF Downloads 201
14142 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge
Authors: K. Eryilmaz, G. Mercanoglu
Abstract:
Aim: Purification of the final product, which is the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. Among these methods, the two most commonly used methods are C18 solid phase extraction (SPE) and weak cation exchanger cartridge elution. SPE C18 solid phase extraction method yields high purity final product, while elution from the weak cation exchanger cartridge is pH dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for the lanthanide-labeled peptide compound in cases where the desired radionuclidic and radiochemical purity of the final product can not be achieved because of pH problem or colloidal impurity. Material and Methods: For colloidal impurity formation, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume:13 mL, total activity: 40 mCi). The resulting mixture was trapped in SPE-C18 cartridge. The cartridge was washed with 10 ml saline to remove impurities to the waste vial. The product trapped in the cartridge was eluted with 2 ml of 50% ethanol and collected to the final product vial via passing through a 0.22μm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by HPLC method. (column: ACE C18-100A. 3µm. 150 x 3.0mm, mobile phase: Water-Acetonitrile-Trifluoro acetic acid (75:25:1), flow rate: 0.6 mL/min). Results: UV and radioactivity detector results in HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/ colloidal impurity mixture by purification method. Conclusion: The improved purification method can be used as an additional method to remove impurities that may result from the lanthanide-peptide synthesis in which the weak cation exchange purification technique is used as the last step. The purification of the final product and the GMP compliance (the final aseptic filtration and the sterile disposable system components) are two major advantages.Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis
Procedia PDF Downloads 161
14141 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition
Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed
Abstract:
The rapid development experienced by India requires a huge amount of energy. Actual supply capacity additions have been consistently lower than the targets set by the government. According to the World Bank, 40% of residences are without electricity. In the 12th five-year plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW is solar and 2.1 GW is from small hydro projects, with the rest compensated by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. Since India, according to IEC 61400-1, belongs to class IV wind conditions, it is not possible to set up large-scale wind turbines at every location. So, the best choice is to go for a small-scale wind turbine at lower height, which will have a good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine is designed. Various airfoil data are reviewed for the selection of the airfoil for the blade profile. An airfoil suited to low wind conditions, i.e., at low Reynolds number, is selected based on the coefficients of lift and drag and the angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory is implemented. The performance of the blade is estimated using BEM theory, in which the axial induction factor and angular induction factor are optimized using an iterative technique. Rotor performance is estimated for the designed blade specifically for low wind conditions. The power production of the rotor is determined at different wind speeds for a particular pitch angle of the blade. A pitch of 15° and a velocity of 5 m/s give a good cut-in speed of 2 m/s and a power output of around 350 Watts. The tip speed ratio of the blade is taken as 6.5, for which the coefficient of performance of the rotor is calculated as 0.35, which is a good, acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise moment and flapwise moment are considered, which cause bending stress at the root of the blade. Various load cases mentioned in IEC 61400-2 are calculated and checked against the partial safety factors of the wind turbine blade. Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil
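The iterative induction-factor solution referred to above typically follows the classical BEM fixed-point scheme; a minimal sketch for one blade element is given below (illustrative only — the lift/drag polar, local solidity and tip-loss handling are assumptions, not the authors' data).

```python
import numpy as np

def bem_element(tsr_local, solidity, polar, pitch_twist, tol=1e-6, max_iter=200):
    """Iterate axial (a) and angular (a_prime) induction factors for one blade element.
    polar(alpha) must return (Cl, Cd) for an angle of attack alpha in radians."""
    a, a_prime = 0.0, 0.0
    for _ in range(max_iter):
        phi = np.arctan2(1.0 - a, (1.0 + a_prime) * tsr_local)     # inflow angle
        alpha = phi - pitch_twist
        cl, cd = polar(alpha)
        cn = cl * np.cos(phi) + cd * np.sin(phi)                   # normal force coefficient
        ct = cl * np.sin(phi) - cd * np.cos(phi)                   # tangential force coefficient
        a_new = 1.0 / (4.0 * np.sin(phi) ** 2 / (solidity * cn) + 1.0)
        ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (solidity * ct) - 1.0)
        if abs(a_new - a) < tol and abs(ap_new - a_prime) < tol:
            return a_new, ap_new
        a, a_prime = a_new, ap_new
    return a, a_prime
```

In practice, a Prandtl tip-loss factor and the Glauert correction for highly loaded rotors would be added around this loop.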
Procedia PDF Downloads 337
14140 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines
Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl
Abstract:
Large-scale machine tools for the manufacturing of large work pieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. That is counterproductive to the needs of sustainable manufacturing as it leads to higher resource consumption both in material and in energy. Recent research activities have led to higher resource efficiency by radical mass reduction that rely on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction method parameters according to the current state of the machine. The paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. The paper starts with the theoretical introduction of the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of the pose-dependent dynamic behavior is corroborated by the results of the experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art on pose-dependent dynamic machine tool models and the modal investigation of an FE-model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The description of the approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools that provides the necessary input to the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations is the outlook of the paper.Keywords: dynamic behavior, lightweight, machine tool, pose-dependency
Procedia PDF Downloads 459
14139 Development of a Geomechanical Risk Assessment Model for Underground Openings
Authors: Ali Mortazavi
Abstract:
The main objective of this research project is to delve into a multitude of geomechanical risks associated with the various mining methods employed within the underground mining industry. Controlling geotechnical design parameters and operational factors affecting the selection of suitable mining techniques for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of the in-situ rock mass, the complicated boundary conditions and the operational complexities associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is always a threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major design component of all underground mines and basically dominates the safety of an underground mine. With regard to the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the rock mass in-situ conditions. The focus of this research is on developing a methodology which enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., rock in-situ properties, design method, governing boundary conditions such as in-situ stress and groundwater, etc.). Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering
Procedia PDF Downloads 145
14138 A Stepwise Approach to Automate the Search for Optimal Parameters in Seasonal ARIMA Models
Authors: Manisha Mukherjee, Diptarka Saha
Abstract:
Reliable forecasts of univariate time series data are often necessary in several contexts. ARIMA models are quite popular among practitioners in this regard. Hence, choosing correct parameter values for ARIMA is a challenging yet imperative task. Thus, a stepwise algorithm is introduced to provide automatic and robust estimates of the parameters (p, d, q)(P, D, Q) used in seasonal ARIMA models. This process is focused on improving the overall quality of the estimates, and it alleviates the problems induced by the unidimensional nature of the methods that are currently used, such as auto.arima. The fast and automated search of the parameter space also ensures reliable estimates of the parameters that possess several desirable qualities, consequently resulting in higher test accuracy, especially in the case of noisy data. After vigorous testing on real as well as simulated data, the algorithm not only performs better than current state-of-the-art methods, it also completely obviates the need for human intervention due to its automated nature. Keywords: time series, ARIMA, auto.arima, ARIMA parameters, forecast, R function
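A compact version of the kind of automated parameter search discussed here can be written with statsmodels as below; this is an illustrative exhaustive, AIC-based sketch rather than the stepwise algorithm proposed in the paper, and the grid limits are arbitrary.

```python
import itertools
import warnings
from statsmodels.tsa.statespace.sarimax import SARIMAX

def search_sarima(y, s=12, max_p=2, max_q=2, max_P=1, max_Q=1, d=1, D=1):
    """Select (p, d, q)(P, D, Q, s) by minimum AIC over a small grid."""
    best = (None, float("inf"))
    for p, q, P, Q in itertools.product(range(max_p + 1), range(max_q + 1),
                                        range(max_P + 1), range(max_Q + 1)):
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                fit = SARIMAX(y, order=(p, d, q),
                              seasonal_order=(P, D, Q, s)).fit(disp=False)
            if fit.aic < best[1]:
                best = ((p, d, q, P, D, Q), fit.aic)
        except Exception:
            continue                      # skip non-invertible / non-converging candidates
    return best
```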
Procedia PDF Downloads 165
14137 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano
Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das
Abstract:
Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as low cost and abundant availability. However, its practical usage has been hindered because of its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for the purpose of hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter into the interstitial positions of the nanoclusters. Rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme’s DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules with these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the right energy window that is ideal for hydrogen storage. These IE values are further verified by using high-level coupled-cluster calculations with non-iterative triples corrections, i.e. CCSD(T) (which is considered to be a highly accurate quantum chemical method), thereby confirming the accuracy of our dispersion-correction-incorporated DFT calculations. The significance of the polarization and dispersion energy in the binding of the H2 molecules is confirmed by performing energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab-initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (-123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen from neutral and charged Mg nanocluster systems indicate the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that will be able to adsorb molecular hydrogen via this weak Mg-H2 interaction, rather than the strong Mg-H bonding. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem. Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption
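For clarity, the per-molecule interaction energy quoted in the abstract is normally defined along the lines of the expression below (a standard convention; the sign and reference energies may differ in the original work):

```latex
E_{\mathrm{IE}} = \frac{1}{n}\Big[ E\!\left(\mathrm{Mg}_{m}^{(+)}\right) + n\,E\!\left(\mathrm{H}_{2}\right) - E\!\left(\mathrm{Mg}_{m}^{(+)}(\mathrm{H}_{2})_{n}\right) \Big]
```

so that a positive value corresponds to favourable (exothermic) H2 binding.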
Procedia PDF Downloads 415
14136 Integrating RAG with Prompt Engineering for Dynamic Log Parsing and Anomaly Detections
Authors: Liu Lin Xin
Abstract:
With the increasing complexity of systems, log parsing and anomaly detection have become crucial for maintaining system stability. However, traditional methods often struggle with adaptability and accuracy, especially when dealing with rapidly evolving log content and unfamiliar domains. To address these challenges, this paper proposes an approach that integrates Retrieval-Augmented Generation (RAG) technology with prompt engineering for large language models, applied specifically in LogPrompt. This approach enables dynamic log parsing and intelligent anomaly detection by combining real-time information retrieval with prompt optimization. The proposed method significantly enhances the adaptability of log analysis and improves the interpretability of results. Experimental results on several public datasets demonstrate the method's superior performance, particularly in scenarios lacking training data, where it significantly outperforms traditional methods. This paper introduces a novel technical pathway for log parsing and anomaly detection, showcasing its substantial theoretical value and practical potential. Keywords: log parsing, anomaly detection, RAG, prompt engineering, LLMs
Procedia PDF Downloads 33
14135 Improved Pitch Detection Using Fourier Approximation Method
Authors: Balachandra Kumaraswamy, P. G. Poonacha
Abstract:
Automatic Music Information Retrieval has been one of the challenging topics of research for a few decades now, with several interesting approaches reported in the literature. In this paper, we develop a pitch extraction method based on a finite Fourier series approximation to the given window of samples. We then estimate pitch as the fundamental period of the finite Fourier series approximation to the given window of samples. This method uses an analysis of the strength of the harmonics present in the signal to reduce octave as well as harmonic errors. The performance of our method is compared with three of the best-known methods for pitch extraction, namely, the Yin, Windowed Special Normalization of the Auto-Correlation Function and Harmonic Product Spectrum methods of pitch extraction. Our study with artificially created signals as well as music files shows that the Fourier approximation method gives a much better estimate of pitch with fewer octave and harmonic errors. Keywords: pitch, fourier series, yin, normalization of the auto-correlation function, harmonic product, mean square error
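As a rough illustration of pitch selection by harmonic strength (in the spirit of the approach described, though not the authors' finite-Fourier-series formulation), the sketch below scores each candidate fundamental frequency by a weighted sum of the magnitudes of its first few harmonics in the DFT of the window; the window length, harmonic count and weighting are assumptions.

```python
import numpy as np

def estimate_pitch(window, fs, fmin=50.0, fmax=1000.0, n_harmonics=5):
    """Pick the candidate fundamental whose harmonics carry the most spectral energy."""
    spec = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    candidates = freqs[(freqs >= fmin) & (freqs <= fmax)]

    def harmonic_strength(f0):
        ks = range(1, n_harmonics + 1)
        idx = [np.argmin(np.abs(freqs - k * f0)) for k in ks]
        # weight higher harmonics less to discourage octave (sub-harmonic) errors
        return sum(spec[i] / k for k, i in zip(ks, idx))

    return max(candidates, key=harmonic_strength)

# Example: a 440 Hz tone with one overtone, sampled at 16 kHz
fs = 16000
t = np.arange(2048) / fs
tone = np.sin(2 * np.pi * 440 * t) + 0.4 * np.sin(2 * np.pi * 880 * t)
print(estimate_pitch(tone, fs))   # expected to be close to 440 Hz
```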
Procedia PDF Downloads 412
14134 A Comparative Study of Malware Detection Techniques Using Machine Learning Methods
Authors: Cristina Vatamanu, Doina Cosovan, Dragos Gavrilut, Henri Luchian
Abstract:
In the past few years, the amount of malicious software increased exponentially and, therefore, machine learning algorithms became instrumental in identifying clean and malware files through semi-automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate. Another challenge is to minimize the time needed for the machine learning algorithm to do so. This paper presents a comparative study between different machine learning techniques such as linear classifiers, ensembles, decision trees and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability. Keywords: ensembles, false positives, feature selection, one side class algorithm
Procedia PDF Downloads 292
14133 The Sustainable Development for Coastal Tourist Building
Authors: D. Avila
Abstract:
The tourism industry is a phenomenon with a growing presence in international socio-economic dynamics, which in most cases exceeds the control parameters of the various environmental and sustainability regulations for existing resources. Because of this, the effects on the natural environment at the regional and national levels represent a challenge, for which a number of strategies are necessary to minimize the environmental impact generated by the occupation of the territory. Tourist hotel buildings and sustainable development in the coastal zone have an important impact on the environment and on the physical and psychological health of the inhabitants. Environmental quality is associated with human comfort and the sustainable development of natural resources; applied to hotel architecture, this concept involves the incorporation of new demands throughout the entire construction process of a building, changing the customs of developers and users. The methodology developed provides an initial analysis to determine and rank the different tourist buildings; with this, it will be feasible to establish methods of study and environmental impact assessment. Finally, it is necessary to establish an overview of the best way to implement tourism development on the coast, containing guidelines to improve and protect the natural environment. This paper analyzes the parameters and strategies to reduce the environmental impacts derived from tourism developments on the coast, through a series of recommendations towards sustainability, in the context of Bahia de Banderas, Puerto Vallarta, Jalisco. The environmental impact caused by the implementation of tourism development in a coastal environment calls for a series of processes, ranging from the identification of impacts to their prediction and evaluation. For this purpose, different techniques and valuation procedures are described below. Identification of impacts: methods for the identification of damage caused to the environment pursue the general purpose of obtaining a group of negative indicators that are subsequently used in the study of environmental impact. There are several systematic methods to identify the impacts caused by human activities. The present work develops a procedure based on and adapted from public works and urban planning ministry references for environmental impact studies; the representative methods are contrast lists, matrices and networks, and the method of transparencies and superposition of maps. Keywords: environmental impact, physical health, sustainability, tourist building
Procedia PDF Downloads 329