Search results for: NDT methods
13968 Drone On-Time Obstacle Avoidance for Static and Dynamic Obstacles
Authors: Herath M. P. C. Jayaweera, Samer Hanoun
Abstract:
Path planning for on-time obstacle avoidance is an essential and challenging task that enables drones to achieve safe operation in any application domain. The level of challenge increases significantly when the drone is following a ground mobile entity (GME), mainly due to the changes in direction and magnitude of the GME's velocity in dynamic and unstructured environments. Force field techniques are the most widely used obstacle avoidance methods due to their simplicity, ease of use, and potential to be adopted for three-dimensional dynamic environments. However, the existing force field obstacle avoidance techniques suffer many drawbacks, including their tendency to generate longer routes when the obstacles are sideways of the drone's route, poor ability to find the shortest flyable path, propensity to fall into local minima, production of a non-smooth path, and a high failure rate in the presence of symmetrical obstacles. To overcome these shortcomings, this paper proposes an on-time three-dimensional obstacle avoidance method for drones to effectively and efficiently avoid dynamic and static obstacles in unknown environments while pursuing a GME. This on-time obstacle avoidance technique generates velocity waypoints for an obstacle-free and efficient path based on the shape of the encountered obstacles. The method can be utilized on most types of drones that have basic distance measurement sensors and autopilot-supported flight controllers. The proposed obstacle avoidance technique is validated and evaluated against existing force field methods for different simulation scenarios in Gazebo and ROS-supported PX4-SITL. The simulation results show that the proposed obstacle avoidance technique outperforms the existing force field techniques and is better suited for real-world applications.
Keywords: drones, force field methods, obstacle avoidance, path planning
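As context for the drawbacks listed above, the classical force field (artificial potential field) baseline that the paper improves on combines an attractive pull toward the goal with a repulsive push from obstacles. A standard formulation (Khatib's; the gain symbols are generic, not the paper's notation) is:

```latex
F_{att}(x) = -k_{att}\,(x - x_{goal}), \qquad
F_{rep}(x) =
\begin{cases}
k_{rep}\left(\dfrac{1}{\rho(x)} - \dfrac{1}{\rho_0}\right)\dfrac{1}{\rho(x)^2}\,\nabla\rho(x), & \rho(x) \le \rho_0 \\[2mm]
0, & \rho(x) > \rho_0
\end{cases}
```

where ρ(x) is the distance to the nearest obstacle and ρ₀ its influence radius; the local minima criticized in the abstract arise wherever the two terms cancel.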
Procedia PDF Downloads 93
13967 Software Engineering Inspired Cost Estimation for Process Modelling
Authors: Felix Baumann, Aleksandar Milutinovic, Dieter Roller
Abstract:
Up to this point, business process management projects in general, and business process modelling projects in particular, could not rely on a practical and scientifically validated method to estimate cost and effort. The model development phase especially is not covered by any cost estimation method or model. Later phases of business process modelling, starting with implementation, are covered by initial solutions discussed in the literature. This article proposes a method to fill this gap by deriving a cost estimation method from available methods in a closely similar domain, namely software development/software engineering. As we show, software development is closely similar to process modelling. After proposing this method, different ideas for further analysis and validation of the method are presented. We derive the method from COCOMO II and Function Point analysis, which are established methods of effort estimation in the domain of software development. For this, we lay out similarities between the software development process and the process of process modelling, which is a phase of the Business Process Management life-cycle.
Keywords: COCOMO II, business process modeling, cost estimation method, BPM COCOMO
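For reference, the COCOMO II post-architecture effort equation from which such a derivation would start is the standard published form (A and B are calibration constants, EM the effort multipliers, SF the scale factors):

```latex
\text{Effort}_{PM} = A \times \text{Size}^{E} \times \prod_{i=1}^{17} EM_i,
\qquad E = B + 0.01 \sum_{j=1}^{5} SF_j
```

The paper's contribution would then amount to re-interpreting Size, the multipliers, and the scale factors in terms of process-model properties rather than source code.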
Procedia PDF Downloads 440
13966 Modelling of Heat Generation in an 18650 Lithium-Ion Battery Cell under Varying Discharge Rates
Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery
Abstract:
Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15-35 °C to operate optimally. Heat is generated (Q) internally within the batteries during both the charging and discharging phases, and this can be quantified using several standard methods. The most common method of calculating a battery's heat generation is the addition of the joule heating effects and the entropic changes across the battery. Such values can be derived by identifying the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T), and the rate of change of the open-circuit voltage with respect to temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis is conducted utilizing several non-linear mathematical function fits, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3~7), exponential (n = 2), and power function. The resulting fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are checked for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial results show low deviation between the experimental and CFD temperature plots. As such, the heat generation function formulated here could be utilized more easily for larger battery applications than other available methods.
Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop
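The two-term heat generation model the abstract describes, joule (irreversible) plus entropic (reversible) heating, is commonly written as follows, with I taken positive on discharge:

```latex
Q = \underbrace{I\,(V_{OC} - V)}_{\text{joule (irreversible)}}
  + \underbrace{I\,T\,\frac{dV_{OC}}{dT}}_{\text{entropic (reversible)}}
```

This is the simplified form of the Bernardi energy balance; sign conventions for I and dV_OC/dT vary between papers.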
Procedia PDF Downloads 95
13965 The Training Demands of Nursing Assistants on Urinary Incontinence in Nursing Homes: A Mixed Methods Study
Authors: Lulu Liao, Huijing Chen, Yinan Zhao, Hongting Ning, Hui Feng
Abstract:
Urinary tract infection rate is an important index of care quality in nursing homes. The aim of this study is to understand nursing assistants' current knowledge of and attitudes toward urinary incontinence and to explore related stakeholders' viewpoints on urinary incontinence training. This explanatory sequential study used the Knowledge, Attitude, and Practice (KAP) model and adult learning theories as its conceptual framework. The researchers collected data from 509 nursing assistants in sixteen nursing homes in Hunan province, China. A questionnaire survey assessed the nursing assistants' knowledge of and attitudes toward urinary incontinence. On the basis of the quantitative research, combined with focus groups, training demands were identified that nurse managers should address to improve nursing assistants' professional practice in urinary incontinence care. Most nursing assistants had poor knowledge (14.0 ± 4.18) but positive attitudes (35.5 ± 3.19) toward urinary incontinence. There was a significant positive correlation between urinary incontinence knowledge and nursing assistants' years of work and educational level, and between urinary incontinence attitude and education level (p < 0.001). Despite a general awareness of the importance of preventing urinary tract infections, not all nurse managers fully valued training in urinary incontinence compared with daily care training. The nursing assistants required simple educational resources to equip them with skills to address problems of urinary incontinence. The variety of learning methods also highlighted the need for educational materials, and the nursing assistants showed a strong interest in online learning. Related educational material should be developed to meet the learning needs of nursing assistants and to provide suitable training methods for planned quality improvement in urinary incontinence care.
Keywords: mixed methods, nursing assistants, nursing homes, urinary incontinence
Procedia PDF Downloads 137
13964 Empirical Exploration for the Correlation between Class Object-Oriented Connectivity-Based Cohesion and Coupling
Authors: Jehad Al Dallal
Abstract:
Attributes and methods are the basic contents of an object-oriented class. The connectivity among these class members and the relationship between the class and other classes play an important role in determining the quality of an object-oriented system. Class cohesion evaluates the degree of relatedness of class attributes and methods, whereas class coupling refers to the degree to which a class is related to other classes. Researchers have proposed several class cohesion and class coupling measures; however, the correlation between class coupling and class cohesion measures has not been thoroughly studied. In this paper, using classes of three open-source Java systems, we empirically investigate the correlation between several measures of connectivity-based class cohesion and coupling. Four connectivity-based cohesion measures and eight coupling measures are considered in the empirical study. The results show that class connectivity-based cohesion and coupling internal quality attributes are inversely correlated, and the strength of the correlation depends highly on the cohesion and coupling measurement approaches.
Keywords: object-oriented class, software quality, class cohesion measure, class coupling measure
Procedia PDF Downloads 322
13963 Alternate Methods to Visualize 2016 U.S. Presidential Election Result
Authors: Hong Beom Hur
Abstract:
Politics in America is polarized. The best illustration of this is the 2016 presidential election result map. States with megacities, like California, New York, Illinois, Virginia, and others, are marked blue to signify the color of the Democratic party. States located inland and in the south, like Texas, Florida, Tennessee, Kansas and others, are marked red to signify the color of the Republican party. Such a stark difference between two colors, red and blue, combined with the geolocation of each state and its borders, conveys one central message: America is divided into two colors, between urban Democrats and rural Republicans. This paper seeks to challenge that visualization by pointing out its limitations and searching for alternative ways to visualize the 2016 election result. One such limitation is that the geolocations of the states and the state borderlines prevent the visualization of population density. As a result, the election result map does not convey the fact that Clinton won the popular vote and only accentuates the voting patterns of urban and rural states. The paper examines whether an alternative narrative can be observed by factoring the population into the size of each state and manipulating the state borderlines according to this normalization. Yet another alternative narrative may be reached by scaling the size of each state by its number of electoral college votes and visualizing that number. Other alternatives are discussed but not implemented in the visualization; such methods include dividing the land of America into about 120 million cubes, each representing a voter, or into 300 million cubes for the whole population. By exploring these alternative methods of visualizing the politics of the 2016 election map, the public may be able to question whether it is possible to be free from the divided-country narrative when interpreting the election map and to look at both parties as a story of the United States of America.
Keywords: 2016 U.S. presidential election, data visualization, population scale, geo-political
Procedia PDF Downloads 121
13962 Radiation Usage Impact on Anti-Nutritional Compounds (Antitrypsin and Phytic Acid) of Livestock and Poultry Foods
Authors: Mohammad Khosravi, Ali Kiani, Behroz Dastar, Parvin Showrang
Abstract:
A review was carried out on important anti-nutritional compounds of livestock and poultry foods and the effect of radiation usage on them. Nowadays, with advancements in technology, different methods have been considered for the optimum usage of nutrients in livestock and poultry foods. Steaming, extruding, pelleting, and the use of chemicals are the most common and popular methods in food processing. The use of radiation in food processing research in the livestock and poultry industry is currently highly regarded. Ionizing beams (electrons, gamma) and non-ionizing beams (microwave and infrared) are the most usable rays in animal food processing. In recent research, these beams have been used to remove or reduce anti-nutritional factors and microbial contamination and to improve the digestibility of nutrients in poultry and livestock food. The evidence presented will help researchers to recognize techniques of relevance to them. Simplification of some of these techniques, especially in developing countries, must be addressed so that they can be used more widely.
Keywords: antitrypsin, gamma, anti-nutritional components, phytic acid, radiation
Procedia PDF Downloads 343
13961 Study of Magnetic Nanoparticles’ Endocytosis at a Single Cell Level
Authors: Jefunnie Matahum, Yu-Chi Kuo, Chao-Ming Su, Tzong-Rong Ger
Abstract:
Magnetic cell labeling is of great importance for various applications in biomedical fields such as cell separation and cell sorting. While analytical methods for the quantification of cell uptake of magnetic nanoparticles (MNPs) are already well established, image analysis at the single-cell level still needs more characterization. This study reports an alternative, non-destructive method for quantifying single-cell uptake of positively charged MNPs. Magnetophoresis experiments were performed to calculate the number of MNPs in a single cell. The mobility of magnetic cells and the area of intracellular MNPs stained by Prussian blue were quantified by image processing software. ICP-MS experiments were also performed to confirm the internalization of MNPs into cells. Initial results showed that magnetic cells incubated at 100 µg MNPs/mL and 50 µg MNPs/mL moved at 18.3 and 16.7 µm/sec, respectively. There is also an increasing trend in the number and area of intracellular MNPs with increasing concentration. These results could be useful in assessing nanoparticle uptake at the single-cell level.
Keywords: magnetic nanoparticles, single cell, magnetophoresis, image analysis
Procedia PDF Downloads 333
13960 Electromagnetic Wave Propagation Equations in 2D by Finite Difference Method
Authors: N. Fusun Oyman Serteller
Abstract:
In this paper, techniques to solve time-dependent electromagnetic wave propagation equations based on the Finite Difference Method (FDM) are proposed, and the results are compared with the Finite Element Method (FEM) in 2D while discussing some special simulation examples. Here, 2D dynamical wave equations for lossy media, even with a constant source, are discussed to establish symbolic manipulation of wave propagation problems. The main objective of this contribution is to introduce a comparative study of the two numerical methods and to show that both can be applied effectively and efficiently to all types of wave propagation problems, both linear and nonlinear, by using symbolic computation. However, the results show that the FDM is more appropriate for solving the nonlinear cases in the symbolic solution. Furthermore, some specific complex-domain examples of the comparison of the electromagnetic wave equations are considered. Calculations are performed through Mathematica software, making some useful contributions to the program and leveraging symbolic evaluations of FEM and FDM.
Keywords: finite difference method, finite element method, linear-nonlinear PDEs, symbolic computation, wave propagation equations
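As an illustration of the FDM side of such a comparison, a minimal explicit leapfrog scheme for a 2D lossy wave equation can be sketched as follows; the grid size, loss coefficient, and initial pulse are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Minimal explicit leapfrog FDM sketch for a 2D lossy wave equation
#   u_tt + sigma * u_t = c^2 * (u_xx + u_yy)
# on a square grid with zero (Dirichlet) boundaries.
nx = ny = 101
dx = 1e-2
c, sigma = 1.0, 0.5
dt = 0.5 * dx / (c * np.sqrt(2))             # comfortably inside the CFL limit
r2 = (c * dt / dx) ** 2

u_prev = np.zeros((nx, ny))
u = np.zeros((nx, ny))
u[nx // 2, ny // 2] = 1e-3                   # point pulse at the center

for _ in range(500):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    u_next = (2.0 * u - (1.0 - sigma * dt / 2.0) * u_prev + r2 * lap) \
             / (1.0 + sigma * dt / 2.0)
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    u_prev, u = u, u_next
```

The update follows from central differencing u_tt and u_t at time step n, which keeps the scheme fully explicit despite the loss term.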
Procedia PDF Downloads 147
13959 Dynamic Log Parsing and Intelligent Anomaly Detection Method Combining Retrieval Augmented Generation and Prompt Engineering
Authors: Liu Linxin
Abstract:
As system complexity increases, log parsing and anomaly detection become increasingly important for ensuring system stability. However, traditional methods often face the problems of insufficient adaptability and decreasing accuracy when dealing with rapidly changing log contents and unknown domains. To this end, this paper proposes an approach, LogRAG, which combines RAG (Retrieval Augmented Generation) technology with prompt engineering for large language models and applies it to log analysis tasks to achieve dynamic parsing of logs and intelligent anomaly detection. By combining real-time information retrieval and prompt optimisation, this study significantly improves the adaptive capability of log analysis and the interpretability of results. Experimental results show that the method performs well on several public datasets, especially in the absence of training data, and significantly outperforms traditional methods. This paper provides a technical path for log parsing and anomaly detection, demonstrating significant theoretical value and application potential.
Keywords: log parsing, anomaly detection, retrieval-augmented generation, prompt engineering, LLMs
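A minimal sketch of the retrieve-then-prompt loop such an approach implies is shown below; `embed`, `vector_store`, and `llm_complete` are hypothetical stand-ins for an embedding model, a retrieval index, and an LLM API, and the prompt wording is illustrative rather than the paper's:

```python
# Hypothetical sketch: ground an LLM in retrieved log templates before
# asking it to parse and classify a new log line.
def analyze_log_line(raw_line: str, vector_store, embed, llm_complete) -> str:
    # 1. Retrieve log templates / past incidents similar to the incoming line.
    neighbors = vector_store.search(embed(raw_line), k=5)
    context = "\n".join(n.text for n in neighbors)

    # 2. Ground the LLM in the retrieved context via a structured prompt.
    prompt = (
        "You are a log analysis assistant.\n"
        f"Known log templates and past incidents:\n{context}\n\n"
        f"New log line:\n{raw_line}\n\n"
        "Return JSON with fields: template, parameters, is_anomaly, reason."
    )

    # 3. One grounded generation call does both parsing and classification.
    return llm_complete(prompt)
```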
Procedia PDF Downloads 29
13958 Anomaly Detection with ANN and SVM for Telemedicine Networks
Authors: Edward Guillén, Jeisson Sánchez, Carlos Omar Ramos
Abstract:
In recent years, a wide variety of applications have been developed with Support Vector Machine (SVM) methods and Artificial Neural Networks (ANN). In general, these methods depend on intrusion knowledge databases such as KDD99, ISCX, and CAIDA, among others. New classes of detectors are generated by machine learning techniques, trained and tested over network databases. Thereafter, the detectors are employed to detect anomalies in network communication scenarios according to users' connection behavior. The first detector, based on the training dataset, is deployed in different real-world networks with mobile and non-mobile devices to analyze the performance and accuracy of static detection. The vulnerabilities are based on previous work on telemedicine apps developed by the research group. This paper presents the differences in detection results between network scenarios obtained by applying traditional detectors deployed with artificial neural networks and support vector machines.
Keywords: anomaly detection, back-propagation neural networks, network intrusion detection systems, support vector machines
Procedia PDF Downloads 357
13957 Basic Modal Displacements (BMD) for Optimizing the Buildings Subjected to Earthquakes
Authors: Seyed Sadegh Naseralavi, Mohsen Khatibinia
Abstract:
In structural optimizations through meta-heuristic algorithms, analyses of structures are performed many times. For this reason, performing the analyses in a time-saving way is valuable. This point is accentuated in time-history analyses, which take much time. To this aim, peak-picking methods, also known as spectrum analyses, are generally utilized. However, such methods do not have the required accuracy, whether done by the square root of sum of squares (SRSS) or the complete quadratic combination (CQC) rule. This paper presents an efficient technique for evaluating the dynamic responses during the optimization process with high speed and accuracy. In the method, first, by using a static equivalent of the earthquake, an initial design is obtained. Then, the displacements in the modal coordinates are achieved. These displacements are herein called basic modal displacements (BMD). For each new design of the structure, the responses can be derived by appropriately scaling each of the BMD in time and amplitude and superposing them using the corresponding modal matrices. To illustrate the efficiency of the method, an optimization problem is studied. The results show that the proposed approach is a suitable replacement for the conventional time history and spectrum analyses in such problems.
Keywords: basic modal displacements, earthquake, optimization, spectrum
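A sketch of the superposition step as we read the abstract: modal displacement histories computed once for the initial design are rescaled per mode in amplitude and time for each candidate design, then mapped back to physical coordinates through the mode shapes. The scaling inputs `amp` and `tr` below stand in for the paper's exact scaling rules, which are not detailed in the abstract:

```python
import numpy as np

# Hypothetical reading of the BMD superposition step.
def response_from_bmd(Phi, q0, t, amp, tr):
    """Phi: (ndof, nmodes) mode shapes; q0: (nmodes, nt) stored BMD;
    t: (nt,) time grid; amp, tr: per-mode amplitude / period scale factors."""
    q = np.empty_like(q0)
    for i in range(q0.shape[0]):
        # stretch the stored history along the time axis by the period ratio
        q[i] = amp[i] * np.interp(t, tr[i] * t, q0[i])
    return Phi @ q            # (ndof, nt) displacement histories
```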
Procedia PDF Downloads 361
13956 Experimental and Numerical Analyses of Tehran Research Reactor
Authors: A. Lashkari, H. Khalafi, H. Khazeminejad, S. Khakshourniya
Abstract:
In this paper, a numerical model is presented. The model is used to analyze steady-state thermo-hydraulics and a reactivity insertion transient in TRR reference cores. The model predictions are compared with the experiments and PARET code results. The model uses the piecewise constant and lumped parameter methods for the coupled point kinetics and thermal-hydraulics modules, respectively. The advantages of the piecewise constant method are simplicity, efficiency and accuracy. A main criterion for the applicability range of this model is that the exit coolant temperature remains below the saturation temperature, i.e., no bulk boiling occurs in the core. The calculated values of power and coolant temperature, in the steady state and the positive reactivity insertion scenario, are in good agreement with the experimental values. The model is thus a useful tool for the transient analysis of most research reactors encountered in practice. The main objective of this work is using simple calculation methods and benchmarking them with experimental data. This model can also be used for training purposes.
Keywords: thermal-hydraulic, research reactor, reactivity insertion, numerical modeling
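For reference, the point kinetics equations that such a piecewise constant scheme integrates are the standard six-delayed-group system:

```latex
\frac{dn}{dt} = \frac{\rho(t) - \beta}{\Lambda}\,n(t) + \sum_{i=1}^{6} \lambda_i C_i(t),
\qquad
\frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\,n(t) - \lambda_i C_i(t)
```

where n is the neutron density, ρ the reactivity, β_i and λ_i the delayed-neutron fractions and decay constants (β = Σβ_i), and Λ the prompt neutron generation time.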
Procedia PDF Downloads 401
13955 Optimal Design of Friction Dampers for Seismic Retrofit of a Moment Frame
Authors: Hyungoo Kang, Jinkoo Kim
Abstract:
This study investigated the determination of the optimal location and friction force of friction dampers to effectively reduce the seismic response of a reinforced concrete structure designed without considering seismic load. To this end, a genetic algorithm was applied, and the results were compared with those obtained by simplified methods such as distributing dampers based on the story shear or the inter-story drift ratio. The seismic performance of the model structure with optimally positioned friction dampers was evaluated by nonlinear static and dynamic analyses. The analysis results showed that, compared with the system without friction dampers, the maximum roof displacement and the inter-story drift ratio were reduced by about 30% and 40%, respectively. After installation of the dampers, about 70% of the earthquake input energy was dissipated by the dampers, and the energy dissipated in the structural elements was reduced by about 50%. In comparison with the simplified methods of installation, the genetic algorithm provided more efficient solutions for the seismic retrofit of the model structure.
Keywords: friction dampers, genetic algorithm, optimal design, RC buildings
Procedia PDF Downloads 244
13954 Malaria Parasite Detection Using Deep Learning Methods
Authors: Kaustubh Chakradeo, Michael Delves, Sofya Titarenko
Abstract:
Malaria is a serious disease which affects hundreds of millions of people around the world each year. If not treated in time, it can be fatal. Despite recent developments in malaria diagnostics, microscopy remains the most common method to detect malaria. Unfortunately, the accuracy of microscopic diagnosis depends on the skill of the microscopist, which limits the throughput of malaria diagnosis. With the development of Artificial Intelligence tools, and Deep Learning techniques in particular, it is possible to lower the cost while achieving an overall higher accuracy. In this paper, we present a VGG-based model and compare it with previously developed models for identifying infected cells. Our model surpasses most previously developed models in a range of accuracy metrics. The model has the advantage of being constructed from a relatively small number of layers, which reduces the required computer resources and computational time. Moreover, we test our model on two types of datasets and argue that the currently developed deep-learning-based methods cannot efficiently distinguish between infected and contaminated cells; a more precise study of suspicious regions is required.
Keywords: convolutional neural network, deep learning, malaria, thin blood smears
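A minimal VGG-style binary classifier for cell patches might look as follows; the 64x64 input size, filter counts, and depth are illustrative assumptions rather than the paper's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# VGG-style block: stacked 3x3 convolutions followed by max pooling.
def vgg_block(x, filters, convs=2):
    for _ in range(convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(2)(x)

inputs = tf.keras.Input(shape=(64, 64, 3))
x = vgg_block(inputs, 32)
x = vgg_block(x, 64)
x = vgg_block(x, 128)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # infected vs. uninfected

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Keeping the block count small, as here, is what gives the shallow-VGG approach its modest resource footprint.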
Procedia PDF Downloads 130
13953 Estimation of Ribb Dam Catchment Sediment Yield and Reservoir Effective Life Using Soil and Water Assessment Tool Model and Empirical Methods
Authors: Getalem E. Haylia
Abstract:
The Ribb dam is one of the irrigation projects in the Upper Blue Nile basin, Ethiopia, intended to irrigate the Fogera plain. Reservoir sedimentation is a major problem because it reduces the useful reservoir capacity through the accumulation of sediments coming from the watershed. Estimates of sediment yield are needed for studies of reservoir sedimentation and planning of soil and water conservation measures. The objective of this study was to simulate the Ribb dam catchment sediment yield using the SWAT model and to estimate the Ribb reservoir's effective life according to trap efficiency methods. The Ribb dam catchment is found in the north-western highlands of Ethiopia and belongs to the Upper Blue Nile and Lake Tana basins. The Soil and Water Assessment Tool (SWAT) was selected to simulate flow and sediment yield in the Ribb dam catchment. The model sensitivity, calibration, and validation analyses at the Ambo Bahir site were performed with Sequential Uncertainty Fitting (SUFI-2). The flow data at this site were obtained by transforming the Lower Ribb gauge station (2002-2013) flow data using the Area Ratio Method. The sediment load was derived from the sediment concentration yield curve of the Ambo site. Streamflow results showed that the Nash-Sutcliffe efficiency coefficient (NSE) was 0.81 and the coefficient of determination (R²) was 0.86 in the calibration period (2004-2010) and 0.74 and 0.77 in the validation period (2011-2013), respectively. Using the same periods, the NSE and R² for the sediment load calibration were 0.85 and 0.79 and, for the validation, 0.83 and 0.78, respectively. The simulated average daily flow rate and sediment yield generated from the Ribb dam watershed were 3.38 m³/s and 1772.96 tons/km²/yr, respectively. The effective life of the Ribb reservoir was estimated using the empirical methods of Brune (1953), Churchill (1948) and Brown (1958) and found to be 30, 38 and 29 years, respectively. To conclude, massive sediment loads come from the steep-slope agricultural areas, and approximately 98-100% of the incoming annual sediment load has been trapped by the Ribb reservoir. In the Ribb catchment and reservoir, systematic and thorough consideration of technical, social, environmental, and catchment management practices should be made to lengthen the useful life of the Ribb reservoir.
Keywords: catchment, reservoir effective life, reservoir sedimentation, Ribb, sediment yield, SWAT model
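The two goodness-of-fit statistics quoted above can be computed directly; a short sketch:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    return np.corrcoef(obs, sim)[0, 1] ** 2
```

NSE equals 1 for a perfect fit and drops below 0 when the model is worse than simply predicting the observed mean, which is why values above roughly 0.75, as reported here, are generally regarded as good calibrations.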
Procedia PDF Downloads 187
13952 Mobile Wireless Investigation Platform
Authors: Dimitar Karastoyanov, Todor Penchev
Abstract:
The paper presents research on a class of autonomous mobile robots intended for work and adaptive perception in unknown and unstructured environments. The focus is on robots dedicated to multi-sensory environment perception and exploration, such as taking measurements and samples, discovering and marking objects, and interacting with the environment: transportation, and carrying equipment and objects in and out. On this basis, a classification of the different types of mobile robots is made according to the means of locomotion (wheel- or chain-driven, walking, etc.), the drive mechanisms used, kinds of sensors, end effectors, area of application, etc. A modular system for the mechanical construction of the mobile robots is proposed. A special PLC based on the ATmega128 processor is developed for robot control. Electronic modules for wireless communication based on the Jennic processor, as well as the corresponding software, are developed. The methods, means, and algorithms for adaptive behaviour in the environment and task realization are examined. The methods for group control of mobile robots and for detecting and handling suspicious objects are discussed as well.
Keywords: mobile robots, wireless communications, environment investigations, group control, suspicious objects
Procedia PDF Downloads 356
13951 Preserving Heritage in the Face of Natural Disasters: Lessons from the Bam Experience in Iran
Authors: Mohammad Javad Seddighi, Avar Almukhtar
Abstract:
The occurrence of natural disasters, such as floods and earthquakes, can cause significant damage to heritage sites and surrounding areas. In Iran, the city of Bam was devastated by an earthquake in 2003, which had a major impact on the rivers and watercourses around the city. This study aims to investigate the environmental design techniques and sustainable hazard mitigation strategies that can be employed to preserve heritage sites in the face of natural disasters, using the Bam experience as a case study. The research employs a mixed-methods approach, combining qualitative and quantitative data collection and analysis methods. The study begins with a comprehensive literature review of recent publications on environmental design techniques and sustainable hazard mitigation strategies in heritage conservation. This is followed by a field study of the rivers and watercourses around Bam, including the Adoori River (Talangoo) and other watercourses, to assess current conditions and identify potential hazards. The data collected from the field study are analysed using statistical methods and GIS mapping techniques. The findings of this study reveal the importance of sustainable hazard mitigation strategies and environmental design techniques in preserving heritage sites during natural disasters. The study suggests that these techniques can be used to mitigate the impact of future natural disasters in Bam and the surrounding areas. Specifically, the study recommends the establishment of a comprehensive early warning system, the creation of flood-resistant landscapes, and the use of eco-friendly building materials in the reconstruction of heritage sites. These findings contribute to the current knowledge of sustainable hazard mitigation and environmental design in heritage conservation.
Keywords: natural disasters, heritage conservation, sustainable hazard mitigation, environmental design, landscape architecture, flood management, disaster resilience
Procedia PDF Downloads 88
13950 Models, Methods and Technologies for Protection of Critical Infrastructures from Cyber-Physical Threats
Authors: Ivan Župan
Abstract:
Critical infrastructure is essential for the functioning of a country and is designated for special protection by governments worldwide. Due to the increase in smart technology usage in every facet of industry, including critical infrastructure, exposure to malicious cyber-physical attacks has grown in the last few years. Proper security measures must be undertaken in order to defend against cyber-physical threats that can disrupt the normal functioning of critical infrastructure and, consequently, the functioning of the country. This paper provides a review of the scientific literature on models, methods and technologies used to protect industries from cyber-physical threats. The literature was examined from three aspects. The first aspect, resilience, concerns the robustness of the system's defense against threats, as well as preparation and education about potential future threats. The second aspect concerns security risk management for systems with cyber-physical aspects, and the third aspect investigates available testbed environments for testing developed models on scaled models of vulnerable infrastructure.
Keywords: critical infrastructure, cyber-physical security, smart industry, security methodology, security technology
Procedia PDF Downloads 77
13949 Alternative Method of Determining Seismic Loads on Buildings Without Response Spectrum Application
Authors: Razmik Atabekyan, V. Atabekyan
Abstract:
This article discusses a new alternative method for the determination of seismic loads on buildings, based on the resistance of structures to vibration-induced deformations. The basic principles for determining seismic loads by the spectral method were developed in the 1940s-50s and have since been refined in pursuit of true assessments of seismic effects. The basis of the existing methods for determining seismic loads is the response spectrum or the dynamicity coefficient β (norms of the RF), neither of which is definitively established. To this day, there is no single, universal method for the determination of seismic loads, and attempts to apply the norms of different countries yield significant discrepancies between the results. On the other hand, the results of macroseismic surveys of strong earthquakes contradict the principle of acceleration-based calculation: it is well known that destruction increases on soft soils (mainly due to large displacements), even though accelerations decrease. Obviously, seismic impacts are transmitted to the building through the foundation, yet paradoxically, the existing methods do not even include foundation data, even though the acceleration of a building's foundation can differ several times from the acceleration of the ground. During earthquakes, each building exhibits its own behavior, depending on the interaction between the soil and the foundation, their dynamic characteristics, and many other factors. In this paper, we consider a new, alternative method of determining the seismic loads on buildings without the use of a response spectrum. The main conclusions are: 1) Seismic loads are evaluated at the foundation level, which leads to redistribution and reduction of seismic loads on structures. 2) The proposed method is universal and allows determining the seismic loads without the use of a response spectrum or any implicit coefficients. 3) Important factors can be taken into account, such as the strength characteristics of the soils, the size of the foundation, the angle of incidence of the seismic ray, and others. 4) Existing methods can adequately determine the seismic loads on buildings only for the first mode of vibration, under average soil conditions.
Keywords: seismic loads, response spectrum, dynamic characteristics of buildings, momentum
Procedia PDF Downloads 505
13948 Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method
Authors: Z. Mortezaie, H. Hassanpour, S. Asadi Amiri
Abstract:
Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost image contrast and to improve digital images suffering from Gaussian blur. The technique is based on sharpening object edges by appending the scaled high-frequency components of the image to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed for individual pixels of the image, considering the local gradient variations. Subjective and objective image quality assessments are used to compare the performance of the proposed method with both the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method performs better than the other existing methods.
Keywords: unsharp masking, blur image, sub-region gradient, image enhancement
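A minimal sketch of the adaptive idea, with the per-pixel gain driven by local gradient magnitude (our reading of "considering the gradient variations"); the Gaussian sigma and the gain bounds are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np
from scipy import ndimage

def adaptive_unsharp(img, sigma=2.0, g_min=0.2, g_max=1.5):
    img = img.astype(float)
    high = img - ndimage.gaussian_filter(img, sigma)   # high-frequency part
    gx, gy = np.gradient(img)
    w = np.hypot(gx, gy)
    w = w / (w.max() + 1e-12)                          # 0..1 edge strength
    gain = g_min + (g_max - g_min) * w                 # stronger boost at edges
    return np.clip(img + gain * high, 0.0, 255.0)
```

Classic unsharp masking is the special case where `gain` is a single constant for the whole image.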
Procedia PDF Downloads 214
13947 Sentiment Analysis of Ensemble-Based Classifiers for E-Mail Data
Authors: Muthukumarasamy Govindarajan
Abstract:
Detection of unwanted, unsolicited emails, called spam, is an interesting area of research. It is necessary to evaluate the performance of any new spam classifier using standard data sets. Recently, ensemble-based classifiers have gained popularity in this domain. In this research work, an efficient email filtering approach based on ensemble methods is addressed for developing an accurate and sensitive spam classifier. The proposed approach employs Naive Bayes (NB), Support Vector Machine (SVM), and Genetic Algorithm (GA) as base classifiers along with different ensemble methods. The experimental results show that the ensemble classifiers performed with greater accuracy than the individual classifiers, and the hybrid model results were better than those of the combined models for the e-mail dataset. The proposed ensemble-based classifiers turn out to be good in terms of classification accuracy, which is an important criterion for building a robust spam classifier.
Keywords: accuracy, arcing, bagging, genetic algorithm, Naive Bayes, sentiment mining, support vector machine
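A sketch of bagging plus soft voting over NB and SVM base learners (the GA component of the paper's hybrid is not shown); the features X, e.g. TF-IDF vectors of the email text, and labels y are assumed to be prepared upstream:

```python
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier, VotingClassifier

nb = MultinomialNB()
svm = SVC(kernel="linear", probability=True)   # probabilities enable soft voting
# `estimator=` on scikit-learn >= 1.2; `base_estimator=` on older versions
bagged_svm = BaggingClassifier(estimator=svm, n_estimators=10)

hybrid = VotingClassifier([("nb", nb), ("svm", bagged_svm)], voting="soft")
# from sklearn.model_selection import cross_val_score
# scores = cross_val_score(hybrid, X, y, cv=5, scoring="accuracy")
```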
Procedia PDF Downloads 142
13946 Identifying Factors Contributing to the Spread of Lyme Disease: A Regression Analysis of Virginia’s Data
Authors: Fatemeh Valizadeh Gamchi, Edward L. Boone
Abstract:
This research focuses on Lyme disease, a widespread infectious condition in the United States caused by the bacterium Borrelia burgdorferi sensu stricto. It is critical to identify the environmental and economic elements that contribute to the spread of the disease. This study examined data from Virginia to identify a subset of explanatory variables significant for Lyme disease case numbers. To identify relevant variables and avoid overfitting, linear Poisson regression and regularized regression methods with ridge, lasso, and elastic net penalties were employed. Cross-validation was performed to acquire the tuning parameters. The proposed methods can automatically identify relevant disease-count covariates. The efficacy of the techniques was assessed using four criteria on three simulated datasets. Finally, using the Virginia Department of Health's Lyme disease data set, the study successfully identified key factors, and the results were consistent with previous studies.
Keywords: Lyme disease, Poisson generalized linear model, ridge regression, lasso regression, elastic net regression
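The estimators combined here minimize the penalized Poisson negative log-likelihood; with elastic net mixing parameter α (α = 1 gives the lasso, α = 0 the ridge penalty), and dropping the log y_i! constant:

```latex
\hat{\beta} = \arg\min_{\beta}\;
-\frac{1}{n}\sum_{i=1}^{n}\left[y_i\,x_i^{\top}\beta - e^{x_i^{\top}\beta}\right]
+ \lambda\left[\alpha\,\|\beta\|_1 + \frac{1-\alpha}{2}\,\|\beta\|_2^2\right]
```

The L1 term drives irrelevant coefficients exactly to zero, which is what makes the variable selection described in the abstract automatic; λ is the tuning parameter chosen by cross-validation.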
Procedia PDF Downloads 137
13945 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has been traditionally very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true also in microgrids where many elements have to adjust their performance depending on the future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables to easily build, deploy, and share predictive analytics solutions; SQL database, a Microsoft database service for app developers; and PowerBI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful to predict hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
Keywords: time-series, features engineering methods for forecasting, energy demand forecasting, Azure Machine Learning
Procedia PDF Downloads 297
13944 A Family of Second Derivative Methods for Numerical Integration of Stiff Initial Value Problems in Ordinary Differential Equations
Authors: Luke Ukpebor, C. E. Abhulimen
Abstract:
Stiff initial value problems in ordinary differential equations are problems for which a typical solution is rapidly decaying exponentially, and their numerical investigation is very tedious. Conventional numerical integration solvers cannot cope effectively with stiff problems as they lack adequate stability characteristics. In this article, we developed a new family of four-step second derivative exponentially fitted methods of order six for the numerical integration of stiff initial value problems of general first order differential equations. In deriving our method, we employed the idea of breaking down the general multi-derivative multistep method into predictor and corrector schemes which possess free parameters that allow for automatic fitting into exponential functions. The stability analysis of the method is discussed, and the method is implemented with numerical examples. The results show that the method is A-stable and competes favorably with existing methods in terms of efficiency and accuracy.
Keywords: A-stable, exponentially fitted, four step, predictor-corrector, second derivative, stiff initial value problems
Procedia PDF Downloads 258
13943 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index
Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad
Abstract:
A road safety performance index is a composite index which combines various indicators of road safety into a single number. Development of a road safety performance index using appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given as much priority as needed. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on road facilities as well as road user behavior. The secondary objectives include finding the critical inputs in the RSPI and finding the better method of constructing the index. In this study, the RSPI is developed by selecting four main safety performance indicators, i.e., protective system usage (seat belt, helmet, etc.), road characteristics (road width, signalized intersections, number of lanes, speed limit), number of pedestrians, and number of vehicles. Data on these four safety performance indicators were collected using an observation survey on a 20 km road section of the National Highway N-125, Taxila, Pakistan. For the development of this composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for extraction, weighting, and linear aggregation of the indicators to obtain a single value; an individual index score is calculated for each road section by multiplying the weights by the standardized values of each safety performance indicator. The Simple Average technique, in contrast, applies equal weights in the linear aggregation of the indicators. The road sections are ranked according to their RSPI scores under both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the Simple Average technique.
Keywords: indicators, aggregation, principal component analysis, weighting, index score
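One common way to realize the PCA weighting described above (the paper's exact extraction and normalization may differ) is to weight standardized indicators by the first-component loadings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# X holds one row per road section and one column per safety performance
# indicator (an illustrative assumption about the data layout).
def rspi_pca(X):
    Z = StandardScaler().fit_transform(X)
    w = np.abs(PCA(n_components=1).fit(Z).components_[0])
    w = w / w.sum()                    # normalized indicator weights
    return Z @ w                       # one RSPI score per road section

def rspi_equal(X):
    Z = StandardScaler().fit_transform(X)
    return Z.mean(axis=1)              # equal-weighting benchmark
```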
Procedia PDF Downloads 158
13942 R Data Science for Technology Management
Authors: Sunghae Jun
Abstract:
Technology management (TM) is an important issue for a company improving its competitiveness. Among the many activities of TM, technology analysis (TA) is an important factor, because most decisions for the management of technology are based on the results of TA. TA analyzes the development results of a target technology using statistics or the Delphi method. Delphi-based TA depends on the experts' domain knowledge; in comparison, TA by statistics and machine learning algorithms uses objective data such as patents or papers instead of expert knowledge. Many quantitative TA methods based on statistics and machine learning have been studied and used for technology forecasting, technological innovation, and the management of technology. They apply diverse computing tools and many analytical methods case by case, and it is not easy to select the suitable software and statistical method for a given TA task. So, in this paper, we propose a methodology for quantitative TA using the statistical computing software R and data science to construct a general framework for TA. Through a case study, we also show how our methodology is applied in a real field. This research contributes to R&D planning and technology valuation in TM areas.
Keywords: technology management, R system, R data science, statistics, machine learning
Procedia PDF Downloads 458
13941 An Alternative Method for Computing Clothoids
Authors: Gerardo Casal, Miguel E. Vázquez-Méndez
Abstract:
The clothoid (also known as the Cornu spiral or Euler spiral) is a curve characterized by its curvature being proportional to its arc length. This property makes it widely used as a transition curve in designing the layout of roads and railway tracks. In this work, the parametric equations of the clothoid are obtained from this defining geometric property, and two algorithms to compute it are compared. The first (classical) is widely used in surveying schools and is based on explicit formulas obtained from Taylor expansions of the sine and cosine functions. The second (alternative) is a very simple algorithm based on the numerical solution of the initial value problem giving the clothoid parameterization. Both methods are compared on some typical surveying problems. The alternative method does not use complex formulas, so it is conceptually very simple and easy to apply. It gives good results even when the classical method fails (when the quotient between length and radius of curvature is high), needs no subsequent translations or rotations and, consequently, seems an efficient tool for designing the layout of roads and railway tracks.
Keywords: transition curves, railroad and highway engineering, Runge-Kutta methods
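A sketch of the alternative approach: integrate the clothoid's defining initial value problem with an off-the-shelf Runge-Kutta solver, with curvature growing linearly from 0 to 1/R over the transition length L (the R and L values below are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'(s) = cos(theta(s)), y'(s) = sin(theta(s)), theta'(s) = s / (R * L):
# curvature kappa(s) = s / (R * L) reaches 1/R at s = L.
def clothoid(R=300.0, L=120.0, n=200):
    def rhs(s, u):
        x, y, theta = u
        return [np.cos(theta), np.sin(theta), s / (R * L)]
    s_eval = np.linspace(0.0, L, n)
    sol = solve_ivp(rhs, (0.0, L), [0.0, 0.0, 0.0], t_eval=s_eval, rtol=1e-9)
    return sol.y[0], sol.y[1]          # x(s), y(s) along the spiral
```

Because the curve starts at the origin with zero heading, no subsequent translation or rotation is needed, matching the advantage claimed in the abstract.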
Procedia PDF Downloads 283
13940 Low-Voltage Multiphase Brushless DC Motor for Electric Vehicle Application
Authors: Mengesha Mamo Wogari
Abstract:
In this paper, a low-voltage multiphase brushless DC motor with a square-wave air-gap flux distribution is proposed for electric vehicle application. A ten-phase, 5 kW motor has been designed and simulated by finite element methods, demonstrating the desired high torque capability at low speed and flux-weakening operation at high speeds. The motor torque is proportional to the number of phases for a constant phase current and air-gap flux. The concept of vector control and a simple space vector modulation technique are used in MATLAB to control the motor, demonstrating a simple switching pattern for the selected number of phases. The low-voltage DC link and inverter AC output are desirable characteristics for avoiding electric shock in the vehicle, both accidental and under abnormal conditions. The inverter switching devices are of low voltage rating and cost-effective, though their number equals twice the number of phases.
Keywords: brushless DC motors, electric vehicle, finite element methods, low-voltage inverter, multiphase
Procedia PDF Downloads 154
13939 The Use of Artificial Intelligence for Harmonization in the Lawmaking Process
Authors: Supriyadi, Andi Intan Purnamasari, Aminuddin Kasim, Sulbadana, Mohammad Reza
Abstract:
The development of the Industrial Revolution 4.0 era has significantly influenced the administration of countries in all parts of the world, including Indonesia, and not only the administration and economic sectors: the ways and methods of forming laws must also be adjusted. Until now, the process of making laws carried out by the Parliament with the Government still uses the classical method. The law-making process still uses manual methods, such as typing the harmonization of regulations, so it is not uncommon for errors to occur, such as writing mistakes or miscopied articles; these tasks require a high level of accuracy yet rely on inventory and harmonization carried out manually by humans. This method often creates problems due to errors and inaccuracies on the part of the officers who harmonize laws after discussion and approval, which has a very serious impact on the system of law formation in Indonesia. The use of artificial intelligence in the process of forming laws appears justified and becomes the answer for minimizing the disharmony of various laws and regulations. This research is normative research using the legislative approach and the conceptual approach. It focuses on the question of how artificial intelligence can be used for harmonization in the lawmaking process.
Keywords: artificial intelligence, harmonization, laws, intelligence
Procedia PDF Downloads 161