Search results for: tracking code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2250

570 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis

Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn

Abstract:

Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains valuable information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude measured through Shannon entropy could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, before- and after-Ibutilide IEGMs recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain the PDF of amplitudes fitted a Gaussian distribution, while in the frequency domain it fitted a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower, shorter-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide exhibit significantly different properties in both the time and frequency domains. Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of the PDF (e.g., standard deviation), whereas this is difficult to do from the IEGM waveforms themselves.
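
The tracking idea described above reduces to fitting a Gaussian to the time-domain amplitudes and a Rayleigh distribution to the frequency-domain magnitudes, then comparing their spread before and after the drug. A minimal sketch of that step is given below; the synthetic signal, sampling rate, and fitting routine are illustrative stand-ins, not the authors' data or regression procedure.

    import numpy as np
    from scipy import stats

    # Synthetic stand-in for an IEGM segment (real recordings would be used instead).
    fs = 1000                                   # sampling rate, Hz (assumed)
    t = np.arange(0, 5, 1 / fs)
    signal = np.random.default_rng(1).normal(0.0, 0.4, t.size)

    # Time domain: fit a Gaussian to the amplitude samples.
    mu, sigma = stats.norm.fit(signal)

    # Frequency domain: fit a Rayleigh distribution to the FFT magnitudes.
    mags = np.abs(np.fft.rfft(signal))
    loc, scale = stats.rayleigh.fit(mags, floc=0)

    # The spread parameters are the statistics tracked before vs. after Ibutilide.
    print(f"time-domain sigma = {sigma:.3f}, frequency-domain Rayleigh scale = {scale:.3f}")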

Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics

Procedia PDF Downloads 157
569 Utilizing Minecraft Java Edition for the Application of Fire Disaster Procedures to Establish Fire Disaster Readiness for Grade 12 STEM students of DLSU-IS

Authors: Aravella Flores, Jose Rafael E. Sotelo, Luis Romulus Phillippe R. Javier, Josh Christian V. Nunez

Abstract:

This study analyzes the performance of Grade 12 STEM students of De La Salle University - Integrated School who have completed the Disaster Readiness and Risk Reduction (DRRR) course in handling fire hazards through Minecraft Java Edition. This platform is suitable because fire DRRR is challenging to learn in a practical setting, and it is uncertain whether textbook knowledge alone carries over into actual practice. The purpose of this study is to determine whether Minecraft can be a suitable environment for familiarizing oneself with fire DRRR. The objectives are achieved by using Minecraft to simulate fire scenarios in which participants can freely act and practice fire DRRR procedures. The experiment was divided into a grounding phase and a validation phase, in which the researchers observed the performance of the participants in the simulation. Pre-simulation and post-simulation surveys were given to assess the change in participants' perception of being able to apply fire DRRR procedures and of their vulnerabilities. A paired t-test showed significant differences between the pre-simulation and post-simulation survey scores, suggesting improved DRRR judgment and reduced vulnerability should a fire hazard be encountered. This research offers a model for future studies that can gather more participants and go beyond command blocks into the more complex code of Minecraft itself.
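
The statistical step used here is a standard paired (related-samples) t-test on matched pre/post scores. A minimal sketch follows, using made-up scores rather than the study's survey data.

    from scipy import stats

    # Hypothetical pre- and post-simulation survey scores for the same participants.
    pre  = [12, 15, 11, 14, 13, 10, 16, 12]
    post = [17, 18, 15, 19, 16, 14, 20, 15]

    t_stat, p_value = stats.ttest_rel(pre, post)   # paired (related-samples) t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant change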

Keywords: minecraft, DRRR, fire, disaster, simulation

Procedia PDF Downloads 130
568 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images, whether of irregular, digit, or character shape. Identifying objects and the objects nested inside them is difficult when the structure of the image contains a large number of clusters. Estimates are readily obtained by identifying the sub-regional objects with the proposed SASK algorithm, whose main focus is to recognize the number of internal objects in a given image in a shadow-free and error-free manner. Hard clustering and density clustering of the rough set obtained from the image are used to recognize any differentiated internal objects. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally the hull detection step. Detecting sub-regional hulls can increase machine learning capability in character detection, and the approach can also be extended to recognize hulls in irregularly shaped objects, such as black holes and their intensities in space exploration. Layered hulls are those with structured layers inside; recognizing them is useful in military and traffic applications for identifying the number of vehicles or persons. The proposed SASK algorithm is thus helpful for identifying such regions and can support decision processes (e.g., clearing traffic or estimating the number of persons on the opposing side in a conflict).
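
The SASK algorithm itself is not spelled out in the abstract, but the three-step pipeline it names (pre-processing, boundary extraction, hull detection) can be sketched with standard tools. The sketch below uses OpenCV as a stand-in; the input file name and thresholding choice are placeholders.

    import cv2

    # Step 1: pre-processing - load a handwritten image (placeholder path) and binarize it.
    img = cv2.imread("handwritten.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Step 2: boundary extraction - find outer and nested (internal) contours.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    # Step 3: hull detection - compute the convex hull of every contour and count the
    # ones nested inside another contour (hierarchy[0][i][3] is the parent index).
    hulls = [cv2.convexHull(c) for c in contours]
    internal = sum(1 for i in range(len(contours)) if hierarchy[0][i][3] != -1)
    print(f"{len(hulls)} hull regions found, {internal} of them internal (sub-regional)")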

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 342
567 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations

Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar

Abstract:

Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumed by Networks-on-Chip. Researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacement of the post-increment operator with the pre-increment operator for non-primitive data types. Cache misses were reduced by 18.52%, 5.34%, and 3.91% by employing the above techniques, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD instructions are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.

Keywords: cache behaviour, network-on-chip, performance profiling, vectorization

Procedia PDF Downloads 192
566 The Influence of Immunity on the Behavior and Dignity of Judges

Authors: D. Avnieli

Abstract:

Immunity of judges from liability represents a departure from the principle that all are equal under the law and that victims may be granted compensation from their offenders. The purpose of the study is to determine whether judicial immunity coincides with the need to ensure the existence of a highly independent and incorruptible judiciary. Judges are immune from civil and criminal liability for their judicial acts. Judicial immunity is justified by the need to maintain complete independence and discretion of the judiciary. Scholars and judges believe that absolute immunity is needed to shield judges from pressures, threats, or outside interference. It is commonly accepted that judges should be free to perform their judicial role in accordance with their assessment of the facts and their understanding of the law, without any restrictions, influences, inducements, or interferences. In most countries, immunity applies when judges act in excess of jurisdiction. In some countries, it applies even when they act maliciously or corruptly. The only exception to absolute immunity applicable in all judicial systems is when judges act without jurisdiction over the subject matter. The Israeli Supreme Court recently decided to embrace absolute immunity and strike out the lawsuit of a refugee who was unlawfully incarcerated. The Court ruled that the plaintiff cannot sue the State or the judge for damages. The questions of malice, dignity, and public scrutiny were not discussed. This paper, based on a comparative analysis of many cases, aims to determine whether immunity affects the dignity and behavior of judges. It demonstrates that most judges maintain their dignity and ethical code of behavior but sometimes do not hesitate to act consciously in excess of jurisdiction, and in rare cases even corruptly. Therefore, in order to maintain an independent and incorruptible judiciary, immunity should not be applied where judges act consciously in excess of jurisdiction or with malicious incentives.

Keywords: incorruptible judiciary, immunity, independent, judicial, judges, jurisdiction

Procedia PDF Downloads 100
565 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware keeps rising to handle more complex processing and operations. The operating system, however, which provides the soul of the computer, stalled in its development for a time. Faced with the high price of UNIX (Uniplexed Information and Computing System), many personal computer owners could only give it up. The Disk Operating System is too simple and leaves little room for innovation, so it is not a good choice either, and MacOS is a dedicated operating system for Apple computers that cannot be widely used on other personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of these operating systems and is built from many modular components, which makes its core architecture relatively powerful. The Linux system supports all Internet protocols, so it has very good networking functionality. Linux supports multiple users, and each user's files are not affected by other users. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is constantly upgraded and improved, and many different versions have been issued, suitable for both community and commercial use. The Linux system has good security because it relies on its file permission system. However, as vulnerabilities and hazards are constantly being discovered, the security of the operating system in use also needs more attention. This article focuses on the analysis and discussion of Linux security issues.

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 104
564 Radar Track-based Classification of Birds and UAVs

Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo

Abstract:

In recent years, the number of Unmanned Aerial Vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification, and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks from flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Compared to UAVs, flying birds show similar velocity, radar cross-section, and, in general, similar characteristics. Since no single feature can distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds from UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed, movement-related target features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).
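
A compact sketch of the feature-selection idea described above, pairing a small genetic algorithm with a logistic-regression classifier. The track features and labels are simulated placeholders, and the GA settings are illustrative rather than those used by the authors.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Placeholder track features (velocity, trajectory, and signal-strength statistics).
    X = rng.normal(size=(400, 12))
    y = (X[:, 0] + 0.5 * X[:, 3] - 0.8 * X[:, 7] + rng.normal(0, 0.5, 400) > 0).astype(int)

    def fitness(mask):
        """Cross-validated accuracy of a classifier using only the selected features."""
        if mask.sum() == 0:
            return 0.0
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

    pop = rng.integers(0, 2, size=(20, X.shape[1]))        # initial population of feature masks
    for generation in range(30):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]             # selection: keep the 10 best masks
        a = parents[rng.integers(0, 10, 20)]
        b = parents[rng.integers(0, 10, 20)]
        cross = rng.random((20, X.shape[1])) < 0.5          # uniform crossover
        children = np.where(cross, a, b)
        flip = rng.random(children.shape) < 0.1              # mutation: flip ~10% of bits
        children = children ^ flip.astype(int)
        pop = children

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))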

Keywords: birds, classification, machine learning, UAVs

Procedia PDF Downloads 214
563 Enhancing Health Information Management with Smart Rings

Authors: Bhavishya Ramchandani

Abstract:

A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and features that make the device simple to use. These gadgets, which resemble conventional rings and are usually made to fit on the finger, are outfitted with features including access management, gesture control, mobile payment processing, and activity tracking. Poor sleep patterns, irregular schedules, and bad eating habits are among the health problems many people face today. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common, and individuals in India also experience metabolic issues. In the medical field, smart rings can help patients with stomach illnesses and those unable to consume meals tailored to their bodies' needs. The smart ring tracks bodily functions, including blood sugar and glucose levels, and presents the information instantly. Based on these data, the ring generates insights and a workable plan that suit the body. In addition, as part of our core approach, we conducted focus groups and individual interviews to discuss the difficulties participants have in maintaining the right diet and whether the smart ring would be beneficial to them. Participants were very enthusiastic about and supportive of the concept of using smart rings in healthcare and believed that these rings may help them maintain their health and follow a well-balanced diet plan. This response came from the primary data; in addition, working on the Emerging Technology Canvas Analysis of smart rings in healthcare has significantly improved our understanding of the technology's application in the medical field. It is believed that demand for smart healthcare will grow as people become more conscious of their health. The majority of individuals may adopt such a ring within three to four years, once demand for it has increased, and it will significantly affect their daily lives.

Keywords: smart ring, healthcare, electronic wearable, emerging technology

Procedia PDF Downloads 58
562 Herbal Medicinal Materials for Health/Functional Foods in Korea

Authors: Chang-Hwan Oh, Young-Jong Lee

Abstract:

In April 2015, the Ministry of Food and Drug Safety announced that only 10 of the 207 products listing Cynanchum wilfordii Radix among their ingredients were confirmed to actually contain "iyeobupiso", the counterfeit version of "baeksuo"; this announcement caused confusion among consumers who had purchased health/functional foods supposedly containing the herbal medicinal material known in Korean as "baeksuo". Baeksuo is the main ingredient of the product "EstroG-100" (NaturalEndoTech, South Korea), which also contains Phlomis umbrosa and Angelica gigas. The hot-water extract of these herbal medicinal materials (HMMs) was approved by the Korea Food & Drug Administration (at present, the Ministry of Food and Drug Safety) as a product-specific Health/Functional Food (HFF) with a helpful function for women reaching menopause. In Korea, the origin of "baeksuo" is the root of Cynanchum wilfordii Hemsley (whereas in China, "iyeobupiso", the root of Cynanchum auriculatum Royle ex Wight, is considered the origin of "baeksuo"). In Korea, about 116 of the total 187 HMMs that may be used for food and medicinal purposes simultaneously are listed as food materials in the Korea Food Code. However, there is a chance that HMMs with shared food and medicinal use could be misused, for instance through the wrong plant part or through HMMs not permitted for HFFs, as in the "baeksuo" case. In this study, some HMMs with shared food and medicinal use are examined in order to reduce the chance of HMMs being misused in HFFs in Korea. For this purpose, the origin, shape, edible parts, efficacy, and side effects of similar HMMs liable to be misused in HFFs are investigated.

Keywords: herbal medicinal materials, healthy/functional foods, misuse, shared use

Procedia PDF Downloads 287
561 Pre and Post IFRS Loss Avoidance in France and the United Kingdom

Authors: T. Miková

Abstract:

This paper analyzes the effect of a single uniform accounting rule on reporting quality by investigating the influence of IFRS on earnings management. It examines whether earnings management is reduced after IFRS adoption through the use of "loss avoidance thresholds", a method that has been verified in earlier studies. The paper concentrates on two European countries: one that represents the continental code law tradition with weak protection of investors (France) and one that represents the Anglo-American common law tradition, which typically implies a strong enforcement system (the United Kingdom). The research investigates a sample of 526 companies (6,822 firm-year observations) during the years 2000-2013. The results are different for the two jurisdictions. The study demonstrates that a single set of accounting standards contributes to better reporting quality and reduces the pervasiveness of earnings management in France. In contrast, there is no evidence that a reduction in earnings management followed the implementation of IFRS in the United Kingdom. Given that IFRS benefit France but not the United Kingdom, other political and economic factors, such as the legal system or capital market strength, must play a significant role in influencing the comparability and transparency of cross-border companies' financial statements. Overall, the results suggest that IFRS moderately contribute to the accounting quality of reported financial statements and bring benefits for stakeholders, though the role played by other economic factors cannot be discounted.

Keywords: accounting standards, earnings management, international financial reporting standards, loss avoidance, reporting quality

Procedia PDF Downloads 195
560 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir

Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede

Abstract:

Today, due to the growing urban population and, consequently, the increasing water demand in cities, the amount of contaminants entering water resources is increasing. This can impose harmful effects on the quality of the downstream water. Therefore, predicting the concentration of discharged pollutants at different times and distances in the area of interest is highly important in order to carry out preventive and controlling measures, as well as to avoid consuming contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) was discretized over the study domain with the Backward Time-Backward Space (BTBS) scheme. Then, the discretized equations for each node were derived according to the initial condition, boundary conditions, and point contaminant sources. Finally, taking into account the appropriate time step and space step, a computational code was set up in MATLAB. Contaminant concentration was then obtained at different times and distances. Simulation results show that the BTBS differencing scheme combined with the FVM is an appropriate numerical approach for solving the transport partial differential equation in the case of two-dimensional contaminant transfer in an advective-diffusive flow.
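
A minimal one-dimensional sketch of the BTBS (implicit-in-time, upwind-in-space) discretisation of the advection-diffusion equation follows; the paper's model is two-dimensional and was coded in MATLAB, so the grid, coefficients, boundary conditions, and source below are placeholders.

    import numpy as np

    # 1D advection-diffusion: dC/dt + u dC/dx = D d2C/dx2, solved with an implicit
    # (backward-time) step and an upwind (backward-space) convective term.
    L, nx = 100.0, 101                      # domain length [m] and number of nodes (assumed)
    dx = L / (nx - 1)
    dt, nt = 10.0, 360                      # time step [s] and number of steps (assumed)
    u, D = 0.05, 0.01                       # velocity [m/s] and diffusion coefficient [m2/s]

    C = np.zeros(nx)
    C[nx // 2] = 100.0                      # point contaminant source (initial condition)

    # Assemble the constant BTBS system matrix A such that A @ C_new = C_old.
    A = np.zeros((nx, nx))
    cr, df = u * dt / dx, D * dt / dx**2    # Courant and diffusion numbers
    for i in range(1, nx - 1):
        A[i, i - 1] = -cr - df
        A[i, i]     = 1.0 + cr + 2.0 * df
        A[i, i + 1] = -df
    A[0, 0] = A[-1, -1] = 1.0               # Dirichlet boundaries, C = 0 at both ends

    for step in range(nt):
        rhs = C.copy()
        rhs[0] = rhs[-1] = 0.0
        C = np.linalg.solve(A, rhs)

    print("peak concentration after %d s: %.3f" % (nt * dt, C.max()))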

Keywords: BTBS differentiating scheme, contaminant concentration, finite volume, mass transfer, water pollution

Procedia PDF Downloads 130
559 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency, and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, comprising thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
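
The iFDAQ and its DAQ Debugger are built around C++ and the Qt framework, so the following is only a language-agnostic illustration of the signal-handling idea: a handler is registered for selected system signals and, when one arrives, a report with the time, process ID, and current stack is written to a file without stopping the process. The signal choices and report format are assumptions, not the tool's actual behaviour.

    import faulthandler
    import os
    import signal
    import traceback
    from datetime import datetime

    REPORT_FILE = "daq_debug_report.txt"           # placeholder report location

    def write_report(signum, frame):
        """Write a report when a monitored system signal is delivered, then keep running."""
        with open(REPORT_FILE, "a") as f:
            f.write(f"--- report {datetime.now().isoformat()} ---\n")
            f.write(f"pid={os.getpid()} signal={signal.Signals(signum).name}\n")
            f.write("".join(traceback.format_stack(frame)))   # current call stack

    # Register the handler for signals the process should survive and document.
    for sig in (signal.SIGUSR1, signal.SIGTERM):
        signal.signal(sig, write_report)

    # For fatal signals (e.g. SIGSEGV), let the runtime dump tracebacks to the same file;
    # the file object is kept referenced so the descriptor stays open.
    report_fh = open(REPORT_FILE, "a")
    faulthandler.enable(report_fh)

    if __name__ == "__main__":
        print(f"send SIGUSR1 to pid {os.getpid()} to generate a report")
        signal.pause()                              # wait for incoming signals (Unix only)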

Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework

Procedia PDF Downloads 280
558 Planning a Haemodialysis Process by Minimum Time Control of Hybrid Systems with Sliding Motion

Authors: Radoslaw Pytlak, Damian Suski

Abstract:

The aim of the paper is to provide a computational tool for planning a haemodialysis process. It is shown that optimization methods can be used to obtain the most effective treatment, focused on removing both urea and phosphorus during the process. In order to achieve that, the four-compartment (IV-compartment) model of phosphorus kinetics is applied. This kinetics model takes into account a rebound phenomenon that can occur during haemodialysis and results in a hybrid model of the process. Furthermore, the vector fields associated with the model equations are such that using the most intuitive objective functions in the planning problem is very likely to lead to solutions that include sliding motions. Therefore, building computational tools for planning a haemodialysis process has required constructing numerical algorithms for solving optimal control problems with hybrid systems. The paper concentrates on minimum time control of hybrid systems, since this control objective is the most suitable for the haemodialysis process considered here. The presented approach to optimal control problems with hybrid systems differs from others in several respects. First of all, it is assumed that a hybrid system can exhibit sliding modes. Secondly, the system's motion on the switching surface is described by index-2 differential-algebraic equations, which guarantees accurate tracking of the sliding motion surface. Thirdly, the gradients of the problem's functionals are evaluated with the help of adjoint equations. The adjoint equations presented in the paper take sliding motion into account and exhibit jump conditions at transition times. The optimality conditions, in the form of the weak maximum principle for optimal control problems with hybrid systems exhibiting sliding modes and with piecewise constant controls, are stated. The presented sensitivity analysis can be used to construct globally convergent algorithms for solving the considered problems. The paper presents numerical results of solving the haemodialysis planning problem.
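
As a simplified illustration of the hybrid character of the process, the sketch below integrates a two-compartment solute model in two modes: dialyser clearance active during treatment, then switched off, which produces the post-dialysis rebound. All parameter values are illustrative, and the paper's actual four-compartment phosphorus model, sliding-mode handling, and adjoint-based optimisation are not reproduced here.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative two-compartment solute kinetics (not the paper's IV-compartment model).
    V1, V2 = 12.0, 30.0        # compartment volumes [L]
    Kc = 0.8                   # intercompartmental clearance [L/min]
    Kd = 0.25                  # dialyser clearance [L/min], active only during dialysis

    def rhs(t, c, dialysis_on):
        c1, c2 = c                                  # accessible / remote concentrations
        removal = Kd * c1 if dialysis_on else 0.0   # mode switch of the hybrid system
        dc1 = (Kc * (c2 - c1) - removal) / V1
        dc2 = (Kc * (c1 - c2)) / V2
        return [dc1, dc2]

    c0 = [1.0, 1.0]                                 # normalised initial concentrations
    # Mode 1: dialysis for 240 minutes.
    on = solve_ivp(rhs, (0, 240), c0, args=(True,))
    # Mode 2: rebound for 60 minutes after the treatment stops.
    off = solve_ivp(rhs, (240, 300), on.y[:, -1], args=(False,))

    print("end of dialysis c1 = %.3f, after rebound c1 = %.3f" % (on.y[0, -1], off.y[0, -1]))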

Keywords: haemodialysis planning process, hybrid systems, optimal control, sliding motion

Procedia PDF Downloads 191
557 Large Eddy Simulation with Energy-Conserving Schemes: Understanding Wind Farm Aerodynamics

Authors: Dhruv Mehta, Alexander van Zuijlen, Hester Bijl

Abstract:

Large Eddy Simulation (LES) numerically resolves the large, energy-containing eddies of a turbulent flow while modelling the small dissipative eddies. On a wind farm, these large scales carry the energy that wind turbines extract and are also responsible for transporting the turbines' wakes, which may interact with downstream turbines and certainly with the atmospheric boundary layer (ABL). In this situation, it is important to conserve the energy that these wakes carry, which could be altered artificially through numerical dissipation introduced by the schemes used for spatial discretisation and temporal integration. Numerical dissipation has been reported to cause the premature recovery of turbine wakes, leading to an overprediction of the power produced by wind farms. An energy-conserving scheme is free from numerical dissipation and ensures that the energy of the wakes is increased or decreased only by the action of molecular viscosity or the action of wind turbines (body forces). The aim is to create an LES package with energy-conserving schemes to simulate wind turbine wakes correctly and to gain insight into power production, wake meandering, etc. Such knowledge will be useful in designing more efficient wind farms with minimal wake interaction, which, if unchecked, could lead to major losses in energy production per unit area of the wind farm. For their research, the authors intend to use the Energy-Conserving Navier-Stokes code developed by the Energy Research Centre of the Netherlands.

Keywords: energy-conserving schemes, modelling turbulence, Large Eddy Simulation, atmospheric boundary layer

Procedia PDF Downloads 463
556 Environmental Effect on Corrosion Fatigue Behaviors of Steam Generator Forging in Simulated Pressurized Water Reactor Environment

Authors: Yakui Bai, Chen Sun, Ke Wang

Abstract:

An experimental investigation of the environmental effect on the fatigue behavior of the SA508 Gr.3 Cl.2 steam generator forging of the CAP1400 nuclear power plant has been carried out. In order to simulate actual loading conditions, a range of strain amplitudes was applied in different low cycle fatigue (LCF) tests at a strain rate of 0.01% s⁻¹. The current American Society of Mechanical Engineers (ASME) design fatigue code does not take full account of the interactions of environmental, loading, and material factors. A design fatigue model was constructed by taking environmentally assisted fatigue effects into account, and the corresponding design curves were given for the convenience of engineering applications. The corrosion fatigue experiment was performed in strain control mode in a 320℃ borated and lithiated water environment to evaluate the effects of the mixed environment on fatigue life. Stress corrosion cracking (SCC) of the large steam generator forging in the primary water of a pressurized water reactor was also observed. In addition, it was found that the corrosion fatigue (CF) life of SA508 Gr.3 Cl.2 decreases with increasing temperature in the water environment, and the relationship between the reciprocal of temperature and the logarithm of fatigue life was found to be linear. Through these experiments and the subsequent analysis, the mechanisms of reduced low cycle fatigue life have been investigated for the steam generator forging.

Keywords: failure behavior, low alloy steel, steam generator forging, stress corrosion cracking

Procedia PDF Downloads 122
555 A Three-Dimensional (3D) Numerical Study of Roofs Shape Impact on Air Quality in Urban Street Canyons with Tree Planting

Authors: Bouabdellah Abed, Mohamed Bouzit, Lakhdar Bouarbi

Abstract:

The objective of this study is to investigate numerically the effect of roof shape on wind flow and pollutant dispersion in a street canyon with one row of trees of pore volume Pvol = 96%. A three-dimensional computational fluid dynamics (CFD) model for evaluating air flow and pollutant dispersion within an urban street canyon is used, based on the Reynolds-averaged Navier-Stokes (RANS) equations with the k-Epsilon EARSM turbulence model as closure of the equation system. The numerical model is implemented with the ANSYS-CFX code. Vehicle emissions were simulated as double line sources along the street. The numerical model was validated against a wind tunnel experiment. Having established this, the wind flow and pollutant dispersion in urban street canyons with six roof shapes are simulated. The numerical simulation agrees reasonably well with the wind tunnel data. The results obtained in this work indicate that the flow in the 3D domain is more complicated; this complexity is increased by the presence of trees and by the variability of the roof shapes. The results also indicate that the largest pollutant concentration level on the two walls (leeward and windward) is observed with the upwind wedge-shaped roof, whereas the smallest pollutant concentration level is observed with the dome-shaped roof. Finally, the corner eddies provide additional ventilation and lead to lower traffic pollutant concentrations at the street canyon ends.

Keywords: street canyon, pollutant dispersion, trees, building configuration, numerical simulation, k-Epsilon EARSM

Procedia PDF Downloads 359
554 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for the numerical simulation. Polynomial and neural-network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. The objective functions are categorized by considering measured data with and without instrument uncertainty and are defined by the least-squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided the data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) are compared for the minimization problem; although all of these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis could be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
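
A bare-bones sketch of the PSO step used for the minimisation: the particles are candidate soil-parameter sets, and the objective is the least-squares misfit between measured displacements and those predicted by a surrogate of the Plaxis model. The quadratic surrogate, the measured values, and the PSO settings below are placeholders.

    import numpy as np

    rng = np.random.default_rng(42)

    measured = np.array([12.0, 18.5, 9.3])               # placeholder monitored displacements [mm]

    def surrogate(params):
        """Stand-in response surface mapping soil parameters to predicted displacements."""
        E, phi = params                                   # e.g. stiffness and friction angle
        return np.array([0.4 * E + 0.10 * phi,
                         0.6 * E + 0.20 * phi,
                         0.2 * E + 0.15 * phi])

    def objective(params):
        return np.sum((surrogate(params) - measured) ** 2)    # least-squares misfit

    # Particle swarm optimisation over a 2D parameter space.
    n_particles, n_iter = 30, 200
    lo, hi = np.array([1.0, 10.0]), np.array([60.0, 45.0])    # parameter bounds (assumed)
    x = rng.uniform(lo, hi, (n_particles, 2))                 # positions
    v = np.zeros_like(x)                                      # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]

    w, c1, c2 = 0.7, 1.5, 1.5                                 # inertia and acceleration weights
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]

    print("identified parameters:", np.round(gbest, 2), "misfit:", round(objective(gbest), 4))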

Keywords: rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS

Procedia PDF Downloads 142
553 Model of Application of Blockchain Technology in Public Finances

Authors: M. Vlahovic

Abstract:

This paper presents a model of public finances which combines three concepts: participatory budgeting, crowdfunding, and blockchain technology. Participatory budgeting is defined as a process in which community members decide how to spend a part of the community's budget. Crowdfunding is the practice of funding a project by collecting small monetary contributions from a large number of people via an Internet platform. Blockchain technology is a distributed ledger that enables efficient and reliable transactions that are secure and transparent. In this hypothetical model, the government or authorities at the local/regional level would set up a platform on which they would propose public projects to citizens. Citizens would browse through the projects and support or vote for those they consider justified and necessary. In return, they would be entitled to tax relief in the amount of their monetary contribution. Since blockchain technology enables the tracking of transactions, it can be used to mitigate corruption, money laundering, and lack of transparency in public finances. Models of its application have already been created for e-voting, health records, and land registries. By presenting a model of the application of blockchain technology in public finances, this paper takes into consideration the potential of blockchain technology to disrupt governments and make processes more democratic, secure, transparent, and efficient. The framework for this paper consists of multiple streams of research, including key concepts of direct democracy, public finance (especially the voluntary theory of public finance), information and communication technology, especially blockchain technology, and crowdfunding. The framework defines the rules of the game, the basic conditions for the implementation of the model, its benefits, potential problems, and development perspectives. As an oversimplified map of a new form of public finances, the proposed model identifies the primary factors that influence the possibility of implementing the model and that could be tracked, measured, and controlled if the model were put to experiment.

Keywords: blockchain technology, distributed ledger, participatory budgeting, crowdfunding, direct democracy, internet platform, e-government, public finance

Procedia PDF Downloads 146
552 Behavior of the RC Slab Subjected to Impact Loading According to the DIF

Authors: Yong Jae Yu, Jae-Yeol Cho

Abstract:

In the design of structural concrete for impact loading, design or model codes often employ a dynamic increase factor (DIF) to impose the dynamic effect on the static response. Dynamic increase factors, which are obtained from laboratory material test results and commonly given as a function of strain rate only, differ considerably from each other depending on the design concepts of codes such as ACI 349M-06, fib Model Code 2010, and ACI 370R-14. Because the dynamic increase factors currently adopted in the codes are too simple and too limited to cover a variety of material strengths, their application in practical design is questionable. In this study, the dynamic increase factors used in the three codes were validated through finite element analysis of reinforced concrete slab elements which were tested and reported by another researcher. The test was intended to simulate a wall element of the containment building in nuclear power plants, assumed to be subject to an impact scenario like the one the Pentagon experienced on September 11, 2001. The finite element analysis was performed using ABAQUS 6.10, and plasticity models were employed for the concrete and the reinforcement. The dynamic increase factors given in the three codes were applied to the stress-strain curves of the materials. To estimate the dynamic increase factors, strain rate was adopted as a parameter. Comparison of the test and analysis was made with regard to perforation depth, maximum deflection, and surface crack area of the slab. Consequently, it was found that the DIF has so great an effect on the behavior of reinforced concrete structures that it must be selected very carefully. The result implies that DIFs should be provided in design codes in a more refined format that considers the various influencing factors.
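
To make the role of the DIF concrete, the sketch below scales a static concrete stress-strain curve by a strain-rate-dependent factor before it would be handed to the FE model. The power-law form, its exponent, and the curve points are purely illustrative placeholders, not the expressions of ACI 349M-06, fib Model Code 2010, or ACI 370R-14.

    import numpy as np

    def dif(strain_rate, static_rate=3.0e-5, exponent=0.02):
        """Illustrative power-law dynamic increase factor (placeholder, not a code formula)."""
        return (strain_rate / static_rate) ** exponent

    # Static uniaxial stress-strain curve of concrete (placeholder points).
    strain = np.array([0.0, 0.0005, 0.001, 0.0015, 0.002, 0.0035])
    stress = np.array([0.0, 14.0, 24.0, 30.0, 33.0, 28.0])         # MPa

    rate = 10.0                                                    # impact-level strain rate [1/s]
    dynamic_stress = stress * dif(rate)                            # scaled curve for the FE model
    print("DIF =", round(dif(rate), 3), "peak dynamic stress =", round(dynamic_stress.max(), 1), "MPa")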

Keywords: impact, strain rate, DIF, slab elements

Procedia PDF Downloads 289
551 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure

Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar

Abstract:

This article presents an alternative collapse capacity intensity measure in a three-element form, which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined through which the effects of the spectral ordinates, up to the effective period (2T_1), on the intensity measure are taken into account. The methodology permits meeting the hazard-level target extreme event in both probabilistic and deterministic forms. A MATLAB code involving OpenSees is developed to calculate the collapse capacities of the 8 archetype RC structures, having 2 to 20 stories, for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field record set presented by FEMA is used in a series of nonlinear analyses. Eight linear relationships are developed for the 8 structures, leading to correlation coefficients of up to 0.93. A near-field collapse capacity prediction equation is developed taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near-field records, leading to good agreement. Implementation of the proposed equation on four archetype RC structures demonstrated different collapse capacities at near-field sites compared to those of FEMA. The differences are believed to be due to accounting for the spectral shape effects.

Keywords: collapse capacity, fragility analysis, spectral shape effects, IDA method

Procedia PDF Downloads 231
550 Modeling Sediment Transports under Extreme Storm Situation along Persian Gulf North Coast

Authors: Majid Samiee Zenoozian

Abstract:

The Persian Gulf is a marginal sea with an average depth of 35 m and a maximum depth of about 100 m near its narrow entrance. Its elongated bathymetric axis separates two main geological provinces, the stable Arabian Foreland and the unstable Iranian Fold Belt, which are reflected in the contrasting shore and bathymetric morphologies of Arabia and Iran. The sediments were sampled at 72 offshore stations during an oceanographic cruise in the winter of 2018. Throughout the observation period, several storms and river discharge events occurred, including the largest flood on record since 1982. Suspended-sediment concentration at all three sites varied in response to both wave resuspension and advection of river-derived sediments. We used hydrological models to estimate and compare the wave height and inundation distance required to transport the rocks inland. Our results establish that no known or plausible storm occurring on the Makran coast is capable of detaching and transporting the boulders. The fluid mud is consequently conveyed seaward due to gravitational forcing; the measured sediment concentration and velocity profiles on the shelf provide strong evidence to support this assumption. The sediment model is coupled with a 3D hydrodynamic module in the Environmental Fluid Dynamics Code (EFDC) model, which provides data on estuarine circulation and salinity transport under normal temperature conditions. The 3D sediment transport results from the model simulations indicate dynamic sediment resuspension and transport near zones of highly productive oyster beds.

Keywords: sediment transport, storm, coast, fluid dynamics

Procedia PDF Downloads 111
549 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data

Authors: Jian-Heng Wu, Bor-Shen Lin

Abstract:

The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can model the distribution of a random vector and is formulated as a weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among the Gaussian distributions. Consequently, the distance between two water masses may be measured quickly, which allows automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge for assuming a regression family, but may also restrict the complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share the temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities. This system provides an interface for querying geographic locations with similar temperature-salinity-depth characteristics interactively and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
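
A minimal sketch of the comparison scheme described above: one Gaussian mixture model is fitted per location, and the distance between two locations is the weighted sum of the closed-form Bhattacharyya distances between their Gaussian components. The synthetic temperature-salinity-depth samples and the number of components are placeholders.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def bhattacharyya(mu1, cov1, mu2, cov2):
        """Closed-form Bhattacharyya distance between two multivariate Gaussians."""
        cov = 0.5 * (cov1 + cov2)
        diff = mu1 - mu2
        term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
        term2 = 0.5 * np.log(np.linalg.det(cov) /
                             np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
        return term1 + term2

    def gmm_distance(gmm_a, gmm_b):
        """Weighted sum of pairwise Bhattacharyya distances between GMM components."""
        d = 0.0
        for wa, ma, ca in zip(gmm_a.weights_, gmm_a.means_, gmm_a.covariances_):
            for wb, mb, cb in zip(gmm_b.weights_, gmm_b.means_, gmm_b.covariances_):
                d += wa * wb * bhattacharyya(ma, ca, mb, cb)
        return d

    # Hypothetical temperature-salinity-depth samples for two locations.
    rng = np.random.default_rng(0)
    loc_a = rng.normal([20.0, 34.5, 200.0], [2.0, 0.3, 50.0], size=(500, 3))
    loc_b = rng.normal([15.0, 34.8, 400.0], [2.0, 0.2, 80.0], size=(500, 3))

    gmm_a = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(loc_a)
    gmm_b = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(loc_b)
    print("distance between the two water masses:", round(gmm_distance(gmm_a, gmm_b), 3))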

Keywords: water mass, Gaussian mixture model, data visualization, system framework

Procedia PDF Downloads 138
548 Field Management Solutions Supporting Foreman Executive Tasks

Authors: Maroua Sbiti, Karim Beddiar, Djaoued Beladjine, Romuald Perrault

Abstract:

Productivity in construction is decreasing compared to the manufacturing industry. The sector seems to suffer from organizational problems and has low maturity regarding technological advances. High international competition due to globalization, complex projects, and shorter deadlines increases these challenges. Field employees are more exposed to coordination problems than design officers, so collaboration during execution is a major issue that can threaten the cost, time, and quality of project completion. Initially, this paper tries to identify field professionals' requirements in order to address building management process weaknesses such as unreliable scheduling, erratic monitoring and inspection processes, inaccurate project indicators, inconsistent building documents, and haphazard logistics management. Subsequently, we focus our attention on providing solutions to improve the scheduling, inspection, and hours-tracking processes using emerging lean tools and field mobility applications that bring new perspectives in terms of cooperation. They have shown a great ability to connect the various field teams and to make information visual and accessible, so that work can be planned accurately and potential defects eliminated at the source. In addition to software-as-a-service use, the adoption of the human resources module of an Enterprise Resource Planning system allows meticulous time accounting and thus faster decision making. The next step is to integrate external data sources received from, or destined to, design engineers, logisticians, and suppliers in a holistic system. Creating a monolithic system that consolidates the planning, quality, procurement, and resource management modules should be the ultimate target for building the construction industry supply chain.

Keywords: lean, last planner system, field mobility applications, construction productivity

Procedia PDF Downloads 113
547 Motion Planning and Simulation Design of a Redundant Robot for Sheet Metal Bending Processes

Authors: Chih-Jer Lin, Jian-Hong Hou

Abstract:

Industry 4.0 is a vision of integrated industry implemented through intelligent computing, software, and Internet technologies. The main goal of Industry 4.0 is to deal with the difficulties arising from competitive pressures in the marketplace. In today's manufacturing factories, the type of production has changed from mass production (high-quantity production with low product variety) to medium-quantity, high-variety production. To offer flexibility, better quality control, and improved productivity, robot manipulators are used to combine material processing, material handling, and part positioning into an integrated manufacturing system. To implement an automated system for sheet metal bending operations, the motion planning of a 7-degree-of-freedom (DOF) robot is studied in this paper. A virtual reality (VR) environment of a bending cell, which consists of the robot and a bending machine, is established using the virtual robot experimentation platform (V-REP) simulator. For sheet metal bending operations, the robot only needs six DOFs for the pick-and-place or tracking tasks. This 7-DOF robot therefore has more DOFs than required to execute a specified task and can be called a redundant robot, whose kinematic redundancy can be used to deal with task-priority problems. For redundant robots, the pseudo-inverse of the Jacobian is the most popular motion planning method, but pseudo-inverse methods usually lead to a kind of chaotic motion with unpredictable arm configurations as the Jacobian matrix loses rank. To overcome this problem, we propose formulating the motion planning problem as an optimization problem, and a genetic algorithm (GA) based method is proposed to deal with the motion planning of the redundant robot. Simulation results validate the feasibility of the proposed method for motion planning of the redundant robot in automated sheet-metal bending operations.
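
A minimal sketch of the baseline pseudo-inverse Jacobian scheme discussed above, for a planar 3-link arm tracking a 2D target (so one redundant DOF). Link lengths, target, and step size are placeholders, and the GA-based formulation proposed in the paper is not reproduced here.

    import numpy as np

    lengths = np.array([0.4, 0.3, 0.2])             # link lengths [m] (assumed)

    def fk(q):
        """Planar forward kinematics: end-effector (x, y) for joint angles q."""
        angles = np.cumsum(q)
        return np.array([np.sum(lengths * np.cos(angles)),
                         np.sum(lengths * np.sin(angles))])

    def jacobian(q, eps=1e-6):
        """Numerical 2x3 Jacobian of the end-effector position."""
        J = np.zeros((2, 3))
        for i in range(3):
            dq = np.zeros(3)
            dq[i] = eps
            J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
        return J

    q = np.array([0.3, 0.2, 0.1])                   # initial configuration [rad]
    target = np.array([0.5, 0.4])                   # desired end-effector position [m]

    for _ in range(200):                            # resolved-rate iterations
        error = target - fk(q)
        if np.linalg.norm(error) < 1e-4:
            break
        q = q + np.linalg.pinv(jacobian(q)) @ (0.5 * error)   # pseudo-inverse update

    print("final joint angles:", np.round(q, 3), "position error:", np.round(target - fk(q), 5))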

Keywords: redundant robot, motion planning, genetic algorithm, obstacle avoidance

Procedia PDF Downloads 143
546 Early Prediction of Diseases in a Cow for Cattle Industry

Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan

Abstract:

In this paper, a machine-learning-based approach for the early prediction of diseases in cows is proposed, in which different ML algorithms are applied to extract useful patterns from the available dataset. Technology has changed today's world in every aspect of life, and advanced technologies have likewise been developed in livestock and dairy farming to monitor dairy cows in various respects. Dairy cattle monitoring is crucial, as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early prediction technologies as food demand increases with population growth, which highlights the importance of state-of-the-art technologies in analyzing dairy cows' activities. It is not easy to predict the activities of a large number of cows on a farm, so the system makes this very convenient for farmers by providing all the solutions under one roof. The cattle industry's productivity is boosted because any disease on a cattle farm is diagnosed early, and hence treated early, on the basis of the machine learning output. The learning models, which interpret the data collected in a centralized system, are already set up. Basically, different algorithms are run on the received dataset to analyze milk quality and to track cows' health, location, and safety. This deep learning algorithm draws patterns from the data, which makes it easier for farmers to study any animal's behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible as the rate of error is minimized, and as a result, milk productivity is increased. IoT with ML capability has given a new phase to the cattle farming industry by increasing yield in a cost-effective and time-saving manner.
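
A toy sketch of the kind of supervised model the paragraph describes, trained on simulated sensor features (activity, rumination time, body temperature, milk yield) with a binary disease label. The features, data, and model choice are placeholders, not the authors' dataset or pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(7)
    n = 1000
    # Simulated per-cow daily features: activity [steps], rumination [min], temperature [C], yield [L].
    X = np.column_stack([
        rng.normal(6000, 1500, n),
        rng.normal(480, 80, n),
        rng.normal(38.6, 0.4, n),
        rng.normal(28, 5, n),
    ])
    # Simulated label: illness loosely tied to low rumination, fever, and reduced milk yield.
    risk = (X[:, 1] < 440).astype(int) + (X[:, 2] > 38.9).astype(int) + (X[:, 3] < 25).astype(int)
    y = (risk >= 2).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))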

Keywords: IoT, machine learning, health care, dairy cows

Procedia PDF Downloads 66
545 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model as a damage prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model is based on the unstructured finite volume method (FVM) code "NuFD/FrontFlowRed". For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we have simulated two previous types of hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m³ volume with a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, and the source was ignited at the bottom center by a spark. The other is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side, and the test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18 vol.% were used in the tests. The results of the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data are in good agreement with the results of the two previous explosion tests.

Keywords: deflagration, large eddy simulation, turbulent combustion, vented enclosure

Procedia PDF Downloads 237
544 Symmetric Key Encryption Algorithm Using Indian Traditional Musical Scale for Information Security

Authors: Aishwarya Talapuru, Sri Silpa Padmanabhuni, B. Jyoshna

Abstract:

Cryptography helps prevent threats to information security by providing various algorithms. This study introduces a new symmetric key encryption algorithm for information security that is linked with "raagas", the traditional Indian scales and patterns of music notes. The algorithm takes the plain text as input and starts its encryption process. It then randomly selects a raaga from a list of raagas that is assumed to be available to both the sender and the receiver. The plain text is associated with the selected raaga, and an intermediate ciphertext is formed as the algorithm converts the plaintext characters into other characters according to its rules. This intermediate code, or ciphertext, is arranged in various patterns over three different rounds of encryption. The total number of rounds in the algorithm is a multiple of 3; more specifically, the output of the first sequence of three rounds is passed recursively as the input to the same sequence of rounds until the total number of encryption rounds has been performed. The raaga selected by the algorithm and the number of rounds performed are specified at an arbitrary location in the key, in addition to other important information regarding the rounds of encryption embedded in the key, which is known to the sender and interpreted only by the receiver, thereby making the algorithm hack-proof. The key can be constructed with any number of bits, without any restriction on its size. A software application has also been developed to demonstrate this encryption process; it dynamically takes the plain text as input and readily generates the ciphertext as output. Therefore, this algorithm stands as one of the strongest tools for information security.
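
The abstract does not publish the algorithm's substitution rules or key layout, so the following toy sketch only illustrates the general idea of a raaga-keyed, multi-round substitution-and-rearrangement cipher. The raaga note offsets, round structure, and helper names are hypothetical and should not be mistaken for the authors' algorithm.

    # Toy sketch of a raaga-keyed substitution/permutation cipher (illustrative only).
    RAAGAS = {  # hypothetical mapping of raaga names to note-offset sequences
        "bilawal": [0, 2, 4, 5, 7, 9, 11],
        "bhairav": [0, 1, 4, 5, 7, 8, 11],
    }

    def substitute(text, offsets, sign=1):
        # Shift each character's code point by the raaga offset for its position.
        return "".join(chr((ord(c) + sign * offsets[i % len(offsets)]) % 0x110000)
                       for i, c in enumerate(text))

    def permute(text):
        # Simple reversible rearrangement: swap adjacent character pairs (self-inverse).
        chars = list(text)
        for i in range(0, len(chars) - 1, 2):
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

    def encrypt(plaintext, raaga, rounds=3):
        assert rounds % 3 == 0, "total rounds must be a multiple of 3"
        offsets, text = RAAGAS[raaga], plaintext
        for _ in range(rounds):
            text = permute(substitute(text, offsets))    # substitution, then rearrangement
        return text

    def decrypt(ciphertext, raaga, rounds=3):
        offsets, text = RAAGAS[raaga], ciphertext
        for _ in range(rounds):
            text = substitute(permute(text), offsets, sign=-1)   # undo each round in reverse
        return text

    if __name__ == "__main__":
        c = encrypt("attack at dawn", "bhairav", rounds=6)
        assert decrypt(c, "bhairav", rounds=6) == "attack at dawn"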

Keywords: cipher text, cryptography, plaintext, raaga

Procedia PDF Downloads 286
543 Artificial Intelligence Protecting Birds against Collisions with Wind Turbines

Authors: Aleksandra Szurlej-Kielanska, Lucyna Pilacka, Dariusz Górecki

Abstract:

The dynamic development of wind energy requires the simultaneous implementation of effective systems minimizing the risk of collisions between birds and wind turbines. Wind turbines are installed in more and more challenging locations, often close to the natural environment of birds, and more and more countries and organizations are defining guidelines for the necessary functionality of such systems. The minimum bird detection distance, trajectory tracking, and shutdown time are key factors in eliminating collisions. Since 2020, we have continued the validation survey of successive versions of the BPS detection and reaction system. The bird protection system (BPS) is a fully automatic camera system that estimates the distance of a bird from the turbine, classifies its size, and autonomously undertakes various actions depending on the bird's distance and flight path. The BPS was installed and tested in a real environment at wind turbines in northern Poland and central Spain. The validation showed that at distances of up to 300 m, the BPS performs at least as well as a skilled ornithologist, and large bird species are successfully detected from over 600 m. In addition, data collected by BPS systems installed in Spain showed that 60% of the detections of birds of prey were of individuals approaching the turbine, and these detections met the turbine shutdown criteria. Less than 40% of the detections of birds of prey took place at wind speeds below 2 m/s, while the turbines were not working. The analysis of the data collected by the system over 12 months showed that it correctly classified the size of birds with a wingspan of more than 1.1 m in 90% of cases and of birds with a wingspan of 0.7-1 m in 80% of cases. The collected data also allow the conclusion that some species keep a certain distance from the turbines at wind speeds of over 8 m/s (Aquila sp., Buteo sp., Gyps sp.), but Gyps sp. and Milvus sp. remained active at this wind speed in the study area. The data collected so far indicate that the BPS is effective in detecting birds of prey with a wingspan of more than 1 m and in stopping wind turbines in response to their presence.

Keywords: protecting birds, birds monitoring, wind farms, green energy, sustainable development

Procedia PDF Downloads 69
542 A Mixed 3D Finite Element for Highly Deformable Thermoviscoplastic Materials Under Ductile Damage

Authors: João Paulo Pascon

Abstract:

In this work, a mixed 3D finite element formulation is proposed in order to analyze thermoviscoplastic materials under large strain levels and ductile damage. To this end, a tetrahedral element of linear order is employed, considering a thermoviscoplastic constitutive law together with the neo-Hookean hyperelastic relationship and a nonlocal Gurson porous plasticity theory. The material model is capable of reproducing finite deformations, elastoplastic behavior, void growth, nucleation and coalescence, thermal effects such as plastic work heating and conductivity, strain hardening, and strain-rate dependence. The nonlocal character is introduced by means of a nonlocal parameter applied to the Laplacian of the porosity field. The element degrees of freedom are the nodal values of the deformed position, the temperature, and the nonlocal porosity field. The internal variables are updated at the Gauss points according to the yield criterion and the evolution laws, including the yield stress of the matrix, the equivalent plastic strain, the local porosity, and the plastic components of the Cauchy-Green stretch tensor. Two problems involving 3D specimens and ductile damage are numerically analyzed with the developed computational code: the necking problem and a notched sample. The effects of the nonlocal parameter and of mesh refinement are investigated in detail. Results indicate the need for a proper nonlocal parameter. In addition, the numerical formulation can predict ductile fracture based on the evolution of the fully damaged zone.

Keywords: mixed finite element, large strains, ductile damage, thermoviscoplasticity

Procedia PDF Downloads 88
541 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorraina, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. In recent years, however, significant advances in the fabrication process leading to grain size reduction have been made in order to improve the fatigue properties of aircraft turbine discs. Indeed, a change in grain size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen, and the performance of the DIC code are introduced. Second, results for sixteen specimens tested at different load ratios are presented: crack detection, strain amplitude, and number of cycles to crack initiation versus the triaxial stress ratio are given for each loading case. Third, fractographic investigations by scanning electron microscopy show that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: cruciform specimen, multiaxial fatigue, nickel-based superalloy

Procedia PDF Downloads 289