Search results for: fast simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6594

744 Understanding the Origins of Pesticides Metabolites in Natural Waters through the Land Use, Hydroclimatic Conditions and Water Quality

Authors: Alexis Grandcoin, Stephanie Piel, Estelle Baures

Abstract:

Brittany (France) is an agricultural region where emerging pollutants are at high risk of reaching water bodies. Among them, pesticide metabolites are frequently detected in surface waters. The Vilaine watershed (11,000 km²) is of great interest, as a large drinking water treatment plant (100,000 m³/day) is located at its extreme downstream end. This study aims to evaluate pesticide metabolite pollution in the Vilaine watershed and to understand its availability, in order to protect the water resource. The hydroclimatic conditions, land use, and water quality parameters controlling metabolite availability are emphasized. This knowledge will later be used to understand the conditions favoring metabolite export towards surface water. Nineteen sampling points were strategically chosen along the 220 km of the Vilaine river and its three main tributaries. Furthermore, the intakes of two drinking water plants were sampled: one located at the extreme downstream end of the Vilaine river, the other drawing on the riparian groundwater beneath it. Five sampling campaigns were carried out under various hydroclimatic conditions, during which water quality parameters and hydroclimatic conditions were measured. Fifteen environmentally relevant pesticides and metabolites, selected because they are recalcitrant to conventional water treatment, were analyzed. An evaluation of the watershed contamination was carried out in 2016-2017. First observations showed that aminomethylphosphonic acid (AMPA) and metolachlor ethanesulfonic acid (MESA) are the most frequently detected compounds in surface water samples, with detection frequencies of 100% and 98%, respectively. They are the main pollutants of the watershed regardless of the hydroclimatic conditions. The AMPA concentration in the river strongly increases downstream of the Rennes agglomeration (220k inhabitants) and reaches a maximum of 2.3 µg/L in low-water conditions. Groundwater contains mainly MESA, diuron, and metazachlor ESA at concentrations close to the limit of quantification (LOQ) (0.02 µg/L). Owing to their fast degradation in soils, metolachlor, metazachlor, and alachlor were found only in small amounts (LOQ-0.2 µg/L). Conversely, glyphosate was regularly found during warm and sunny periods, at up to 0.6 µg/L. Land uses (agricultural crop types, urban areas, forests, wastewater treatment plant locations), water quality parameters, and hydroclimatic conditions were correlated with pesticide and metabolite concentrations in waters. Statistical treatments showed that chloroacetamide metabolites and AMPA behave differently regardless of the hydroclimatic conditions. Chloroacetamides are correlated with each other, with agricultural areas, and with typical agricultural tracers such as nitrates. They are present in waters all year round, especially during rainy periods, suggesting important stocks in soils. Chloroacetamides are also negatively correlated with AMPA, the different forms of phosphorus, and organic matter. AMPA is ubiquitous but strongly correlated with urban areas, despite the recent French regulation restricting glyphosate to agricultural and private uses. This work helps to predict and understand the metabolites present in the water resource used to produce drinking water. As the studied metabolites are difficult to remove, this project will be complemented by a water treatment component.

Keywords: agricultural watershed, AMPA, metolachlor-ESA, water resource

Procedia PDF Downloads 154
743 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception

Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom

Abstract:

Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations, and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the robot's orientation. Exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee robot convergence to the destination. The proposed method uses the obstacles' shape and location, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate each obstacle's field. In addition, a search mechanism is developed for cases where the robot or the goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods.
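
As a rough illustration of the core idea, the sketch below deflects the robot's goal-directed heading by an exponentially distance-decaying angle per obstacle; the gains k and lam are hypothetical placeholders, whereas the paper selects the field parameters via the Lyapunov stability criterion.

```python
import numpy as np

def heading_with_exponential_field(robot, goal, obstacles, k=1.0, lam=0.5):
    # Nominal heading: straight toward the goal point.
    theta = np.arctan2(goal[1] - robot[1], goal[0] - robot[0])
    for obs in obstacles:
        rel = np.asarray(obs) - np.asarray(robot)
        d = np.linalg.norm(rel)
        # Bearing of the obstacle relative to the goal line, wrapped to (-pi, pi].
        bearing = (np.arctan2(rel[1], rel[0]) - theta + np.pi) % (2 * np.pi) - np.pi
        side = -1.0 if bearing > 0 else 1.0   # left group -> steer right, and vice versa
        theta += side * k * np.exp(-lam * d)  # deviation decays exponentially with distance
    return theta

print(heading_with_exponential_field((0, 0), (10, 0), [(5, 0.5), (6, -2.0)]))
```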

Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots

Procedia PDF Downloads 184
742 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic tool used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain, together with changes in the ST segment and T wave of the ECG, occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was compiled, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST/T-derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by grid search and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters to obtain the optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based solely on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy computes the average log-likelihood values of ECG segments and compares them with a range of threshold values. For different discrimination thresholds and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, a comparison between the performances of the SVM- and GMM-based classification showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
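
A minimal sketch of the two classification stages on placeholder features (the real features are ST/T-derived; the data and the threshold here are illustrative):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.mixture import GaussianMixture

# Placeholder feature vectors: class 0 = pre-inflation, class 1 = occlusion.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(1.5, 1, (100, 8))])
y = np.repeat([0, 1], 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: SVM with linear/RBF kernels, grid search, 10-fold cross-validation.
grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1]}
svm = GridSearchCV(SVC(), grid, cv=10).fit(X_tr, y_tr)
print("SVM test accuracy:", svm.score(X_te, y_te))

# Stage 2: GMM fitted on ischemic-state features only; a Neyman-Pearson style
# rule flags segments whose average log-likelihood falls below a threshold
# (the threshold is swept over a range to trace the ROC curve).
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_tr[y_tr == 1])
avg_ll = gmm.score_samples(X_te).mean()
print("outlier" if avg_ll < -12.0 else "inlier")
```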

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 156
741 Theoretical Evaluation of Minimum Superheat, Energy and Exergy in a High-Temperature Heat Pump System Operating with Low GWP Refrigerants

Authors: Adam Y. Sulaiman, Donal F. Cotter, Ming J. Huang, Neil J. Hewitt

Abstract:

Suitable low global warming potential (GWP) refrigerants that conform to F-gas regulations are required to extend the operational envelope of high-temperature heat pumps (HTHPs) used for industrial waste heat recovery. The thermophysical properties and characteristics of these working fluids need to be assessed to provide a comprehensive understanding of their operational effectiveness in HTHP applications. This paper presents the results of a theoretical simulation investigating a range of low-GWP refrigerants and their suitability to supersede refrigerants HFC-245fa and HFC-365mfc. A steady-state thermodynamic model of a single-stage HTHP with an internal heat exchanger (IHX) was developed to assess system cycle characteristics for heat-source temperatures of 50 to 80 °C and heat-sink temperatures of 90 to 150 °C. A practical approach to maximizing operational efficiency was examined to determine the effects of regulating minimum superheat within the process and its subsequent influence on energetic and exergetic efficiencies. A comprehensive map of minimum superheat across the HTHP operating variables was used to assess specific tipping points in performance at 30 and 70 K temperature lifts. Based on the initial results, the refrigerants HCFO-1233zd(E) and HFO-1336mzz(Z) were found to be closely aligned matches for refrigerants HFC-245fa and HFC-365mfc. The overall results show that effective performance occurs for HCFO-1233zd(E) between 5-7 K minimum superheat and for HFO-1336mzz(Z) between 18-21 K, dependent on the temperature lift. This work provides a method to optimize refrigerant selection based on operational indicators to maximize overall HTHP system performance.
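
For orientation only, a much-reduced cycle calculation of this kind can be scripted with the open-source CoolProp property library; the sketch below evaluates a basic single-stage cycle without the IHX for HFC-245fa, with assumed temperatures and an assumed 70% isentropic efficiency, and is not the authors' model:

```python
from CoolProp.CoolProp import PropsSI  # pip install CoolProp

fluid = "R245fa"                       # baseline fluid from the study
T_evap, T_cond, dT_sh = 70 + 273.15, 120 + 273.15, 10.0  # assumed temperatures (K), superheat
p_evap = PropsSI("P", "T", T_evap, "Q", 1, fluid)
p_cond = PropsSI("P", "T", T_cond, "Q", 0, fluid)

h1 = PropsSI("H", "T", T_evap + dT_sh, "P", p_evap, fluid)  # compressor inlet, superheated
s1 = PropsSI("S", "T", T_evap + dT_sh, "P", p_evap, fluid)
h2s = PropsSI("H", "P", p_cond, "S", s1, fluid)             # isentropic discharge state
h2 = h1 + (h2s - h1) / 0.7                                  # assumed isentropic efficiency
h3 = PropsSI("H", "P", p_cond, "Q", 0, fluid)               # saturated liquid leaving condenser

print("heating COP ~", round((h2 - h3) / (h2 - h1), 2))
```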

Keywords: high-temperature heat pump, minimum superheat, energy & exergy efficiency, low GWP refrigerants

Procedia PDF Downloads 163
740 J-Integral Method for Assessment of Structural Integrity of a Pressure Vessel

Authors: Karthik K. R, Viswanath V, Asraff A. K

Abstract:

The first stage of a new-generation launch vehicle of ISRO makes use of large pressure vessels made of aluminium alloy AA2219 to store fuel and oxidizer. These vessels have many weld joints that may contain cracks or crack-like defects introduced during fabrication. These defects may propagate across the vessel during pressure testing or while in service under the influence of tensile stresses, leading to catastrophic failure. Though ductile materials exhibit significant stable crack growth prior to failure, this is not generally acceptable for an aerospace component, so there is a need to predict the initiation of stable crack growth. The structural integrity of the vessel from fracture considerations can be studied by constructing the Failure Assessment Diagram (FAD), which accounts for both brittle fracture and plastic collapse. Critical crack sizes of the pressure vessel may be highly conservative if predicted from the FAD alone. If the J-R curve for the material under consideration is available a priori, the critical crack sizes can be predicted to a certain degree of accuracy. In this paper, a novel approach is proposed to predict the integrity of a weld in a pressure vessel made of AA2219. The fracture parameter 'J-integral' at the crack front, evaluated through finite element analyses, is used in the new procedure. Based on the simulation of tension tests carried out on SCT specimens by NASA, a cut-off value of the J-integral (J_cutoff) is finalized. For the pressure vessel, the J-integral at the crack front is evaluated through FE simulations incorporating different surface cracks at the long seam weld in the cylinder and in the dome petal welds. The J-integral obtained at vessel level is compared with J_cutoff, and the integrity of the vessel weld in the presence of the surface crack is established. The advantage of this methodology is that if SCT test data for any metal are available, the critical crack size in hardware fabricated from that material can be predicted to a better level of accuracy.
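
For reference, the standard path-independent contour definition of the J-integral (Rice's form; stated here as background, not as this paper's FE implementation) is

```latex
J = \int_{\Gamma} \left( W \,\mathrm{d}y \;-\; T_i \,\frac{\partial u_i}{\partial x}\,\mathrm{d}s \right)
```

where Γ is a contour surrounding the crack tip, W is the strain energy density, T_i the traction vector, and u_i the displacement field.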

Keywords: FAD, j-integral, fracture, surface crack

Procedia PDF Downloads 180
739 Viscoelastic Properties of Sn-15%Pb Measured in an Oscillation Test

Authors: Gerardo Sanjuan Sanjuan, Ángel Enrique Chavéz Castellanos

Abstract:

Knowledge of the rheological behavior of partially solidified metal alloys is important when modeling and simulating die filling in semisolid processes. Many experiments, such as steady-state tests, step changes in shear rate, and shear stress ramps, have been carried out, showing that semi-solid alloys exhibit shear thinning, thixotropic behavior, and yield stress. More advanced investigations give evidence that some viscoelastic features can be observed. The viscoelastic properties of materials are determined by transient or dynamic methods; unfortunately, sparse information exists about oscillation experiments. The aim of the present work is to use small-amplitude oscillatory tests to determine properties such as the storage modulus G' and the loss modulus G''. These properties provide information about the material's structure. For this purpose, we investigated a tin-lead alloy (Sn-15%Pb), which exhibits a microstructure similar to that of aluminum alloys and is the classic alloy for semisolid thixotropic studies. The experiments were performed with an AR-G2 parallel-plate rheometer. Initially, the liquid alloy is cooled down to a specific temperature in the semisolid range, chosen to guarantee a constant solid fraction. Oscillation was performed within the linear viscoelastic regime with a strain sweep, and the loss modulus G'', the storage modulus G', and the loss angle (δ) were monitored. In addition, a frequency sweep at a strain below the critical strain was performed to characterize the structure, providing more information about the interactions among solid particles in a liquid matrix. After testing, the sample was removed, cooled, sectioned, and examined metallographically. These experiments demonstrate that the viscoelasticity is sensitive to the solid fraction and is strongly influenced by the shape and size of the solid particles.
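
As background, for an imposed strain γ(t) = γ₀ sin(ωt) within the linear viscoelastic regime, the stress response separates into in-phase and out-of-phase parts (standard oscillatory rheology, not specific to this study):

```latex
\sigma(t) = \gamma_0 \left[ G'(\omega)\,\sin(\omega t) + G''(\omega)\,\cos(\omega t) \right],
\qquad \tan\delta = \frac{G''}{G'}
```

so the storage modulus G' measures the elastic (solid-like) response and the loss modulus G'' the viscous (liquid-like) response.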

Keywords: rheology, semisolid alloys, thixotropic, viscoelasticity

Procedia PDF Downloads 369
738 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs

Authors: Muhammad Yasir Wadood, Fatemeh Babaeian

Abstract:

With the development of ultra-wideband (UWB) systems, there is high demand for UWB filters with low insertion loss, wide bandwidth, and a planar structure compatible with the other components of a UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure creates difficulties in the fabrication procedure. Especially in the higher frequency band, any misalignment of the drilled via hole with the microstrip stubs causes large discrepancies between the measured and desired results. Moreover, in high-frequency designs, the line widths of the stubs are very narrow, so highly precise small via holes are required, which increases the fabrication cost significantly and adds a risk of fabrication errors. To combat this issue, this paper proposes a via-less UWB microstrip filter designed as a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are 1) replacing each via hole with a quarter-wavelength open-circuit stub to avoid manufacturing complexity, 2) using a bend structure to reduce unwanted coupling effects, and 3) minimizing the size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) was designed and fabricated. The promising simulation and measurement results are presented in this paper. The substrate selected for these designs was Rogers RO4003 with a thickness of 20 mils, a substrate common in industrial projects. The compact size of the proposed filter is highly beneficial for applications that require very compact hardware.
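
As a first-cut sizing aid, a quarter-wavelength stub at the passband center can be estimated from the guided wavelength; the sketch below uses a crude effective-permittivity approximation (a transmission-line calculator or EM solver would refine this), with the center frequency assumed from the reported band:

```python
import math

c = 299_792_458.0          # speed of light (m/s)
f0 = 5.25e9                # approximate center of the 3.9-6.6 GHz passband
eps_r = 3.38               # Rogers RO4003 datasheet permittivity
eps_eff = (eps_r + 1) / 2  # rough first-order estimate for a microstrip line

lam_g = c / (f0 * math.sqrt(eps_eff))   # guided wavelength
print(f"quarter-wave stub length ~ {lam_g / 4 * 1e3:.2f} mm")
```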

Keywords: band-pass filters, inter-digital filter, microstrip, via-less

Procedia PDF Downloads 149
737 Implementation of Research Papers and Industry Related Experiments by Undergraduate Students in the Field of Automation

Authors: Veena N. Hegde, S. R. Desai

Abstract:

Motivating a heterogeneous group of students towards engagement in research-related activities is a challenging task in engineering education. An effort is being made at the Department of Electronics and Instrumentation Engineering, where two courses have been taken up on a pilot basis to kindle research interest in students at the undergraduate level. The courses, namely Algorithm and System Design (ASD) and Automation in Process Control (APC), were selected for experimentation. The task is accomplished by providing scope for the implementation of research papers and by having student teams propose solutions to current industrial problems. The course instructors have proposed an alternative assessment tool to evaluate the undergraduate students that involves activities beyond the curriculum. The method was tested for the two courses in a particular academic year, and as per the observations, there is a considerable improvement in student engagement in research in the subsequent years of the undergraduate course. Third-year engineering student groups were asked to read and implement research papers, and to develop simulation modules for certain processes aimed at automation. The student cohort, common to both courses, numbered 30. Around 50% of the successful students were given continued tasks in the subsequent two semesters; the 15 students who continued from the sixth semester were able to follow the research methodology well in the seventh and eighth semesters. Further, around 30% of these 15 students carried out project work with a research component and produced four conference papers. The methodology adopted is justified using a sample data set, and the outcomes are highlighted. The quantitative and qualitative results obtained through this study show that such practices can substantially enhance learning experiences at the undergraduate level.

Keywords: industrial problems, learning experiences, research related activities, student engagement

Procedia PDF Downloads 162
736 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies of sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time-series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
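
A minimal sketch of the point-spread regression idea with XGBoost (placeholder features and a simplified betting rule; not the paper's full SRP-CRISP-DM pipeline):

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Placeholder design matrix (e.g., rolling team statistics) and point-spread target.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 20)), rng.normal(scale=10, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

# Simplified strategy: bet only when the model disagrees with the bookmaker's
# line (a placeholder here) by more than a margin.
line, margin = np.zeros_like(pred), 3.0
bets = np.abs(pred - line) > margin
print(f"bets placed on {bets.mean():.0%} of games")
```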

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 94
735 Computational Fluid Dynamics Model of Various Types of Rocket Engine Nozzles

Authors: Konrad Pietrykowski, Michal Bialy, Pawel Karpinski, Radoslaw Maczka

Abstract:

The nozzle is the element of the rocket engine in which the potential energy of the gases generated during combustion is converted into the kinetic energy of the gas stream. The design parameters of the nozzle have a decisive influence on the ballistic characteristics of the engine, so designing the nozzle assembly is one of the most critical stages in developing a rocket engine. The paper presents simulation results for three types of rocket propulsion nozzles. Calculations were made using CFD (Computational Fluid Dynamics) in ANSYS Fluent software. The nozzle types differ in shape: a conical nozzle, a bell-type nozzle with a conical supersonic part, and a bell-type nozzle were analyzed. Calculation results are presented in the form of pressure, velocity, and turbulence kinetic energy distributions in the longitudinal section, and the courses of these values along the nozzles are also presented. The results show that the conical nozzle generates strong turbulence in the critical section, which negatively affects the flow of the working medium. In the case of the bell nozzle, the shaping of the wall eliminated the flow disturbances in the critical section, which reduces the probability of waves forming before or after the trailing edge. The most sophisticated construction is the bell-type nozzle: it maximizes performance without adding extra weight and, due to these advantages, can be used as a starter and auxiliary engine nozzle. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
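
As general background for sizing such contours (quasi-one-dimensional isentropic theory, separate from the paper's CFD), the area-Mach relation gives the area ratio needed to reach a given Mach number:

```python
import math

def area_ratio(M, gamma=1.4):
    # Isentropic A/A* as a function of Mach number for a calorically perfect gas.
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M**2)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

print(round(area_ratio(3.0), 3))  # ~4.235 for gamma = 1.4
```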

Keywords: computational fluid dynamics, nozzle, rocket engine, supersonic flow

Procedia PDF Downloads 151
734 Analysis of Compressive and Tensile Response of Pumpkin Flesh, Peel and Unpeeled Tissues Using Experimental and FEA

Authors: Maryam Shirmohammadi, Prasad K. D. V. Yarlagadda, YuanTong Gu

Abstract:

Mechanical loading of agricultural crops during and after harvesting can cause extensive tissue damage. Uniaxial compression and tensile loading tests were performed on flesh and peel samples of pumpkin. To investigate the structural changes in the tissue, Scanning Electron Microscopy (SEM) was used to capture the change in cellular structure before and after loading for the tensile, compression, and indentation tests. To obtain the mechanical properties of the tissue required for the finite element analysis (FEA) model, laser measurement sensors were used to record the lateral displacement of the tissue under compression loading. Uniaxial force versus deformation data were recorded using a universal testing machine for both the tensile and compression tests. The experimental results were employed to develop a material model with failure criteria, and the results obtained by the simulation were compared with those obtained by experiments. Although modelling the behaviour of food materials is not a new concept, the majority of previous studies have focused on elastic behaviour and damage within the linear limit; this study developed FEA models for tensile and compressive loading of pumpkin flesh and peel samples using, as the first study to do so, both elastic and elasto-plastic material types. In addition, pumpkin peel and flesh tissues were treated as two different materials with different properties under mechanical loading. The tensile and compression loadings were used to develop the material model for a composite structure, for an FEA model of the mechanical peeling of pumpkin as a tough-skinned vegetable.

Keywords: compressive and tensile response, finite element analysis, Poisson's ratio, elastic modulus, elastic and plastic response, rupture and bio-yielding

Procedia PDF Downloads 327
733 Production of Medicinal Bio-active Amino Acid Gamma-Aminobutyric Acid In Dairy Sludge Medium

Authors: Farideh Tabatabaee Yazdi, Fereshteh Falah, Alireza Vasiee

Abstract:

Introduction: Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is widely present in organisms. GABA is a pharmacologically and biologically active compound with wide and useful applications. Several important physiological functions of GABA have been characterized, such as neurotransmission and the induction of hypotension; GABA is also a strong secretagogue of insulin from the pancreas, effectively inhibits small airway-derived lung adenocarcinoma, and acts as a tranquilizer. Many microorganisms can produce GABA, and lactic acid bacteria (LAB) have been a focus of research in recent years because they possess special physiological activities and are generally regarded as safe. Among them, Lb. brevis produces the highest amounts of GABA. The major factors affecting GABA production have been characterized, including carbon sources and glutamate concentration. The use of food industry waste to produce valuable products such as amino acids appears to be a good way to reduce production costs and prevent the waste of food resources. In a dairy factory, a high volume of sludge is produced from the separator; it contains useful compounds such as growth factors, carbon, nitrogen, and organic matter that can be used by microorganisms such as Lb. brevis as carbon and nitrogen sources. It is therefore a good substrate for GABA production. GABA is primarily formed by the irreversible α-decarboxylation of L-glutamic acid or its salts, catalysed by the glutamate decarboxylase (GAD) enzyme. In the present study, dairy industry sludge was used as a growth medium for the fast growth of Lb. brevis and the production of GABA. Lactobacillus brevis strains obtained from the Microbial Type Culture Collection (MTCC) were used as model strains. To prepare the dairy sludge as a medium, it was sterilized at 121 °C for 15 minutes. Lb. brevis was inoculated into the sludge medium at pH 6 and incubated for 120 hours at 30 °C. After fermentation, the supernatant was centrifuged, and the GABA produced was analyzed qualitatively by thin-layer chromatography (TLC) and quantitatively by high-performance liquid chromatography (HPLC). Increasing the percentage of dairy sludge in the culture medium increased the amount of GABA produced. Evaluation of bacterial growth in this medium also showed the positive effect of dairy sludge on the growth of Lb. brevis, which resulted in the production of more GABA. GABA-producing LAB offer the opportunity to develop naturally fermented, health-oriented products. Although some GABA-producing LAB have been isolated to find strains suitable for different fermentations, further screening of various GABA-producing strains, especially high-yielding ones, is necessary. The production of gamma-aminobutyric acid by lactic acid bacteria is safe and eco-friendly, and the use of dairy industry waste enhances environmental safety while providing the possibility of producing valuable compounds such as GABA. In general, dairy sludge is a suitable medium for the growth of lactic acid bacteria and the production of this amino acid, and it can reduce the final cost by providing carbon and nitrogen sources.

Keywords: GABA, Lactobacillus, HPLC, dairy sludge

Procedia PDF Downloads 132
732 Analysis of Aerodynamic Forces Acting on a Train Passing Through a Tornado

Authors: Masahiro Suzuki, Nobuyuki Okura

Abstract:

The crosswind effect on ground transportation has been extensively investigated for decades. The effect of tornadoes, however, has hardly been studied, despite the fact that even heavy ground vehicles, namely trains, have been overturned by tornadoes, with casualties, in the past. Therefore, the aerodynamic effects of a tornado on a train were studied by several approaches in this study. First, an experimental facility was developed to clarify the aerodynamic forces acting on a vehicle running through a tornado. Our experimental set-up consists of two apparatuses: a tornado simulator and a moving model rig. PIV measurements showed that the tornado simulator can generate a swirling-flow field similar to those of natural tornadoes. The flow field has a maximum tangential velocity of 7.4 m/s and a vortex core radius of 96 mm. The moving model rig makes a 1/40-scale model train, as a single-car or three-car unit, run through the swirling flow at a maximum speed of 4.3 m/s. The model car has 72 pressure ports on its surface to estimate the aerodynamic forces. The experimental results show that the aerodynamic forces vary in magnitude and direction depending on the location of the vehicle in the flow field. Second, the aerodynamic forces on the train were estimated using the Rankine vortex model, a simple tornado model widely used in the field of civil engineering. The estimated aerodynamic forces on the middle car were in fairly good agreement with the experimental results. The effects of the vortex core radius and the path of the train on the aerodynamic forces were investigated using the Rankine vortex model. The results show that the side and lift forces increase as the vortex core radius increases, while the yawing moment is maximum when the core radius is 0.3875 times the car length. Third, a computational simulation was conducted to clarify the flow field around the train; the simulated results qualitatively agreed with the experimental ones.
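
The Rankine vortex combines solid-body rotation inside the core with a free (potential) vortex outside it; a minimal sketch of the tangential-velocity profile using the simulator's measured values:

```python
def rankine_tangential_velocity(r, v_max=7.4, R=0.096):
    # v_max and R are the simulator's measured maximum tangential velocity (m/s)
    # and vortex core radius (m); r is the radial distance from the vortex axis (m).
    return v_max * r / R if r <= R else v_max * R / r

print(rankine_tangential_velocity(0.048))  # 3.7 m/s at half the core radius
print(rankine_tangential_velocity(0.192))  # 3.7 m/s at twice the core radius
```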

Keywords: aerodynamic force, experimental method, tornado, train

Procedia PDF Downloads 231
731 Design of a Surveillance Drone with Computer Aided Durability

Authors: Maram Shahad Dana Anfal

Abstract:

This research paper presents the design of a surveillance drone with computer-aided durability and model analyses, providing a cost-effective and efficient solution for various applications. The quadcopter's design is based on a lightweight and strong structure made of materials such as aluminum and titanium. The structure and the computer-aided durability system are both designed to minimize the need for frequent repairs or replacements, saving time and money in the long run. Moreover, the study discusses the drone's ability to track, investigate, and deliver objects more quickly than traditional methods, which makes it a highly efficient and cost-effective technology. In this paper, a comprehensive analysis of the quadcopter's operational dynamics and limitations is presented. In both simulation and experimental data, the computer-aided durability system and the drone's design demonstrate their effectiveness, highlighting the potential for a variety of applications, such as search and rescue missions, infrastructure monitoring, and agricultural operations. The findings also provide insights into possible areas for improvement in the design and operation of the drone. Ultimately, this paper presents a reliable and cost-effective solution for surveillance applications by designing a drone with computer-aided durability and modeling. With its potential to save time and money, increase reliability, and enhance safety, it is a promising technology for the future of surveillance drones. Operational dynamic equations have been evaluated successfully for different flight conditions of the quadcopter. CAE modeling techniques have been applied for modal risk assessment at operating conditions, and stress analysis has been performed under the loadings of the worst-case combined-motion flight conditions.

Keywords: drone, material, SolidWorks, HyperMesh

Procedia PDF Downloads 128
730 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials are difficult to separate due to their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they overfit the input sample can yield better performance in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings, greatly reducing the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
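
A minimal sketch of the deliberate-overfitting idea using torchvision's Mask R-CNN (the keywords name Mask R-CNN; the training settings are illustrative, and building_loader is a hypothetical DataLoader yielding images and targets in torchvision's detection format):

```python
import torch
import torchvision

# COCO-pretrained Mask R-CNN, fine-tuned on a small region-specific building set.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

model.train()
for epoch in range(50):                      # deliberately long: no early stopping,
    for images, targets in building_loader:  # so the model overfits the target region
        loss_dict = model(images, targets)   # returns a dict of loss terms in train mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```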

Keywords: building detection, disaster relief, Mask R-CNN, satellite mapping

Procedia PDF Downloads 166
729 Development of a Bead Based Fully Automated Multiplex Tool to Simultaneously Diagnose FIV, FeLV and FIP/FCoV

Authors: Andreas Latz, Daniela Heinz, Fatima Hashemi, Melek Baygül

Abstract:

Introduction: Feline leukemia virus (FeLV), feline immunodeficiency virus (FIV), and feline coronavirus (FCoV) are serious infectious diseases affecting cats worldwide. Transmission of these viruses occurs primarily through close contact with infected cats (via saliva, nasal secretions, faeces, etc.). FeLV, FIV, and FCoV infections can occur in combination and express themselves in similar clinical symptoms, so diagnosis can be challenging: symptoms are variable and often non-specific, and sick cats show very similar clinical signs, such as apathy, anorexia, fever, immunodeficiency syndrome, and anemia. Sufficient sample volume for diagnostic purposes can be difficult to collect from small companion animals. In addition, multiplex diagnosis can contribute to an easier, cheaper, and faster laboratory workflow as well as to a better differential diagnosis. For these reasons, we set out to develop a new diagnostic tool that uses less sample volume, fewer reagents, and fewer consumables than multiple singleplex ELISA assays. Methods: The Multiplier from Dynex Technologies (USA) was used as the platform for developing a multiplex diagnostic tool for the detection of antibodies against FIV and FCoV/FIP and of FeLV antigen. The Dynex® Multiplier® is a fully automated chemiluminescence immunoassay analyzer that significantly simplifies laboratory workflow; its ease of use reduces pre-analytical steps by combining efficient multiplexing of several assays with the simplicity of automated microplate processing. Plastic beads were coated with antigens for FIV and FCoV/FIP, as well as capture antibodies for FeLV. Feline blood samples are incubated with the beads, and readout is performed via chemiluminescence. Results: Bead coating was optimized for each individual antigen or capture antibody and then combined in the multiplex diagnostic tool. HRP-antibody conjugates for FIV and FCoV antibodies, as well as detection antibodies for FeLV antigen, were adjusted and mixed. Three individual prototype batches of the assay were produced, and for each disease we analyzed 50 well-defined positive and negative samples. The results show excellent diagnostic performance for the simultaneous detection of antibodies or antigens against these feline diseases in a fully automated system, with 100% concordance with singleplex methods such as ELISA or IFA. Intra- and inter-assay tests showed high precision, with CV values below 10% for each individual bead. Accelerated stability testing indicates a shelf life of at least one year. Conclusion: The new tool can be used for multiplex diagnostics of the most important feline infectious diseases. Only a very small sample volume is required, and full automation results in a very convenient and fast method for diagnosing animal diseases. With its large specimen capacity to process over 576 samples per 8-hour shift and provide up to 3,456 results, very high laboratory productivity and reagent savings can be achieved.

Keywords: Multiplex, FIV, FeLV, FCoV, FIP

Procedia PDF Downloads 96
728 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm

Authors: Frodouard Minani

Abstract:

Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military, and disaster-hit areas. A wireless sensor network consists of a base station (BS) and a number of wireless sensors that monitor temperature, pressure, and motion in different environmental conditions. The key parameter in designing a protocol for wireless sensor networks is energy efficiency: energy is the scarcest resource of sensor nodes, and it determines their lifetime. Maximizing the sensor nodes' lifetime is therefore an important issue in the design of applications and protocols for WSNs, and clustering sensor nodes is an effective topology-control approach for achieving this goal. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower the energy consumption and to improve the lifetime of wireless sensor networks. Minimizing energy dissipation and maximizing network lifetime are important matters in the design of applications and protocols for wireless sensor networks. The proposed system maximizes the lifetime of the network by choosing the farthest cluster head (CH) instead of the closest CH and by forming clusters using parameter metrics such as node density, residual energy, and inter-cluster distance. Comparisons between the proposed protocol and comparative protocols in different scenarios have been carried out, and the simulation results show that the proposed protocol outperforms the comparative protocols in various scenarios.
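
For context, classic LEACH lets each node elect itself cluster head with a rotating probability threshold; the sketch below shows that election rule together with a weighted candidate score of the kind the proposal describes (the weights and field names are hypothetical):

```python
import random

def leach_threshold(p=0.05, rnd=0):
    # Classic LEACH self-election threshold T(n) for round `rnd`; nodes that
    # already served as CH in the current epoch are normally excluded.
    return p / (1 - p * (rnd % int(1 / p)))

def elect_cluster_heads(nodes, p=0.05, rnd=0):
    T = leach_threshold(p, rnd)
    return [n for n in nodes if random.random() < T]

def ch_score(node, w=(0.5, 0.3, 0.2)):
    # Weighted ranking of CH candidates by residual energy, local node density,
    # and (negatively) inter-cluster distance, as the proposal suggests.
    return w[0] * node["energy"] + w[1] * node["density"] - w[2] * node["inter_cluster_dist"]
```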

Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks

Procedia PDF Downloads 138
727 Office Workspace Design for Policewomen in Assam, India: Applications for Developing Countries

Authors: Shilpi Bora, Abhirup Chatterjee, Debkumar Chakrabarti

Abstract:

Organizations across all sectors around the world are increasingly revisiting their workplace strategies with due concern for the women working in them. Limited office space and rigid work arrangements contribute to lower job satisfaction and greater work impediments in any organization. Flexible workspace strategies are indispensable to accommodate the progressive rise of modular workstations and the involvement of women. Today's generation of employees deserves adaptable office environments with employee-friendly job conditions and strategies, as the workplace now rests on rapid organizational change in a progressive and flexible work culture. Occupational wellbeing practices need to keep pace with the rapid changes in office-based work: working in the office with awkward postures or for long periods can cause pain, discomfort, and injury. The world is moving towards an era of globalization and progress. The 4,000 women police personnel constitute less than one percent of the total police strength of India. Many innovative fields are growing fast, and it is important to accommodate women in those arenas. Timeworn trends should be set aside to make way for fresh opportunities and possibilities of development and success through greater involvement of women in the workplace. The notion of women policing is gaining ground throughout the world, and various countries are making serious efforts to mainstream women in policing. As the role of women in policing grows, the accessibility of general police stations for women should also be considered. Accordingly, the impact of the police station workspace on employee productivity has been widely discussed as a crucial contributor to employee satisfaction, leading to better functional motivation. The present research therefore aimed to examine the office workstation design of police stations with reference to womanhood-specific issues, to improve the occupational wellbeing of policewomen. Personal interviews and individual responses were collected by administering a subjective assessment questionnaire to thirty women police personnel of different ranks posted in Guwahati, Assam, India, selected by purposive non-probability sampling, to obtain their views on these issues. Scrutiny of the collected data revealed that office design has a substantial impact on policewomen's job satisfaction in the police station. In this study, office design factors such as furniture, noise, temperature, lighting, and spatial arrangement were considered for their impact on individual productivity. The primary feature affecting the productivity of policewomen was the furniture used in the workspace, which was found to disturb their everyday and overall productivity. It was therefore recommended to introduce proper and adequate ergonomic design interventions to improve the office design for better performance. Such studies are the need of the hour to empower women and allow their talent to emerge in the service of the nation. Office workspace design is also of critical importance in several other occupations where workstations need further improvement.

Keywords: office workspace design, policewomen, womanhood concerns at workspace, occupational wellbeing

Procedia PDF Downloads 221
726 Experimental Simulations of Aerosol Effect to Landfalling Tropical Cyclones over Philippine Coast: Virtual Seeding Using WRF Model

Authors: Bhenjamin Jordan L. Ona

Abstract:

Weather modification is the act of altering weather systems, and it attracts scientific interest. Cloud seeding is a common form of weather alteration, and on the same principle, tropical cyclone mitigation experiments follow the methods of cloud seeding, with intensity additionally accounted for. This study presents the effects of aerosols on tropical cyclone cloud microphysics and intensity. The Weather Research and Forecasting (WRF) model framework, incorporating the Thompson aerosol-aware scheme, hosts the aerosol-cloud microphysics calculations for cloud condensation nuclei (CCN) ingested into tropical cyclones before they make landfall over the Philippine coast. The coupled microphysical and radiative effects of aerosols will be analyzed using numerical data for Tropical Storm Ketsana (2009), Tropical Storm Washi (2011), and Typhoon Haiyan (2013), with varying CCN number concentrations per simulation per typhoon: clean maritime, polluted, and very polluted, having initial aerosol number concentrations of 300 cm⁻³, 1000 cm⁻³, and 2000 cm⁻³, respectively. Aerosol species such as sulphates, sea salts, black carbon, and organic carbon will be used as cloud nuclei, and mineral dust as ice nuclei (IN). To make the study as realistic as possible, the biomass burning caused by forest fires in Indonesia starting in October 2015, during which Typhoons Mujigae/Kabayan and Koppu/Lando were seeded with aerosol emissions comprising mainly black carbon and organic carbon, will also be considered. The emission data to be used are from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). The physical mechanism(s) of intensification or de-intensification of tropical cyclones will be determined from the seeding experiment analyses.

Keywords: aerosol, CCN, IN, tropical cyclone

Procedia PDF Downloads 292
725 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study

Authors: Faris Tarlochan, Siva Mahesh Tangutooru

Abstract:

The Lateral Geniculate Nucleus (LGN) is the relay center in the visual pathway: it receives most of its input from retinal ganglion cells (RGCs) and sends it on to the visual cortex. Low-threshold calcium currents (IT) at the membrane are a unique indicator for characterizing the firing behavior of LGN neurons driven by RGC input. The morphologies of the LGN neurons were developed according to LGN functional requirements such as the functional mapping of RGCs to the LGN. In neurological disorders like glaucoma, the mapping between the RGCs and the LGN is disconnected; hence, stimulating the LGN electrically using deep brain electrodes can restore its functionality. A computational model was developed to simulate LGN neurons with three predominant morphologies, each representing a different functional mapping of RGCs to the LGN. The firing of action potentials in the LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters, and orientation. A wide range of stimulation parameters (stimulus amplitude, duration, and frequency) represents the various strengths of the electrical stimulation, combined with different morphological parameters (soma size, dendrite size, and structure). The orientation (0-180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which the extracellular deep brain stimulation of the LGN neuron is performed. A reduced dendrite structure, obtained with the Bush-Sejnowski algorithm, was used in the model to decrease the computational time while conserving the input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed 100 µm from the electrode. From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.
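
A minimal sketch of a low-threshold (T-type) calcium current in the common Hodgkin-Huxley form I_T = g_max · m∞(V)² · h · (V − E_Ca); the kinetic constants follow widely used thalamic relay-cell fits and are illustrative, not the parameters of this study:

```python
import numpy as np

def I_T(V, h, g_max=0.002, E_Ca=120.0):
    # V in mV, h is the slow inactivation gate (0..1); a negative return value
    # means an inward (depolarizing) calcium current.
    m_inf = 1.0 / (1.0 + np.exp(-(V + 57.0) / 6.2))
    return g_max * m_inf**2 * h * (V - E_Ca)

print(I_T(V=-60.0, h=0.6))  # small inward current near rest
```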

Keywords: Lateral Geniculate Nucleus, visual cortex, finite element, glaucoma, neuroprostheses

Procedia PDF Downloads 268
724 In silico Analysis towards Identification of Host-Microbe Interactions for Inflammatory Bowel Disease Linked to Reactive Arthritis

Authors: Anukriti Verma, Bhawna Rathi, Shivani Sharda

Abstract:

Reactive Arthritis (ReA) is a disorder that causes inflammation in joints due to infections at distant sites in the body. ReA begins with stiffness, pain, and inflammation in the joints, especially the ankles, knees, and hips. It gradually causes several complications, such as conjunctivitis in the eyes, skin lesions on the hands, feet, and nails, and ulcers in the mouth. Nowadays, the diagnosis of ReA is based on a differential diagnosis pattern; the parameters for differentiating ReA from similar disorders include physical examination, the patient's history, and a high index of suspicion. There are no standard lab tests or markers available for ReA; hence, early diagnosis is difficult, and the chronicity of the disease increases with time. Enteric disorders such as Inflammatory Bowel Disease (IBD), i.e., inflammation of the gastrointestinal tract, namely Crohn's Disease (CD) and Ulcerative Colitis (UC), are reported to be linked with ReA. Several microorganisms, such as Campylobacter, Salmonella, Shigella, and Yersinia, are found to cause IBD leading to ReA. The aim of our study was to perform an in silico analysis to find interactions between microorganisms and the human host that cause IBD leading to ReA. A systems biology approach of metabolic network reconstruction and simulation was used to find the essential genes of the reported microorganisms, and an interactomics study was used to find the interactions between the pathogen genes and the human host. Genes such as nhaA (pathogen), dpyD (human), nagK (human), and kynU (human) were obtained and analysed further using functional, pathway, and network analysis. These genes can be used as putative drug targets and biomarkers in the future for early diagnosis, prevention, and treatment of IBD leading to ReA.

Keywords: drug targets, inflammatory bowel disease, reactive arthritis, systems biology

Procedia PDF Downloads 271
723 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods for selecting and processing texts from the Internet are developed; information in Romanian is of special interest to us. Obtaining the mentioned tools requires several steps, divided into a preparatory stage and a processing stage. In the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, amounting to more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used; we deal with the problem of evacuating inhabitants within a useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine the dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the PIPE analysis modules, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
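
A minimal sketch of the controlled-vocabulary classification step (the keywords and categories below are illustrative; the actual vocabulary holds 300+ terms across 10 disaster categories, with Romanian of special interest):

```python
VOCAB = {
    "fire": {"fire", "blaze", "smoke"},
    "flood": {"flood", "inundation"},
    "transport accident": {"collision", "crash", "derailment"},
}

def classify(text):
    # Return every category whose keyword set intersects the text's words.
    words = set(text.lower().split())
    return [category for category, keys in VOCAB.items() if words & keys]

print(classify("Smoke and fire were reported near the station after a crash"))
```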

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 194
722 Design, Development and Analysis of Combined Darrieus and Savonius Wind Turbine

Authors: Ashish Bhattarai, Bishnu Bhatta, Hem Raj Joshi, Nabin Neupane, Pankaj Yadav

Abstract:

This report concerns the design, development, and analysis of a combined Darrieus and Savonius wind turbine. Vertical-axis wind turbines (VAWTs) are of two types, viz. Darrieus (lift type) and Savonius (drag type). The problem associated with the Darrieus is its lack of self-starting, while the Savonius has low efficiency. There are three straight Darrieus blades with a NACA (National Advisory Committee for Aeronautics) 0018 cross-section placed circumferentially, and a helically twisted Savonius blade to obtain an even torque distribution. This unique design allows the Savonius to self-start the wind turbine, which the Darrieus cannot achieve on its own. All parts of the wind turbine were designed in CAD software, and simulation data were obtained via a CFD (Computational Fluid Dynamics) approach. The design was also imported into a FlashForge Finder to 3D-print the wind turbine profile, and finally, testing was carried out. The plastic material used for the Savonius was ABS (Acrylonitrile Butadiene Styrene), and that for the Darrieus was PLA (Polylactic Acid). From the data obtained experimentally, the fabricated hybrid VAWT was found to operate at a low cut-in speed of 3 m/s, with a maximum power output of 7.5537 W at a wind speed of 6 m/s. The maximum rotor speed recorded was 431 rpm (revolutions per minute) at a wind velocity of 6 m/s, signifying its potential for wind power production. The data obtained from both processes, when analyzed through graph plots, showed similar slope-wise trends, and the difference between the experimental and theoretical data reflects mechanical losses. The objective is to eliminate the need for external motors for self-starting and to study the performance of the model; testing of the model was carried out for different wind velocities.
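
From the reported figures, the rotor's power coefficient can be estimated once the swept area is known; in the sketch below the area is assumed, since it is not given in the abstract:

```python
rho, v, P = 1.225, 6.0, 7.5537   # air density (kg/m^3), wind speed (m/s), measured output (W)
A = 0.2                           # swept area (m^2) -- assumed, not stated in the abstract
Cp = P / (0.5 * rho * A * v**3)   # fraction of the available wind power captured
print(f"power coefficient Cp ~ {Cp:.3f}")
```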

Keywords: VAWT, Darrieus, Savonius, helical blades, CFD, flash forge finder, ABS, PLA

Procedia PDF Downloads 199
721 A Study on Factors Affecting Building Information Modelling (BIM) Implementation in European Renovation Projects

Authors: Fatemeh Daneshvartarigh

Abstract:

New technologies and applications have radically altered construction techniques in recent years. In order to anticipate how a building will act, perform, and appear, these technologies encompass a wide range of visualization, simulation, and analytic tools. They have a considerable impact on completing construction projects in today's architecture, engineering, and construction (AEC) industries. The pace of change in BIM-related topics differs worldwide and depends on many factors, e.g., the national policies of each country, so there is a need for comprehensive research focused on a specific area with common characteristics. One necessary measure for increasing the use of this new approach is to examine the challenges and obstacles facing it. In this research, based on the Delphi method, the background and related literature are first reviewed. Then, using the knowledge obtained from the literature, a primary questionnaire is generated and filled in by experts selected using snowball sampling. It covers the experts' attitudes towards implementing BIM in renovation projects and their view of the benefits and obstacles in this regard. By analyzing the primary questionnaire, a second group of experts is selected among the participants to be interviewed. The results are analyzed using theme analysis, yielding six themes: management support, staff resistance, client willingness, cost of software and implementation, difficulty of implementation, and other reasons. A final questionnaire is then generated from the themes and filled in by the same group of experts, and the result is analyzed by the fuzzy Delphi method, giving an exact ranking of the obtained themes. The final results show that management support, staff resistance, and client willingness are the most critical barriers to BIM usage in renovation projects.
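
As an illustration of the fuzzy Delphi ranking step, the sketch below maps 5-point Likert ratings to triangular fuzzy numbers, aggregates them across experts, and defuzzifies with a simple centroid. The scale mapping, the hypothetical expert ratings, and the aggregation rule are common conventions assumed here, not the paper's actual data or settings.

```python
# Minimal fuzzy Delphi sketch: Likert ratings -> triangular fuzzy numbers
# (TFNs) -> one aggregated TFN per theme -> crisp score for ranking.
# All numbers are illustrative assumptions.
from statistics import geometric_mean

# Likert score -> TFN (l, m, u), a common convention in fuzzy Delphi studies
SCALE = {1: (0.0, 0.0, 0.25), 2: (0.0, 0.25, 0.5), 3: (0.25, 0.5, 0.75),
         4: (0.5, 0.75, 1.0), 5: (0.75, 1.0, 1.0)}

def fuzzy_delphi_score(expert_ratings):
    """Aggregate expert ratings into one fuzzy number and defuzzify it."""
    tfns = [SCALE[r] for r in expert_ratings]
    l = min(t[0] for t in tfns)                         # most pessimistic bound
    m = geometric_mean(t[1] for t in tfns if t[1] > 0)  # central tendency
    u = max(t[2] for t in tfns)                         # most optimistic bound
    return (l + m + u) / 3                              # simple-centroid defuzzification

# Hypothetical ratings of three barrier themes by five experts
themes = {"management support": [5, 4, 5, 4, 5],
          "staff resistance":   [4, 4, 5, 3, 4],
          "cost of software":   [3, 2, 4, 3, 3]}

for theme, ratings in sorted(themes.items(),
                             key=lambda kv: -fuzzy_delphi_score(kv[1])):
    print(f"{theme}: {fuzzy_delphi_score(ratings):.3f}")
```

In a full study, each crisp score would also be compared against an acceptance threshold to decide which themes are retained in the final ranking.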

Keywords: building information modeling, BIM, BIM implementation, BIM barriers, BIM in renovation

Procedia PDF Downloads 156
720 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing

Authors: Jonathan Martino, Kristof Harri

Abstract:

In some cases, the growth in computing resources makes simulation methods more affordable. Increased processing speed also allows real-time, or even faster, test analysis, offering a genuine tool for test prediction and design-process optimization. Vibration tests are no exception to this trend. So-called 'virtual vibration testing' offers, among other things, a way to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, and to study the influence of small changes in the structure under test. This article first presents the modeling of a virtual vibration test, with the main focus on the shaker model, and afterwards presents the experimental determination of the model parameters. The classical way of modeling a shaker is to consider it as a simple mechanical structure augmented by the electrical circuit that makes it move. The shaker is modeled as a two- or three-degree-of-freedom lumped-parameter system, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities are reduced to global parameters that are estimated through experiments. Different experiments are carried out in order to design a simple and practical method for identifying the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis is also carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, the article concludes with an experimental validation of the model.
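
To make the coupled electromechanical structure concrete, here is a minimal state-space sketch of the classical lumped-parameter shaker model: a single-degree-of-freedom armature driven through the force factor Bl, coupled to a coil circuit with resistance R and inductance L. All numerical values are illustrative assumptions, not identified parameters from the paper.

```python
# Minimal single-DOF electromechanical shaker model in state-space form.
# States: [x, v, i] = armature displacement, velocity, coil current;
# input: drive voltage. Parameter values are hypothetical.
import numpy as np
from scipy.signal import StateSpace, lsim

m, c, k = 0.5, 20.0, 2.0e4       # armature mass [kg], damping [Ns/m], stiffness [N/m]
R, L, Bl = 2.0, 1.0e-3, 15.0     # coil resistance [ohm], inductance [H], force factor [N/A]

A = np.array([[0.0,    1.0,    0.0],
              [-k/m,  -c/m,   Bl/m],   # Newton: m*v' = Bl*i - k*x - c*v
              [0.0,  -Bl/L,  -R/L]])   # Kirchhoff: L*i' = V - R*i - Bl*v (back-EMF)
B = np.array([[0.0], [0.0], [1.0/L]])
C = np.array([[-k/m, -c/m, Bl/m]])     # output: armature acceleration
D = np.array([[0.0]])

sys = StateSpace(A, B, C, D)

# Drive with a 100 Hz sine voltage and simulate the acceleration response
t = np.linspace(0.0, 0.1, 5000)
u = 5.0 * np.sin(2 * np.pi * 100 * t)
t_out, acc, _ = lsim(sys, u, t)
print(f"peak acceleration: {np.max(np.abs(acc)):.1f} m/s^2")
```

The experimental identification described above then amounts to estimating global parameters such as m, c, k, R, L, and Bl from electrical measurements and modal tests.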

Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration

Procedia PDF Downloads 263
719 Damage Mesomodel Based Low-Velocity Impact Damage Analysis of Laminated Composite Structures

Authors: Semayat Fanta, P.M. Mohite, C.S. Upadhyay

Abstract:

The damage mesomodel for laminates is one of the most widely applicable approaches for analyzing the damage induced in laminated fiber-reinforced polymer composites. It has been developed over the last three decades by many researchers through experimental, theoretical, and analytical work carried out with both micromechanics and mesomechanics approaches. It is fundamentally built on a micromechanical description and aims to predict damage initiation and evolution up to structural failure under various loading conditions. The current damage mesomodel for laminates acts as a bridge between the micromechanics and the macromechanics of laminated composite structures. The model considers two meso-constituents, the ply and the interface, for the analysis of damage imparted by low-velocity impact. The damage mechanisms considered in this study include fiber breakage, matrix cracking, and diffuse damage in the lamina, and delamination at the interface. Damage initiation and evolution in the laminae are modeled in terms of the damaged strain energy density, using damage parameters and the associated thermodynamic forces. Interface damage is modeled with a new concept of a spherical micro-void in the resin-rich zone of the interface material. Damage evolution is controlled by the damage parameter (d) and the radius of the micro-void (r), from the point of damage nucleation to saturation. The constitutive material model for the meso-constituents is defined in a user material subroutine (VUMAT) and implemented in the ABAQUS/Explicit finite element tool. The model predicts damage at the meso-constituent level very accurately and is regarded as a highly effective technique for low-velocity impact simulation of laminated composite structures.
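
The flavor of such a ply-level damage law can be shown with a one-dimensional continuum-damage sketch: stiffness degrades as E0(1 - d), with evolution driven by the thermodynamic force Y. The square-root evolution law and the threshold values below are assumptions chosen for illustration; the paper's actual VUMAT laws are not reproduced here.

```python
# Illustrative 1D continuum-damage sketch: the modulus softens as
# E = E0*(1 - d), where d grows with the thermodynamic force
# Y = 0.5*E0*eps^2 and never decreases (irreversibility).
# Thresholds Y0 (onset) and Yc (saturation) are assumed values.
import numpy as np

E0 = 70.0e9              # undamaged modulus, Pa (illustrative)
Y0, Yc = 0.05e6, 2.0e6   # damage onset / saturation forces, J/m^3 (assumed)

def update_damage(strain, d_prev):
    """One strain-driven step: compute Y, update d, return (d, stress)."""
    Y = 0.5 * E0 * strain**2                  # force on the undamaged material
    d_trial = (np.sqrt(Y) - np.sqrt(Y0)) / (np.sqrt(Yc) - np.sqrt(Y0))
    d = min(1.0, max(d_prev, d_trial))        # clamp to [d_prev, 1]: no healing
    stress = E0 * (1.0 - d) * strain
    return d, stress

# Monotonic tension: watch the stiffness soften as d grows
d = 0.0
for strain in np.linspace(0.0, 0.01, 6):
    d, sigma = update_damage(strain, d)
    print(f"eps={strain:.4f}  d={d:.3f}  sigma={sigma/1e6:.1f} MPa")
```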

Keywords: mesomodel, laminate, low-energy impact, micromechanics

Procedia PDF Downloads 215
718 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate condition-state-versus-age functions for a given bridge element group, which is of interest to industrial partners, cannot be derived directly from a Markov model. This paper presents a new approach for generating homogeneous SMDM output, namely the Modified Weibull approach, which consists of a set of appropriate functions describing the percentage of bridge elements predicted to be in each condition state. These functions are combined with a Bayesian approach and Metropolis-Hastings algorithm (MHA) based Markov chain Monte Carlo (MCMC) simulation to quantify the uncertainty in the model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. Inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered according to real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated on a test data set using statistical hypothesis tests. Results show that the proposed model not only predicts network-level conditions accurately but also captures the model uncertainties within a given confidence interval.
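
To illustrate the inference machinery, the sketch below fits a Weibull-type "proportion still in the best condition state versus age" curve with a random-walk Metropolis-Hastings sampler. The synthetic data, flat priors, proposal widths, and noise level are illustrative assumptions, not the paper's actual model or data.

```python
# Metropolis-Hastings MCMC fit of a Weibull survival-type curve
# p(age) = exp(-(age/eta)^beta) to synthetic condition-state data.
import numpy as np

rng = np.random.default_rng(0)

ages = np.arange(1, 31)                        # element ages, years
true_eta, true_beta = 25.0, 2.0                # "true" scale and shape
p_obs = np.exp(-(ages / true_eta) ** true_beta) + rng.normal(0, 0.02, ages.size)

def log_post(eta, beta, sigma=0.02):
    """Gaussian log-likelihood around the Weibull curve; flat positive priors."""
    if eta <= 0 or beta <= 0:
        return -np.inf
    p_model = np.exp(-(ages / eta) ** beta)
    return -0.5 * np.sum(((p_obs - p_model) / sigma) ** 2)

# Random-walk Metropolis-Hastings: symmetric Gaussian proposals
eta, beta = 10.0, 1.0
lp = log_post(eta, beta)
samples = []
for _ in range(20000):
    eta_p = eta + rng.normal(0, 0.5)
    beta_p = beta + rng.normal(0, 0.05)
    lp_p = log_post(eta_p, beta_p)
    if np.log(rng.uniform()) < lp_p - lp:      # accept with prob min(1, ratio)
        eta, beta, lp = eta_p, beta_p, lp_p
    samples.append((eta, beta))

post = np.array(samples[5000:])                # discard burn-in
print("posterior mean (eta, beta):", post.mean(axis=0))
print("95% CI for eta:", np.percentile(post[:, 0], [2.5, 97.5]))
```

The posterior percentiles are the mechanism by which this kind of approach attaches confidence intervals to network-level condition predictions.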

Keywords: bridge deterioration modelling, modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models

Procedia PDF Downloads 722
717 Impact of Climate Change on Crop Production: Climate Resilient Agriculture Is the Need of the Hour

Authors: Deepak Loura

Abstract:

Climate change is considered one of the major environmental problems of the 21st century: a lasting change in the statistical distribution of weather patterns over periods ranging from decades to millions of years. Agriculture and climate change are closely interrelated in various respects, and the threat of a varying global climate has strongly drawn the attention of scientists, as these variations negatively affect global crop production and compromise food security worldwide. The fast pace of development and industrialization and the indiscriminate destruction of the natural environment, more so in the last century, have altered the concentrations of atmospheric gases in ways that lead to global warming. Carbon dioxide (CO₂), methane (CH₄), and nitrous oxide (N₂O) are important biogenic greenhouse gases (GHGs) from the agricultural sector contributing to global warming, and their concentrations are increasing alarmingly. Agricultural productivity can be affected by climate change in two ways: first, directly, by affecting plant growth, development, and yield through changes in rainfall/precipitation, temperature, and/or CO₂ levels; and second, indirectly, through considerable impacts on agricultural land use due to snow melt, the availability of irrigation, the frequency and intensity of inter- and intra-seasonal droughts and floods, soil organic matter transformations, soil erosion, the distribution and frequency of infestation by insect pests, diseases, or weeds, the decline in arable area (due to the submergence of coastal lands), and the availability of energy. An increase in atmospheric CO₂ promotes the growth and productivity of C3 plants. On the other hand, an increase in temperature can reduce crop duration, increase crop respiration rates, affect the equilibrium between crops and pests, hasten nutrient mineralization in soils, decrease fertilizer-use efficiency, and increase evapotranspiration, among other effects. All of these could considerably affect crop yields in the long run. Climate resilient agriculture, consisting of adaptation, mitigation, and other agricultural practices, can enhance the capacity of the system to withstand climate-related disturbances by resisting damage and recovering quickly. It turns the climate change threats that must be tackled into new business opportunities for the sector in different regions and therefore provides a triple win: mitigation, adaptation, and economic growth. Improving the soil organic carbon stock is integral to any strategy for adapting to and mitigating abrupt climate change, advancing food security, and improving the environment. Soil carbon sequestration is one of the major mitigation strategies for achieving climate-resilient agriculture. Climate-smart agriculture is the only way to lessen the negative impact of climate variations on crop adaptation before it affects global crop production drastically. To cope with these extreme changes, future development needs adjustments in technology, management practices, and legislation. Adaptation and mitigation are twin approaches for bringing resilience to climate change in agriculture.

Keywords: climate change, global warming, crop production, climate resilient agriculture

Procedia PDF Downloads 68
716 Cybersecurity for Digital Twins in the Built Environment: Research Landscape, Industry Attitudes and Future Direction

Authors: Kaznah Alshammari, Thomas Beach, Yacine Rezgui

Abstract:

Technological advances in the construction sector are helping to make smart cities a reality by means of cyber-physical systems (CPS). CPS integrate information and the physical world through the use of information and communication technologies (ICT). An increasingly common goal in the built environment is to integrate building information models (BIM) with the Internet of Things (IoT) and sensor technologies using CPS. Future advances could see the adoption of digital twins, creating new opportunities for CPS based on monitoring, simulation, and optimisation technologies. However, researchers often fail to fully consider the security implications. To date, it has not been widely possible to assimilate BIM data and cybersecurity concepts, and security has therefore been overlooked. This paper reviews the empirical literature concerning IoT applications in the built environment and discusses real-world applications of the IoT intended to enhance construction practices, improve people's lives, and bolster cybersecurity. Specifically, this research addresses two questions: (a) how suitable are the current IoT and CPS security stacks for addressing the cybersecurity threats facing digital twins in the context of smart buildings and districts? and (b) what are the current obstacles to tackling cybersecurity threats to built environment CPS? To answer these questions, the paper reviews the current state-of-the-art research concerning digital twins in the built environment, the IoT, BIM, urban cities, and cybersecurity. The findings of this study confirm the importance of using digital twins with both the IoT and BIM. In addition, eight reference zones across Europe have gained special recognition for their contributions to the advancement of IoT science. The paper therefore evaluates the use of digital twins in CPS and arrives at recommendations for expanding BIM specifications to facilitate IoT compliance, bolster cybersecurity, and integrate digital twin and city standards in the smart cities of the future.

Keywords: BIM, cybersecurity, digital twins, IoT, urban cities

Procedia PDF Downloads 157
715 Seek First to Regulate, Then to Understand: The Case for Preemptive Regulation of Robots

Authors: Catherine McWhorter

Abstract:

Robotics is a fast-evolving field that lacks comprehensive, harm-mitigating regulation; it also lacks critical data on how human-robot interaction (HRI) may affect human psychology. As most anthropomorphic robots are intended as substitutes for humans, this paper asserts that the commercial robotics industry should be preemptively regulated at the federal level, such that robots capable of embodying a victim role in criminal scenarios ('vicbots') are prohibited until clinical studies determine their effects on users and society. The results of these studies should then inform more permanent legislation that strives to mitigate risks of harm without infringing upon fundamental rights or stifling innovation. This paper explores these concepts through the lens of the sex robot industry, which offers some of the most realistic, interactive, and customizable robots for sale today. From approximately 2010 until 2017, some sex robot producers, such as True Companion, actively promoted 'vicbot' culture with personalities like "Frigid Farrah" and "Young Yoko" but received significant public backlash for fetishizing rape and pedophilia. Today, "Frigid Farrah" and "Young Yoko" appear to have vanished, and sexbot producers have replaced preprogrammed vicbot personalities with a single generic, customizable personality. According to the manufacturer ainidoll.com, when asked, there is only one thing the user won't be able to program the sexbot to do: "…give you drama". The ability to customize vicbot personas is thus possible with today's generic-personality sexbots and may undermine the intent of some current legislative efforts. The current debate on the effects of vicbots indicates a lack of consensus: some scholars suggest vicbots may reduce the rate of actual sex crimes, some suggest that vicbots will in fact create sex criminals, and others cite their potential for rehabilitation. Vicbots may have value in some instances when prescribed by medical professionals, but the overall uncertainty and lack of data further underscore the need for preemptive regulation and clinical research. Existing literature on exposure to media violence and its effects on prosocial behavior, human aggression, and addiction may serve as a launch point for specific studies into the hyperrealism of vicbots. The customization, anthropomorphism, and artificial intelligence of sexbots, and therefore of more mainstream robots, will continue to evolve, and the existing sexbot industry offers an opportunity to regulate preemptively and to research answers to these and many more questions before this type of technology becomes even more advanced and mainstream. Robots pose complicated moral, ethical, and legal challenges, most of which are beyond the scope of this paper. By examining the possibility of custom vicbots via the sexbot industry and reviewing the existing literature on regulation, media violence, and vicbot user effects, this paper underscores the need for preemptive federal regulation prohibiting vicbot capabilities in robots while advocating further research into the potential for user and societal harm.

Keywords: human-robot interaction effects, regulation, research, robots

Procedia PDF Downloads 201