Search results for: coda wave velocity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2762


182 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

With the wave of electric vehicles (EVs), low-energy-consumption systems have become increasingly important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor only captures pixels whose intensity changes, so no signal is produced in areas without intensity change. It is therefore more energy-efficient than conventional sensors such as RGB cameras because redundant data are removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal consists of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it carries no intensity such as an RGB value. Since existing algorithms cannot be applied directly, a new processing algorithm must be designed for DVS data. To overcome the difficulties caused by this difference in data format, most prior works build frame data and feed them to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even with frame data, it is still difficult to achieve good performance because intensity information is missing. Although polarity is often used in place of RGB pixel values, polarity information is clearly not rich enough. In this context, we proposed using the timestamp information as the data representation fed to deep learning.
Concretely, we first build frame data divided by a fixed time period and then assign an intensity value according to the timestamp within each frame; for example, a recent signal receives a high value. We expected this representation to capture features of moving objects in particular, because the timestamps encode movement direction and speed. Using this proposed method, we built our own dataset with a DVS mounted on a parked car to develop a surveillance application that detects persons around the car. We consider the DVS an ideal sensor for surveillance because it can run for a long time at low energy consumption in largely static scenes. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
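The timestamp-based representation described in the abstract can be sketched roughly as follows; the frame size, the linear recency weighting, and the event tuple layout are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def timestamp_frame(events, t_start, t_end, shape):
    """Build one frame whose pixel values encode event recency:
    more recent events receive higher intensity, so a moving object
    leaves a gradient that encodes its direction and speed."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            # Linear recency weighting in [0, 1]; other decay schemes
            # (e.g. exponential) would also fit the idea in the abstract.
            frame[int(y), int(x)] = (t - t_start) / (t_end - t_start)
    return frame
```

Feeding such frames to a CNN replaces the polarity channel of the benchmark with a recency channel.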

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 73
181 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System

Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu

Abstract:

The Kuqa Piedmont in the Tarim Basin, China, is rich in oil and gas resources and has great development potential. However, a very thick gravel layer has developed there, with high gravel content, wide distribution, and strong variation in gravel size, resulting in strong heterogeneity. Consequently, the drill string vibrates severely and the drill bit wears quickly while drilling, which greatly reduces rock-breaking efficiency, and the rock at the bottom of the hole is subjected to a complex load state combining impact and three-dimensional in-situ stress. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis for engineering design, efficient rock-breaking methods, and theoretical research. Limited by previous experimental techniques, few works have been published on conglomerate, especially under dynamic load. On this basis, a 3D SHPB (split Hopkinson pressure bar) system, in which a three-dimensional prestress can be applied to simulate in-situ stress conditions, is adopted for dynamic testing of the conglomerate. The results show that the dynamic strength is clearly higher than the static strength: with zero three-dimensional prestress and loading strain rates of 81.25~228.42 s⁻¹, the true triaxial equivalent strength is 167.17~199.87 MPa, and the dynamic-to-static strength growth factor is 1.61~1.92. The higher the impact velocity, the greater the loading strain rate and the higher the dynamic strength and failure strain, all of which increase linearly. There is a critical prestress in both the impact direction and the direction perpendicular to it. In the impact direction, while the prestress is below the critical value, the dynamic strength and loading strain rate increase linearly; beyond it, the strength decreases slightly and the strain rate decreases rapidly.
In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress, and the trends reverse after it. The dynamic strength of the conglomerate can be reduced appropriately by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in gravel-rich strata. The research provides an important reference for speed-increasing drilling technology and for theoretical research on drilling in gravel layers.

Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress

Procedia PDF Downloads 165
180 Ethical Artificial Intelligence: An Exploratory Study of Guidelines

Authors: Ahmad Haidar

Abstract:

The rapid adoption of Artificial Intelligence (AI) technology carries unforeseen risks such as privacy violation, unemployment, and algorithmic bias, prompting research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of how the principles developed in recent years have evolved. This paper has two main purposes. The first is to provide insight into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; to this end, a NOISE (Needs, Opportunities, Improvements, Strengths, and Exceptions) analysis is presented. The second is to offer a framework for building ethical AI linked to sustainability. The research adopts an explorative, more specifically inductive, approach to address the theoretical gap. The paper tracks the different efforts toward “trustworthy AI” and “ethical AI,” compiling a list of 12 documents released between 2017 and 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit; and second, testing the frequency of each principle, yielding the technical principles that may be useful to stakeholders considering the lifecycle of AI, also known as sustainable AI. Sustainable AI is the third wave of AI ethics, a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) toward greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as the recommended technical principles to include in the AI lifecycle.
Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., toward the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. The study’s last contribution is to clarify how the principles evolved. To illustrate, in 2018 the Montreal Declaration mentioned principles including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, further notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies, and we advance knowledge of trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.

Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI

Procedia PDF Downloads 71
179 Magnetic Navigation in Underwater Networks

Authors: Kumar Divyendra

Abstract:

Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN system consists of a set of sensors deployed randomly underwater that communicate with each other using acoustic links; RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs over optical links when an AUV is in range. This reduces the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation: an AUV intending to move closer to a node with given coordinates moves hop by hop through the nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple approaches have been proposed, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), and computer-vision-based navigation. These systems have their own drawbacks: INS accumulates error over time, and vision techniques require prior information about the environment. We propose a method that uses the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN.
The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.
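The hop-distance coordinate scheme described above can be sketched as follows; the graph layout, node names, and the greedy L1 metric over hop coordinates are illustrative assumptions:

```python
from collections import deque

def hop_coordinates(adj, landmarks):
    """Hop distance from every node to each surface landmark, via BFS.
    The resulting vector of hop counts serves as a node's coordinates."""
    coords = {v: [] for v in adj}
    for lm in landmarks:
        dist = {lm: 0}
        queue = deque([lm])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for v in adj:
            coords[v].append(dist.get(v, float('inf')))
    return coords

def next_hop(adj, coords, current, target):
    """Greedy hop-by-hop step: move to the neighbor whose hop
    coordinates are closest (L1 distance) to the target's."""
    def distance(v):
        return sum(abs(a - b) for a, b in zip(coords[v], coords[target]))
    return min(adj[current], key=distance)
```

An AUV repeatedly calls `next_hop` until it reaches the node with the desired coordinates.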

Keywords: clustering, deep learning, network backbone, parallel computing

Procedia PDF Downloads 75
178 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades

Authors: Chi Zhang, Hua-Peng Chen

Abstract:

With increasingly strained resources in the world, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of using wind energy is the wind turbine, which has gained more attention in recent research. Like most offshore structures, the blades, which are the most critical components of the wind turbine, are subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and the harsh environment. The fatigue crack development of blades is uncertain because of the indeterminate mechanical properties of composites and the uncertainties of the offshore environment, such as wave loads, wind loads, and humidity. There are three main delamination failure modes for composite blades, and the most common failure type in practice involves mixed-mode loading, typically a combination of opening (mode I) and shear (mode II). However, fatigue crack development under mixed-mode loading cannot be predicted deterministically because of the various uncertainties of realistic situations. Therefore, selecting an effective stochastic model to evaluate the mixed-mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach, since it models a deterioration process that proceeds in one direction only, matching the realistic situation of fatigue damage in wind turbine blades. On the basis of existing studies, various Paris law equations are discussed to simulate fatigue crack propagation. This paper develops a Paris model with stochastic deterioration modelling based on the gamma process for predicting fatigue crack performance over the design service life.
A numerical example of wind turbine composite materials is investigated to predict the mixed-mode crack depth by Paris law and the probability of fatigue failure by the gamma process. Probability-of-failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with experimental results, the gamma process can take uncertain values into consideration for mixed-mode crack propagation, and the stochastic deterioration process agrees better with the realistic crack process for composite blades. Finally, based on the predicted results from the gamma stochastic model, assessment strategies for composite blades are developed to reduce total lifecycle costs and increase resistance to fatigue crack growth.
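The gamma-process deterioration idea can be sketched numerically: damage grows by independent non-negative gamma increments, so it can only accumulate, and the empirical fraction of simulated paths exceeding a critical crack depth gives a probability-of-failure curve. The shape, scale, and critical-depth values below are illustrative assumptions, not the paper's calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_process_paths(shape_per_step, scale, t_steps, n_paths):
    """Simulate monotone gamma-process deterioration: each path is a
    cumulative sum of independent gamma increments, so crack depth
    never decreases (as with fatigue damage)."""
    increments = rng.gamma(shape_per_step, scale, size=(n_paths, t_steps))
    return np.cumsum(increments, axis=1)

def failure_probability(paths, critical_depth):
    """Fraction of paths whose depth exceeds the critical value at each
    time step: an empirical probability-of-failure curve."""
    return (paths >= critical_depth).mean(axis=0)
```

Plotting `failure_probability` against time reproduces the kind of probability-of-failure curves compared in the paper.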

Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process

Procedia PDF Downloads 386
177 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers

Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal

Abstract:

Background: We aim to develop an integrated device comprising a single-probe EEG and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) to assess attention during a given structured task (painting three segments of a circle using three different colors, namely red, green, and blue), the CCD sensor captures the movement pattern of subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the subjects (3- to 5-year-old pre-schoolers). During the painting of the three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves was found to correlate with relaxation and attention/cognitive-load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., moving variously colored balls from one table to another. We used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD, and compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation.
Conclusion: MAHD, the integrated device, is intended as an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
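The delta/beta absolute-power comparison underlying the attention measure can be sketched as a simple periodogram computation. The band edges, the single-window periodogram estimator, and the ratio-style index below are illustrative assumptions; the authors' exact spectral processing is not specified in the abstract:

```python
import numpy as np

def band_power(signal, fs, f_low, f_high):
    """Absolute power of a single-channel signal in [f_low, f_high] Hz,
    estimated from the periodogram of one analysis window."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_low) & (freqs <= f_high)
    return float(psd[band].sum())

def attention_index(eeg, fs):
    # Beta (13-30 Hz) power relative to delta (1-4 Hz) power:
    # higher values serve as a proxy for attention/cognitive load,
    # lower values for relaxation.
    return band_power(eeg, fs, 13, 30) / band_power(eeg, fs, 1, 4)
```

On a dry single-channel recording, such an index would be computed per window and then averaged over the structured task.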

Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test

Procedia PDF Downloads 75
176 Evaluation of an Integrated Supersonic System for Inertial Extraction of CO₂ in Post-Combustion Streams of Fossil Fuel Operating Power Plants

Authors: Zarina Chokparova, Ighor Uzhinsky

Abstract:

Carbon dioxide emissions from burning fossil fuels at large scales, such as in the oil industry or power plants, lead to severe implications, including global temperature rise, air pollution, and other adverse environmental impacts. Besides some precarious and costly ways of alleviating the harm of industrial-scale CO₂ emissions (such as liquefaction of CO₂ and its deep-water treatment, or the application of adsorbents and membranes, which require careful consideration and mitigation of their drawbacks), one physically and commercially viable technology for CO₂ capture and disposal is a supersonic system for inertial extraction of CO₂ from post-combustion streams. Because flue gas with a carbon dioxide concentration of 10-15 volume percent is emitted from the combustion system, the waste stream is rather dilute and at low pressure. The supersonic system expands the flue gas mixture through a converging-diverging nozzle; the flow velocity increases to the supersonic range, resulting in a rapid drop in temperature and pressure. This conversion of potential energy into kinetic energy causes desublimation of the CO₂. The solidified carbon dioxide can then be sent to a separate vessel for disposal. The major advantages of this solution are its economic efficiency, physical stability, and compactness, as well as the fact that no additional chemical media are required. However, several challenges remain in optimizing the system: increasing the size of the separated CO₂ particles (whose effective diameters are on the micrometer scale), reducing the amount of other gases separated together with the carbon dioxide, and ensuring the purity of the CO₂ downstream flow. Moreover, determining the thermodynamic conditions of the vapor-solid mixture, including specifying a valid and accurate equation of state, remains an essential goal.
Because of the high speeds and temperatures reached during the process, the influence of the emitted heat should be considered, and an applicable solution model for the compressible flow needs to be determined. This report presents a brief overview of the current technology status and proposes a program for further evaluation of this approach.
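The temperature drop that drives desublimation follows from the standard isentropic relation for a converging-diverging nozzle, T = T₀ / (1 + (γ − 1)/2 · M²). A minimal sketch, assuming an illustrative γ ≈ 1.29 for CO₂-rich flue gas and a stagnation temperature of 320 K (neither value is taken from the abstract):

```python
def static_temperature(t_total, mach, gamma=1.29):
    """Static temperature of an isentropically expanded flow at a given
    Mach number: T = T0 / (1 + (gamma - 1)/2 * M^2).
    gamma ~ 1.29 (CO2-rich flue gas) is an illustrative assumption."""
    return t_total / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

# Approximate CO2 frost (desublimation) point at atmospheric pressure
CO2_DESUBLIMATION_K = 194.7
```

For these numbers the static temperature falls below the CO₂ frost point only once the flow is well supersonic, which is why the nozzle must accelerate the stream to high Mach numbers before the solid particles can be separated inertially.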

Keywords: CO₂ sequestration, converging diverging nozzle, fossil fuel power plant emissions, inertial CO₂ extraction, supersonic post-combustion carbon dioxide capture

Procedia PDF Downloads 123
175 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs) at reduced cost compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters from point correspondences across images without a rigorous laboratory or field calibration process, and it is more flexible in that it requires neither consistent image overlap nor the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison was performed between SfM digital elevation models (DEMs) and those generated through traditional photogrammetric techniques. Data were collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. The data were processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts with a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refinement step using sub-pixel image information for high matching accuracy.
Tests with different numbers and configurations of control points were conducted. Camera calibration parameters estimated with the commercial software were comparable to those obtained by the laboratory procedure. Exposure station positions agreed to within a few centimeters, and only insignificant differences, of less than three arc-seconds, were found among the orientation angles. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
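The DEM differencing used in the comparison reduces to simple grid statistics. A minimal sketch, assuming the two DEMs are already co-registered on the same grid (the statistics chosen, mean shift and RMSE, are the usual ones for such comparisons, not values from the paper):

```python
import numpy as np

def dem_difference_stats(dem_a, dem_b):
    """Cell-by-cell vertical differences between two co-registered DEMs:
    the mean difference captures a systematic vertical shift, and the
    RMSE captures the overall disagreement between the surfaces."""
    diff = np.asarray(dem_a, dtype=float) - np.asarray(dem_b, dtype=float)
    return float(diff.mean()), float(np.sqrt((diff ** 2).mean()))
```

Applied to the SfM and photogrammetric DEMs, the mean difference would reveal the few-centimeter vertical shifts reported above.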

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 265
174 A Systematic Review of Sensory Processing Patterns of Children with Autism Spectrum Disorders

Authors: Ala’a F. Jaber, Bara’ah A. Bsharat, Noor T. Ismael

Abstract:

Background: Sensory processing is a fundamental skill needed for the successful performance of daily living activities. These skills are impaired as part of the neurodevelopmental issues among children with autism spectrum disorder (ASD). This systematic review aimed to summarize the evidence on the differences in sensory processing and motor characteristics between children with ASD and children with typical development (TD). Method: This systematic review followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. The search terms included sensory-, motor-, condition-, and child-related terms or phrases. The electronic search used the Academic Search Ultimate, CINAHL Plus with Full Text, ERIC, MEDLINE, MEDLINE Complete, Psychology and Behavioral Sciences Collection, and SocINDEX with Full Text databases. The hand search involved looking for potential studies in the references of related studies. The inclusion criteria were: studies published in English between 2009 and 2020; children aged 3-18 years with a confirmed ASD diagnosis according to the DSM-V criteria; a control group of typically developing children; outcome measures related to sensory processing and/or motor functions; and availability in full text. The review of included studies followed the Oxford Centre for Evidence-Based Medicine guidelines, the Guidelines for Critical Review Form of Quantitative Studies, and the American Occupational Therapy Association's guidelines for conducting systematic reviews. Results: Eighty-eight full-text studies on the differences between children with ASD and children with TD in sensory processing and motor characteristics were reviewed, of which eighteen articles were included in the quantitative synthesis. The results reveal that children with ASD had more extreme sensory processing patterns than children with TD, such as hyper-responsiveness and hypo-responsiveness to sensory stimuli.
Children with ASD also had limited gross and fine motor abilities and lower strength, endurance, balance, eye-hand coordination, movement velocity, cadence, and dexterity, with a higher rate of gait abnormalities than children with TD. Conclusion: This systematic review provides preliminary evidence that motor functioning should be addressed in the evaluation of and intervention for children with ASD, and that their sensory processing should be supported. Future research should investigate how performance and engagement in daily life activities are affected by sensory processing and motor skills.

Keywords: sensory processing, occupational therapy, children, motor skills

Procedia PDF Downloads 108
173 Economic Impact of Rana Plaza Collapse

Authors: Md. Omar Bin Harun Khan

Abstract:

The collapse of the infamous Rana Plaza, a multi-storeyed commercial building in Savar, near Dhaka, Bangladesh, has brought with it a plethora of positive and negative consequences. Bangladesh, a key player in the export of clothing, found itself amidst a wave of economic upheaval following this tragic incident, which claimed the lives of numerous Bangladeshis, most of whom were factory workers. This paper examines the consequences the country's Ready-Made Garments (RMG) sector is facing now, two years after the incident. The paper presents a comparison of statistical data from study reports and brings forward perspectives from all dimensions of labour, employment, and industrial relations in Bangladesh following the event. It conveys the viewpoints of donor organizations and donor countries, and the impacts of several initiatives taken by foreign organizations like the International Labour Organization and local entities like the Bangladesh Garment Manufacturers and Exporters Association (BGMEA) to reinforce compliance and stabilize the shaky foundation in which the RMG sector found itself after the collapse. The focus of the paper remains on the stance taken by suppliers in Bangladesh, with inputs from buying houses and factories, and on the reaction of foreign brands. The paper also examines the horrific physical, mental, and financial toll sustained by the victims and their families, and the consequent uproar from workers in general regarding compliance with work-safety and workers'-welfare conditions.
The purpose is to present both sides of the scenario: the economic impact that suppliers, factories, sellers, buying houses, and exporters in Bangladesh have faced as a result of the complete loss of confidence in their working standards; and the aftershock felt on the other end of the spectrum by importers and buyers, particularly foreign entities, suddenly accountable for their affiliation with non-compliant factories. The collapse of Rana Plaza received vast international attention and strong criticism. Nevertheless, the almost immediate strengthening of labour rights and the wholesale reform undertaken on all sides of the supply chain evidence a move by all local and foreign stakeholders toward greater compliance and precautionary steps for the prevention of further disasters. The tragedy that Rana Plaza embodies served as a much-needed epiphany for the soaring RMG sector of Bangladesh. Prompt cooperation by all stakeholders and regulatory bodies now shows a move toward sustainable development, which further safeguards against future irregularities and paves the way for steady economic growth.

Keywords: economy, employment standards, Rana Plaza, RMG

Procedia PDF Downloads 310
172 A Comparative Human Rights Analysis of the Securitization of Migration in the Fight against Terrorism in Europe: An Evaluation of Belgium

Authors: Louise Reyntjens

Abstract:

The last quarter of the twentieth century was characterized by the emergence of a new kind of terrorism: religiously inspired terrorism. Islam finds itself at the heart of this new wave, considering the number of international attacks committed by Islamic-inspired perpetrators. With religiously inspired terrorism as an operating framework, governments increasingly rely on immigration law to counter such terrorism. Immigration law seems particularly useful because its core task consists of keeping 'unwanted' people out, and Islamic terrorists more often than not have an immigrant background and will be subject to immigration law. As a result, immigration law becomes more and more 'securitized', a trend the European migration crisis has reinforced. The research explores the human rights consequences of immigration law's securitization in Europe. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom and Sweden. All these countries face similar social and security issues but respond very differently to them. The United Kingdom positions itself on the repressive side of the spectrum; Sweden, although it too has introduced restrictions to its immigration policy, remains on the tolerant side; Belgium and France are situated in between. This contribution evaluates the situation in Belgium. Through a series of legislative changes, the Belgian parliament (i) greatly expanded the possibilities of expelling foreign nationals for (vaguely defined) reasons of 'national security'; (ii) abolished almost all procedural protection associated with this decision; and (iii) broadened, as an extra security measure, the possibility of depriving individuals convicted of terrorism of their Belgian nationality.
Measures such as these are obviously problematic from a human rights perspective; they jeopardize the principle of legality, the presumption of innocence, the right to protection of private and family life, and the prohibition of torture. Moreover, this contribution raises questions about the suitability of immigration law as a counterterrorism instrument. Is it a legitimate step, considering the type of terrorism we face today? Or is it merely a strategic move, considering the broader maneuvering space immigration law offers and the lack of political resistance governments receive when infringing the rights of foreigners? Even more so, figures demonstrate that today's terrorist threat does not necessarily stem from outside our borders. Does immigration law then still absorb the threat, if it has ever done so completely? The study's goal is to critically assess, from a human rights perspective, the counterterrorism strategies European governments have adopted. As most governments adopt variations of the same core concepts, the study's findings will hold true even beyond the four countries addressed.

Keywords: Belgium, counterterrorism strategies, human rights, immigration law

Procedia PDF Downloads 90
171 Evaluating the Teaching and Learning Value of Tablets

Authors: Willem J. A. Louw

Abstract:

The wave of new advanced computing technology developed during the recent past has significantly changed the way we communicate, collaborate and collect information. It has created a new technology environment and paradigm in which our children and students grow up, and this impacts their learning. Research confirms that Generation Y students have a preference for learning in the new technology environment. The challenge or question is: How do we adjust our teaching and learning to make the most of these changes? The complexity of effective and efficient teaching and learning must not be underestimated, and changes must be preceded by proper objective research to prevent any haphazard developments that could do more harm than good. A blended learning approach has been used in the Forestry department for a number of years, including the use of electronic peer-assisted learning (e-pal) in a fixed-computer set-up within a learning management system environment. It was decided to extend the investigation and do some exploratory research by using a range of different tablet devices. For this purpose, learning activities or assignments were designed to cover aspects of communication, collaboration and collection of information. The Moodle learning management system was used to present normal module information, to communicate with students, and for feedback and data collection. Student feedback was collected by using an online questionnaire and informal discussions. The research project was implemented in 2013, 2014 and 2015 amongst first- and third-year students doing a three-year technical tertiary forestry qualification in commercial plantation management. In general, more than 80% of the students indicated that the device was very useful in their learning environment, while the rest indicated that the devices were not very useful.
More than ninety percent of the students acknowledged that they would like to continue using the devices for all of their modules, whilst the rest indicated that they could function efficiently without them. Results indicated that information collection (access to resources) was rated the most advantageous factor, followed by communication and collaboration. The main general advantages of using tablets, as listed by the students, were mobility (portability); 24/7 access to learning material and information of any kind on a user-friendly device in a Wi-Fi environment; fast processing speeds; saving time, effort and airtime through Skype and e-mail; and the use of various applications. Ownership of the device is a critical factor, while risk was identified as a major potential constraint. Significant differences were reported between the different types and quality of tablets. The preferred types are those with a bigger screen and those with better overall functionality and quality features. Tablets significantly support the collaboration, communication and information collection needs of the students. They do not, however, replace the need for a computer/laptop because of limited storage and computation capacity, small screen size and inefficient typing.

Keywords: tablets, teaching, blended learning, tablet quality

Procedia PDF Downloads 229
170 Liquid Food Sterilization Using Pulsed Electric Field

Authors: Tanmaya Pradhan, K. Midhun, M. Joy Thomas

Abstract:

Increasing the shelf life and improving the quality are important objectives for the success of the packaged liquid food industry. One of the methods by which this can be achieved is by deactivating the micro-organisms present in the liquid food through pasteurization. Pasteurization is done by heating, but heat treatment has serious disadvantages, such as a reduction in food quality, flavour, taste and colour, which has led to the development of alternative methods such as treatment using UV radiation, high pressure, nuclear irradiation, pulsed electric fields, etc. In recent years, the use of the pulsed electric field (PEF) for inactivation of the microbial content in food has been gaining popularity. PEF uses a very high electric field for a short time for the inactivation of microorganisms, which requires a high-voltage pulsed power source. Pulsed power sources used for PEF treatment are usually in the range of 5 kV to 50 kV. Different pulse shapes are used, such as exponentially decaying and square wave pulses. Exponentially decaying pulses are generated by high-power switches with only turn-on capability and, therefore, discharge the total energy stored in the capacitor bank. These pulses have a sudden onset and, therefore, a high rate of rise, but have a very slow decay, which yields extra heat that is ineffective in microbial inactivation. Square pulses can be produced by an incomplete discharge of a capacitor with the help of a switch having both on/off control, or by using a pulse forming network. In this work, a pulsed-power-based system is designed with the help of high-voltage capacitors and solid-state switches (IGBTs) for the inactivation of pathogenic micro-organisms in liquid food such as fruit juices. The high-voltage generator is based on the Marx generator topology, which can produce variable amplitude, frequency, and pulse width according to the requirements.
Liquid food is treated in a chamber where a pulsed electric field is produced between stainless steel electrodes using the pulsed output voltage of the supply. Preliminary bacterial inactivation tests were performed on orange juice inoculated with Escherichia coli bacteria. With the help of the developed pulsed power source and the chamber, the inoculated orange juice was PEF-treated. The voltage was varied to obtain a peak electric field of up to 15 kV/cm. For a total treatment time of 200 µs, a 30% reduction in the bacterial count was observed. The detailed results and analysis will be presented in the final paper.
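As a back-of-the-envelope check on figures like these, the peak field and cumulative exposure follow from simple relations. The sketch below assumes a uniform field between parallel-plate electrodes; the electrode gap, pulse width and pulse count are illustrative assumptions, not values reported in the abstract.

```python
def peak_field_kv_per_cm(voltage_kv: float, gap_cm: float) -> float:
    """Peak electric field between parallel electrodes (uniform-field approximation)."""
    return voltage_kv / gap_cm

def total_treatment_time_us(pulse_width_us: float, n_pulses: int) -> float:
    """Cumulative exposure: pulse width multiplied by the number of pulses."""
    return pulse_width_us * n_pulses

# Illustration: a 15 kV pulse across an assumed 1 cm gap gives 15 kV/cm,
# and 100 pulses of 2 us each would accumulate the 200 us total treatment time.
field = peak_field_kv_per_cm(15.0, 1.0)      # kV/cm
t_total = total_treatment_time_us(2.0, 100)  # us
```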

Keywords: Escherichia coli bacteria, high voltage generator, microbial inactivation, pulsed electric field, pulsed forming line, solid-state switch

Procedia PDF Downloads 153
169 Numerical Investigation of Solid Subcooling on a Low Melting Point Metal in Latent Thermal Energy Storage Systems Based on Flat Slab Configuration

Authors: Cleyton S. Stampa

Abstract:

This paper addresses the prospects of using low melting point metals (LMPMs) as phase change materials (PCMs) in latent thermal energy storage (LTES) units, through a numerical approach. This new class of PCMs is one of the most promising alternatives for LTES, because these materials present high thermal conductivity and elevated heat of fusion per unit volume. The chosen type of LTES consists of several horizontal parallel slabs filled with PCM. The heat transfer fluid (HTF) circulates through the channel formed between each two consecutive slabs in a laminar regime through forced convection. The study deals with the LTES charging process (heat storing) using pure gallium as the PCM, and it considers heat conduction in the solid phase during melting driven by natural convection in the melt. The transient heat transfer problem is analyzed in one arbitrary slab under the influence of the HTF. The mathematical model to simulate the isothermal phase change is based on a volume-averaged enthalpy method, which is successfully verified by comparing its predictions with experimental data from works available in the pertinent literature. Regarding the convective heat transfer problem in the HTF, it is assumed that the flow is thermally developing, whereas the velocity profile is already fully developed. The study aims to determine the effect of solid subcooling on the melting rate through comparisons with the melting process of a solid that starts to melt from its fusion temperature. To better understand this effect in a metallic compound such as pure gallium, the study also evaluates, under the same conditions established for the gallium, the melting process of commercial paraffin wax (an organic compound) and of calcium chloride hexahydrate (CaCl₂·6H₂O, an inorganic compound).
The present work adopts the best options established by several researchers in their parametric studies of this type of LTES, which lead to high values of thermal efficiency. Concerning the geometric aspects, the gap of the channel formed by two consecutive slabs and the thickness and length of the slab are considered. Concerning the HTF, the type of fluid, the mass flow rate, and the inlet temperature are considered.
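To illustrate the volume-averaged enthalpy method named above, here is a minimal one-dimensional, conduction-only sketch of a subcooled slab melting against a hot wall. The gallium-like property values, grid, time step and boundary conditions are assumptions for illustration only, and the natural-convection enhancement in the melt is deliberately omitted.

```python
import numpy as np

Tm, L = 302.9, 8.0e4             # melting point [K] and latent heat [J/kg] (rough gallium values)
rho, cp, k = 6.0e3, 4.0e2, 30.0  # density, specific heat, conductivity (assumed)
n, dx = 50, 1e-3                 # number of cells and cell size [m]
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha         # explicit time step below the stability limit

def temperature(H):
    """Recover temperature from volumetric enthalpy (H = 0: solid at the melting point)."""
    T = np.full_like(H, Tm)                                        # mushy cells sit at Tm
    T = np.where(H < 0, Tm + H / (rho * cp), T)                    # subcooled solid
    T = np.where(H > rho * L, Tm + (H - rho * L) / (rho * cp), T)  # superheated melt
    return T

H = np.full(n, -rho * cp * 10.0)  # solid initially subcooled by 10 K
T_wall = Tm + 20.0                # hot wall held 20 K above the melting point

for _ in range(2000):
    T = temperature(H)
    Tpad = np.concatenate(([T_wall], T, [T[-1]]))  # Dirichlet wall, adiabatic far end
    H += dt * k * (Tpad[2:] - 2 * Tpad[1:-1] + Tpad[:-2]) / dx**2

liquid_fraction = np.clip(H / (rho * L), 0.0, 1.0)  # 0 = solid, 1 = fully molten
```

Advancing the enthalpy field and recovering temperature from it is what lets a single equation handle solid, mushy and molten cells; the subcooling simply shows up as extra (negative) enthalpy that must be supplied before melting begins.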

Keywords: flat slab, heat storing, pure metal, solid subcooling

Procedia PDF Downloads 117
168 Gait Analysis in Total Knee Arthroplasty

Authors: Neeraj Vij, Christian Leber, Kenneth Schmidt

Abstract:

Introduction: Total knee arthroplasty is a common procedure. It is well known that the biomechanics of the knee do not fully return to their normal state. Motion analysis has been used to study the biomechanics of the knee after total knee arthroplasty. The purpose of this scoping review is to summarize the current use of gait analysis in total knee arthroplasty and to identify the preoperative motion analysis parameters for which a systematic review aimed at determining reliability and validity may be warranted. Materials and Methods: This IRB-exempt scoping review strictly followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Five search engines were searched for a total of 279 articles. Articles underwent a title and abstract screening process followed by full-text screening. Included articles were placed in the following sections: gait analysis as a research tool for operative decisions, other research applications for motion analysis in total knee arthroplasty, gait analysis as a tool for predicting radiologic outcomes, and gait analysis as a tool for predicting clinical outcomes. Results: Eleven articles studied gait analysis as a research tool for operative decisions. Motion analysis is currently used to study surgical approaches, surgical techniques, and implant choice. Five articles studied other research applications for motion analysis in total knee arthroplasty. These include studying the role of unicompartmental knee arthroplasty and novel physical therapy protocols aimed at optimizing post-operative care. Two articles studied motion analysis as a tool for predicting radiographic outcomes. Preoperative gait analysis has identified parameters that can predict postoperative tibial component migration. Fifteen articles studied motion analysis in conjunction with clinical scores.
Conclusions: There is a broad range of applications within the research domain of total knee arthroplasty, and the potential application is likely larger. However, the current literature is limited by vague definitions of ‘gait analysis’ or ‘motion analysis’ and a limited number of articles with preoperative and postoperative functional and clinical measures. Knee adduction moment, knee adduction impulse, total knee range of motion, varus angle, cadence, stride length, and velocity have the potential for integration into composite clinical scores. A systematic review aimed at determining the validity, reliability, sensitivity, and specificity of these variables is warranted.

Keywords: motion analysis, joint replacement, patient-reported outcomes, knee surgery

Procedia PDF Downloads 71
167 Investigation of Unusually High Ultrasonic Signal Attenuation in Water Observed in Various Combinations of Pairs of Lead Zirconate Titanate Pb(ZrxTi1-x)O3 (PZT) Piezoelectric Ceramics Positioned Adjacent to One Another Separated by an Intermediate Gap

Authors: S. M. Mabandla, P. Loveday, C. Gomes, D. T. Maiga, T. T. Phadi

Abstract:

Lead zirconate titanate (PZT) piezoelectric ceramics are widely used in ultrasonic applications due to their ability to effectively convert electrical energy into mechanical vibrations and vice versa. This paper presents a study on the behaviour of various combinations of pairs of PZT piezoelectric ceramic materials positioned adjacent to each other with an intermediate gap and submerged in water, where one piezoelectric ceramic is excited by a cyclic electric field of constant frequency and amplitude. The transmitted ultrasonic sound propagates through the medium and is received by the PZT ceramic at the other end; the ultrasonic signal amplitude experiences attenuation during propagation due to acoustic impedance. The investigation focuses on understanding the causes of the extremely high amplitude attenuation that has been observed in various combinations of piezoelectric ceramic pairs submerged in water and arranged in the manner stipulated earlier. It examines the physical, electrical, and acoustic properties and behaviour of various combinations of pairs of these piezoelectric ceramics and relates them to the observed significant signal attenuation. The experimental setup involves exciting one piezoelectric ceramic at one end with a burst square cyclic electric field signal of constant frequency, which generates a burst of ultrasonic sound that propagates through the water medium to the adjacent piezoelectric ceramic at the other end. The mechanical vibrations of each PZT piezoelectric ceramic are measured using a double-beam laser Doppler vibrometer to capture the incident ultrasonic waves generated at one end and the ultrasonic waves received at the other end. The measured ultrasonic signals are continuously compared to the applied cyclic electric field at both ends.
The impedance matching networks are continuously tuned at both ends to eliminate electromechanical impedance mismatch and thereby improve ultrasonic transmission and reception. The study delves into various physical, electrical, and acoustic properties of the PZT piezoelectric ceramics, such as the electromechanical coupling factor, acoustic coupling, and elasticity, among others. These properties are analyzed to identify potential factors contributing to the unusually high acoustic impedance in the water medium between the ceramics. Additionally, impedance-matching networks are investigated at both ends to offset the high signal attenuation and improve overall system performance. The findings will be reported in this paper.
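The role of acoustic impedance mismatch flagged above can be quantified with the standard normal-incidence intensity transmission coefficient; the impedance figures below are typical textbook values for PZT and water, used here as assumptions rather than measured properties of the ceramics in this study.

```python
def intensity_transmission(z1, z2):
    """Fraction of acoustic intensity transmitted across a plane interface
    between media of specific acoustic impedance z1 and z2, at normal incidence."""
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

# Typical values (assumed): PZT ~34 MRayl, water ~1.5 MRayl
t_pzt_water = intensity_transmission(34e6, 1.5e6)
```

On this rough estimate only about 16% of the intensity (roughly -8 dB) crosses each ceramic-water interface, so a transmit-and-receive path through two such interfaces already loses a large fraction of the signal before any loss in the water itself.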

Keywords: acoustic impedance, impedance mismatch, piezoelectric ceramics, ultrasonic sound

Procedia PDF Downloads 58
166 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser

Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett

Abstract:

Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the Hydrogen Economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence the overall efficiency. This concerns in particular the two-phase flow through the membrane, gas diffusion layers (GDL) and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as a commercial hydrogen production technology, new analytical approaches have to be found. Acoustic emission (AE) offers the possibility of analysing the processes within a PEMWE in a non-destructive, fast and cheap in-situ way. This work describes the generation and analysis of AE data from a PEM water electrolyser for, to the best of our knowledge, the first time in the literature. Different experiments are carried out. Each experiment is designed so that only specific physical processes occur and AE related solely to one process can be measured. Therefore, a range of experimental conditions is used to induce different flow regimes within the flow channels and GDL. The resulting AE data is first separated into different events, which begin when the signal exceeds the noise threshold. Each acoustic event consists of a number of consecutive peaks and ends when the wave diminishes below the noise threshold. For each of these acoustic events the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, number of peaks before the maximum, average peak intensity, and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values for all criteria. Principal Component Analysis is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix. This can be used as an easy way of determining which criteria convey the most information about the acoustic data.
In the following, the data is ordered in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of this space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. Using the AE data produced beforehand allows a self-learning algorithm to be trained and an analytical tool to be developed to diagnose different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
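The event-to-vector step and the PCA ordering described above can be sketched as follows. The feature list mirrors the key attributes named in the abstract, while the synthetic events, the threshold value and the random seed are placeholders standing in for real AE recordings.

```python
import numpy as np

def event_features(signal, threshold):
    """Reduce one AE event (array of rectified amplitudes) to a feature vector."""
    above = signal[signal > threshold]
    i_max = int(np.argmax(signal))
    return np.array([
        signal.max(),                        # maximum peak amplitude
        len(signal),                         # duration (in samples)
        len(above),                          # samples above threshold (proxy for peak count)
        (signal[:i_max] > threshold).sum(),  # above-threshold samples before the maximum
        above.mean() if len(above) else 0.0, # average above-threshold intensity
        i_max,                               # time until the maximum is reached
    ])

rng = np.random.default_rng(0)
events = [np.abs(rng.normal(0, 1, rng.integers(50, 200))) for _ in range(40)]
X = np.array([event_features(e, threshold=1.0) for e in events])

# Normalize, then order the criteria by the eigenvalues of their covariance matrix
Xn = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xn.T))
order = np.argsort(eigvals)[::-1]      # most informative components first
scores = Xn @ eigvecs[:, order[:2]]    # 2-D projection for inspecting acoustic patterns
```

Plotting `scores` and colouring the points by experiment is one way to look for the single-process regions of the low-dimensional space mentioned in the abstract.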

Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser

Procedia PDF Downloads 133
165 Chemical Technology Approach for Obtaining Carbon Structures Containing Reinforced Ceramic Materials Based on Alumina

Authors: T. Kuchukhidze, N. Jalagonia, T. Archuadze, G. Bokuchava

Abstract:

The growing scientific and technological progress of modern civilization makes it urgent to produce construction materials that can work successfully under conditions of high temperature, radiation, pressure, speed, and chemically aggressive environments. Very few types of materials can withstand such extreme conditions, and among them, ceramic materials are in first place. Corundum ceramics is the most useful material for the creation of structural components and products of various purposes, owing to its low cost, the easy accessibility of its raw materials and a good combination of physical-chemical properties. However, ceramic composite materials have one disadvantage: they are less plastic and have lower toughness. In order to increase plasticity, the ceramics are reinforced with various dopants, which reduces the growth of cracks. It has been shown that adding even a small amount of carbon fibers and carbon nanotubes (CNT) as reinforcing material significantly improves the mechanical properties of the products while keeping the advantages of alundum ceramics. Graphene in a composite material acts in the same way as inorganic dopants (MgO, ZrO₂, SiC and others) and performs the role of an aluminum oxide inhibitor, as it creates a shell that makes it possible to reduce the sintering temperature; at the same time, it acts as a damper, because scattering of a shock wave takes place on the carbon structures. The application of different structural modifications of carbon (graphene, nanotubes and others) as reinforcing material makes it possible to create multi-purpose, highly requested composite materials based on alundum ceramics. The present work offers a simplified technology for obtaining aluminum oxide ceramics reinforced with carbon nanostructures, in which chemical modification with doping carbon nanostructures is implemented during the synthesis of the final powdery composite, alumina.
In the charge, the doping carbon nanostructures are connected to the matrix substance by C-O-Al bonds, which provides their homogeneous spatial distribution. In ceramics obtained by consolidating such powders, the carbon fragments are equally distributed in the entire aluminum oxide matrix, which increases bending strength and crack resistance. The proposed way of preparing the charge simplifies the technological process, decreases energy consumption and synthesis duration, and therefore requires less financial expense. In the implementation of this work, modern instrumental methods were used: electron and optical microscopy, X-ray structural and granulometric analysis, and UV, IR, and Raman spectroscopy.

Keywords: ceramic materials, α-Al₂O₃, carbon nanostructures, composites, characterization, hot-pressing

Procedia PDF Downloads 100
164 Geosynthetic Containment Systems for Coastal Protection: An Indian Perspective

Authors: Tom Elias, Kiran G. Shirlal

Abstract:

Coastal erosion is one of the major issues faced by maritime countries globally. More than 1200 km of the Indian coastline is marked as eroding. There have been numerous attempts to impede the erosion rate and to attain equilibrium beach profiles. The high cost and unavailability of natural rocks have forced coastal engineers to find alternatives to conventional hard options like seawalls and groynes. Geosynthetic containment systems, which emerged in the mid-20th century, have proved promising for coastal protection in countries like Australia, Germany and the United States. The present study aims at reviewing the Indian timeline of protection works that use geosynthetic containment systems. Indian exploration of geosynthetic containment systems dates back to the early 2000s. Generally, protection structures use geosynthetics in the form of geotubes, geocontainers and geobags, with geotubes being the most widely used, in the form of submerged reefs, seawalls, groynes and breakwaters. Sand and dredged waste are used to fill these containment systems at a calculated sand fill ratio. Reviewing the prominent protection works constructed on the east and west coasts of India provides an insight into the benefits and the difficulties faced during practical installation. Initially, geosynthetic structures were considered a temporary protection method prior to the construction of some other hard structure. Later, the Dahanu, Hamala and Pentha experiences helped establish geotubes as an alternative to conventional structures. Nearshore geotube reefs intended to attain an equilibrium beach served their purpose in Hamala and Dahanu, Maharashtra, while the reef constructed at Candolim, Goa underwent serious damage due to toe scour. In-situ filling by pumping of sand slurry, as in the case of the Shankarpur seawall, West Bengal, remains a major concern.
Geosynthetic systems supplemented by gabions and rock armour improve the wave dissipation, stability and reflection characteristics, as demonstrated on the Pentha coast, Odisha, and at Hazira, Gujarat and Uppada, Andhra Pradesh. Setting aside improper design and deliberate destruction by vandals, geosynthetic containment systems offer a cost-effective alternative to conventional coastal protection methods in India. Additionally, geosynthetics support marine growth on their surface, which enhances their appeal as an eco-friendly material and encourages their usage.

Keywords: coastal protection, geotubes, geobags, geocontainers

Procedia PDF Downloads 131
163 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of coping with GNSS outliers/outages. The tightly coupled GNSS/INS navigation filter mixes the GNSS pseudo-range and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches called Cubature and High-Degree Cubature Kalman Filtering methods. On the basis of previous results on state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature rule based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution for more accurate estimation with less computational complexity compared with the Gauss-Hermite Quadrature Kalman Filter (GHQKF), which is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, a dynamics EKF is used with transition matrix factorization together with GNSS block processing, which is well described in the paper and assumes the intermediate frequency (IF) is available from correlator samples at a rate of 500 Hz in the presented approach.
GNSS (GPS+GLONASS) measurements are assumed to be available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high-order CKF, based on spherical-radial cubature rules developed at the fifth degree in this work. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; results of state estimation are then observed and discussed for different initialization parameters. The results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach based on the High-Degree Cubature Kalman Filter is applied.
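For reference, the third-degree spherical-radial cubature rule at the heart of the CKF uses 2n equally weighted points placed at ±√n along the columns of a square root of the covariance. The sketch below shows the point generation and the prediction of a mean through a toy nonlinear function; the toy dynamics is an assumption for demonstration, not the paper's GNSS/INS model.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points: 2n points at +/- sqrt(n)
    along the columns of a square root of the covariance."""
    n = len(mean)
    S = np.linalg.cholesky(cov)                           # covariance square root
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit cubature directions
    return mean[:, None] + S @ xi                         # shape (n, 2n)

def cubature_mean(f, mean, cov):
    """Predicted mean: propagate the points through f and average with equal weights."""
    pts = cubature_points(mean, cov)
    return np.mean([f(pts[:, i]) for i in range(pts.shape[1])], axis=0)

m = np.array([1.0, 0.5])
P = np.diag([0.1, 0.2])
pred = cubature_mean(lambda x: np.array([x[0] + x[1], np.sin(x[0])]), m, P)
```

Because the rule is exact for polynomials up to third degree, the first component of `pred` (a linear function of the state) is recovered exactly; the fifth-degree rules mentioned above extend this exactness at the cost of more points.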

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 262
162 Estimation of Rock Strength from Diamond Drilling

Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi

Abstract:

The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material rock strength. Common laboratory tests such as the rod mill, ball mill, and uniaxial compressive strength tests share shortcomings such as time, sample preparation, bias in plug selection, cost, repeatability, and the sample amount needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (polycrystalline diamond compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, i.e., the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which is found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure, which is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data.
The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate rock strength from field drilling data, considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study based on data recorded while drilling an exploration well in Australia.
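As a rough illustration of the specific-energy idea (the classical Teale mechanical specific energy, not the Franca et al. (2015) bit-rock interface model itself), the thrust and rotary work per unit volume of rock can be combined as below; all input values, including the core-head dimensions, are assumptions for demonstration.

```python
import math

def mechanical_specific_energy(wob_n, torque_nm, rpm, rop_m_per_h, bit_area_m2):
    """Teale-style specific energy [J/m^3 = Pa]: thrust term plus rotary term."""
    rop = rop_m_per_h / 3600.0          # rate of penetration [m/s]
    omega = 2.0 * math.pi * rpm / 60.0  # angular velocity [rad/s]
    return wob_n / bit_area_m2 + omega * torque_nm / (bit_area_m2 * rop)

# Hypothetical core head: 60 mm OD, 40 mm ID annular cutting face
area = math.pi / 4.0 * (0.060**2 - 0.040**2)
mse = mechanical_specific_energy(wob_n=5e3, torque_nm=10.0, rpm=600,
                                 rop_m_per_h=10.0, bit_area_m2=area)
```

With these assumed inputs the rotary term dominates and the result lands on the order of 10⁸ Pa, i.e. the same order as the uniaxial compressive strength of a hard rock, which is the intuition behind correlating drilling energy with rock strength.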

Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength

Procedia PDF Downloads 115
161 Development of a Mobile APP for Establishing Thermal Sensation Maps using Citizen Participation

Authors: Jeong-Min Son, Jeong-Hee Eum, Jin-Kyu Min, Uk-Je Sung, Ju-Eun Kim

Abstract:

Various environmental problems have become severe due to climate change, and in cities, where population and development are concentrated, urban thermal environment problems such as heat waves and tropical nights are worsening in particular. Accordingly, the Korean government provides basic data on the urban thermal environment to support each local government in effectively establishing policies to cope with heat waves. However, the basic thermal environment data provided by the government has limitations for establishing regional thermal adaptation plans at the level of cities, counties, and districts. In addition, the urban thermal environment perceived by people differs across regions and spaces. Therefore, it is necessary to prepare practical measures that can be used to establish regionally based policies for heat wave adaptation by identifying people’s heat perception across the entire city. This study aims to develop a mobile phone application (APP) to gather people’s thermal sensation information and create Korea’s first thermal map based on this information. In addition, through this APP, citizens can directly propose thermal adaptation policies, and urban planners and policymakers can take up citizens’ opinions, so this study provides a tool to solve local thermal environment problems. To achieve this purpose, the composition and contents of the APP were first discussed by examining various existing apps and cases of citizen participation and collection of heat information. In addition, factors affecting human thermal comfort, such as spatial, meteorological, and demographic factors, were investigated to construct the APP system. Based on these results, a basic version of the APP was developed. Second, the living lab methodology was adopted to gather people’s heat perception using the developed APP and to obtain overall evaluation and feedback on the APP.
The people participating in the living lab were residents of Daegu Metropolitan City, which is located in South Korea and records high temperatures annually. The user interface was improved through the living lab to make the APP easier to use, and the thermal map was modified. This study expects to establish high-resolution thermal maps for effective policies and measures and to solve local thermal environmental problems using the APP. The collected information can be used to evaluate the spatial, meteorological, and demographic characteristics that affect the heat perceived by citizens. In addition, it is expected that the research can be expanded by gathering thermal information perceived by citizens of foreign cities as well as other cities in South Korea through the APP developed in this study.

Keywords: mobile application, living lab, thermal map, climate change adaptation

Procedia PDF Downloads 66
160 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving-model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to thicken the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is constrained by the moving-model rig: small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements remain robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, high-speed particle image velocimetry (HS-PIV) measurements on an ICE3 train model were carried out in the moving-model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground, at a height corresponding to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased by larger roughness elements, especially when these are applied at a height close to the measuring plane. The roughness elements also cause strong fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factor rapidly approaches a constant value. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
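The integral boundary-layer quantities named above can be computed directly from a measured wall-normal velocity profile. A minimal sketch in Python (the 1/7-power-law profile, free-stream velocity, and grid below are illustrative assumptions, not the experiment's data):

```python
import numpy as np

try:
    from numpy import trapezoid as trapz  # NumPy >= 2.0
except ImportError:
    from numpy import trapz               # older NumPy releases

def boundary_layer_parameters(y, u, u_inf=None):
    """Integral boundary-layer parameters from a wall-normal velocity profile.

    y : wall-normal coordinates (m), ascending from the wall
    u : streamwise velocity at each y (m/s)
    """
    if u_inf is None:
        u_inf = u[-1]                     # free stream taken from outermost point
    ratio = u / u_inf
    # delta_99: height at which u first reaches 99% of the free-stream velocity
    delta99 = np.interp(0.99, ratio, y)
    # displacement thickness: integral of (1 - u/U) dy
    delta_star = trapz(1.0 - ratio, y)
    # momentum thickness: integral of (u/U) * (1 - u/U) dy
    theta = trapz(ratio * (1.0 - ratio), y)
    H = delta_star / theta                # form (shape) factor
    return delta99, delta_star, theta, H

# check on a synthetic 1/7-power-law profile (delta = 0.08 m, U = 10 m/s),
# for which analytically delta* = delta/8, theta = 7*delta/72, H = 9/7
y = np.linspace(0.0, 0.1, 500)
u = 10.0 * np.clip(y / 0.08, 0.0, 1.0) ** (1.0 / 7.0)
d99, d_star, theta, H = boundary_layer_parameters(y, u)
```

A turbulent equilibrium layer has H near 1.3, while a value drifting back toward a constant, as reported above, is exactly the signature of the boundary layer recovering behind the roughness elements.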

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 280
159 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure

Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu

Abstract:

In considering the dynamic characteristics of a structure, the natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the choice of initial damping model influences the response; in the nonlinear region, the combination of initial damping model and hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the initial damping model and hysteretic model on the dynamic characteristics of a structure. For the initial damping model, initial-stiffness-proportional, tangent-stiffness-proportional, and Rayleigh-type damping were used. For the hysteretic model, the Takeda model and a normal trilinear model were used. As a study method, dynamic analysis was performed using a base-fixed lumped-mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using an ARX model, and the 1st and 2nd natural frequencies and the 1st damping ratio were evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. For the building model, an RC building with a 30×30 m floor plan on each story was assumed. The story height was 3 m and the total height was 18 m. The unit weight of each floor was 1.0 t/m². The natural period of the building was set to 0.36 s, and the initial stiffness of each story was calculated by assuming the 1st mode shape to be an inverted triangle. First, we investigated how the dynamic characteristics differ with the initial damping model. With increasing maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased and the 1st damping ratio increased.
In the natural frequency, the difference due to the initial damping model was small, but in the damping ratio a significant difference was observed (initial-stiffness-proportional ≈ Rayleigh type > tangent-stiffness-proportional). The acceleration and displacement of the earthquake response were largest with tangent-stiffness-proportional damping. In the range where the acceleration response increased, the damping ratio was constant; in the range where the acceleration response was constant, the damping ratio increased. Next, we investigated how the dynamic characteristics differ with the hysteretic model. With increasing maximum acceleration of the input earthquake motions, the natural frequency decreased with the Takeda model, but with the normal trilinear model the natural frequency did not change. The damping ratio with the Takeda model was higher than with the normal trilinear model, although the damping ratio increased with both models. In conclusion, among the initial damping models, the tangent-stiffness-proportional setting was rated most appropriate, and among the hysteretic models, the Takeda model was judged more appropriate than the normal trilinear model in the nonlinear region. Our results provide a useful indicator for dynamic design.
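For the Rayleigh-type initial damping compared above, the mass- and stiffness-proportional coefficients follow from fixing the damping ratio at two frequencies. A minimal sketch (the first anchor frequency comes from the building's 0.36 s period; the second anchor frequency and the 2% target ratio are illustrative assumptions, not values from the paper):

```python
import numpy as np

def rayleigh_coefficients(f1, f2, zeta):
    """Coefficients of Rayleigh damping C = a*M + b*K that give damping
    ratio `zeta` at the two anchor frequencies f1 and f2 (Hz)."""
    w1, w2 = 2.0 * np.pi * f1, 2.0 * np.pi * f2
    a = 2.0 * zeta * w1 * w2 / (w1 + w2)   # mass-proportional term
    b = 2.0 * zeta / (w1 + w2)             # stiffness-proportional term
    return a, b

def modal_damping(f, a, b):
    """Resulting damping ratio at an arbitrary frequency f (Hz)."""
    w = 2.0 * np.pi * f
    return a / (2.0 * w) + b * w / 2.0

# anchor at the 1st-mode frequency (1 / 0.36 s ~ 2.78 Hz) and an assumed
# 2nd-mode frequency of 8 Hz, targeting 2% damping (hypothetical values)
a, b = rayleigh_coefficients(2.78, 8.0, 0.02)
```

Between the two anchor frequencies the resulting damping ratio dips below the target, and above them it grows linearly, which is one reason the choice of initial damping model affects the nonlinear response so strongly.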

Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency

Procedia PDF Downloads 158
158 Modeling of Tsunami Propagation and Impact on West Vancouver Island, Canada

Authors: S. Chowdhury, A. Corlett

Abstract:

Large tsunamis strike the British Columbia coast every few hundred years. The Cascadia Subduction Zone, which extends along the Pacific coast from Vancouver Island to Northern California, is one of the most seismically active regions in Canada. Significant earthquakes have occurred in this region, including the 1700 Cascadia earthquake with an estimated magnitude of 9.2. Based on geological records, experts have predicted that a 'great earthquake' of similar magnitude may happen in this region at any time. Such an earthquake is expected to generate a large tsunami that could impact the coastal communities on Vancouver Island. Since many of these communities are in remote locations, they are particularly vulnerable, as post-earthquake relief efforts would be hampered by damage to critical road infrastructure. To assess the coastal vulnerability of these communities, a hydrodynamic model has been developed using MIKE 21 software. We have considered a 500-year probabilistic earthquake design criterion, including subsidence, in this model. The bathymetry information was collected from the Canadian Hydrographic Service (CHS) and the National Oceanic and Atmospheric Administration (NOAA). An aerial survey of the communities was conducted using a Cessna 172 aircraft, and the information was converted into a topographic digital elevation map. Both datasets were incorporated into the model, whose domain size was about 1000 km x 1300 km. The model was calibrated with the tsunami that occurred off the west coast of Moresby Island on October 28, 2012. The water levels from the model were compared with two tide gauge stations close to Vancouver Island, and the model output shows satisfactory agreement. For this study, the design water level was taken as the high water level plus the projected sea level rise to the year 2100.
Hourly wind speeds from eight directions were collected from different wind stations, and a 200-year return period wind speed was used in the model for storm events. The regional model was set up for a 12-hour simulation period, and one simulation takes more than 16 hours to complete on a dual Xeon E7 CPU computer with a K80 GPU. The boundary information for the local model was generated from the regional model. The local model was developed using a high-resolution mesh to estimate the coastal flooding in the communities. This study found that many communities will be affected by a Cascadia tsunami, and inundation maps were developed for these communities. The infrastructure inside the coastal inundation area was identified. Coastal vulnerability planning and resilient design solutions will be implemented to significantly reduce the risk.

Keywords: tsunami, coastal flooding, coastal vulnerability, earthquake, Vancouver, wave propagation

Procedia PDF Downloads 111
157 The MHz Frequency Range EM Induction Device Development and Experimental Study for Low Conductive Objects Detection

Authors: D. Kakulia, L. Shoshiashvili, G. Sapharishvili

Abstract:

The results of this study relate to plastic mine detection research using electromagnetic induction, the development of appropriate equipment, and the evaluation of expected results. Electromagnetic induction sensing is used effectively to detect metal objects in the soil and to discriminate unexploded ordnance. Metal objects interact well with a low-frequency alternating magnetic field, and their electromagnetic response can be detected in the low-frequency range even when they are buried in the ground. Detecting plastic objects such as plastic mines by electromagnetic induction, however, is difficult: the interaction of non-conducting or low-conductivity objects with a low-frequency alternating magnetic field is very weak. In the high-frequency range, where wave processes already take place, the interaction increases, but so do interactions with other, more distant objects. A complex interference picture forms, and extracting useful information becomes difficult. Sensing by electromagnetic induction in the intermediate MHz frequency range is therefore the subject of this research. The concept of detecting plastic mines in this range can be based on studying the electromagnetic response of a non-conductive cavity in a low-conductivity environment, or on detecting the small metal components of plastic mines, taking their construction into account. A detector node based on the Analog Devices AD8302 amplitude and phase detector has been developed for the experimental studies. The node has two inputs: one receives a sinusoidal signal from the generator, to which a transmitting coil is also connected, and the receiver coil is attached to the second input. An additional circuit provides the option of amplifying the signal from the receiver coil by 20 dB. The node has two outputs.
The voltages at the outputs reflect the amplitude ratio and the phase difference of the input harmonic signals. Experimental measurements were performed for different positions of the transmitter and receiver coils over the frequency range 1-20 MHz, using a Tektronix AFG3052C arbitrary/function generator and an eight-channel high-resolution PicoScope 4824 oscilloscope. Experimental measurements were also performed with a low-conductivity test object. The measurement results and their comparative analysis show the capabilities of this simple detector node and the prospects for its further development in this direction. The experimental results are compared with and analysed against the results of corresponding computer modeling based on the method of auxiliary sources (MAS). The measurements are controlled and processed in the MATLAB environment. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation (SRNSF) (grant number NFR 17_523).
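For reference, the two AD8302 output voltages can be converted back to a gain ratio and a phase difference using the nominal scale factors given in the device datasheet (30 mV/dB and 10 mV/degree). A minimal sketch, ignoring part-to-part variation and any per-node calibration:

```python
def ad8302_decode(v_mag, v_phs):
    """Convert AD8302 output voltages (V) to gain (dB) and phase difference (deg).

    Nominal transfer characteristics from the datasheet:
    VMAG: 30 mV/dB, centered at 0.9 V for a 0 dB amplitude ratio;
    VPHS: 10 mV/deg, 1.8 V at 0 deg falling to ~0 V at 180 deg
    (the device reports only the magnitude of the phase difference,
    so the sign is ambiguous).
    """
    gain_db = (v_mag - 0.9) / 0.030
    phase_deg = (1.8 - v_phs) / 0.010
    return gain_db, phase_deg
```

With both outputs at the 0.9 V center, the decoded values are a 0 dB amplitude ratio and a 90 degree phase difference, which is the natural operating point for a comparison-type measurement.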

Keywords: EM induction sensing, detector, plastic mines, remote sensing

Procedia PDF Downloads 128
156 Post-Soviet LULC Analysis of Tbilisi, Batumi and Kutaisi Using of Remote Sensing and Geo Information System

Authors: Lela Gadrani, Mariam Tsitsagi

Abstract:

Humans are part of the urban landscape and responsible for it. Urbanization is the longest phase of a city's development, and no environment undergoes such anthropogenic impact as the area of large cities. The post-Soviet period is very interesting in terms of scientific research: to the best of our knowledge, the changes that have occurred in these cities since the collapse of the Soviet Union have not yet been analyzed. In this context, the aim of this paper is to analyze the changes in land use in three large cities of Georgia: Tbilisi, the capital; Batumi, a port city; and Kutaisi, a former industrial center. The data used during the research are conventionally divided into satellite imagery and supporting materials. Large-scale topographic maps (1:10,000) of all three cities were analyzed, along with the Tbilisi general plans (1896, 1924) and historical maps of Tbilisi and Kutaisi. The main emphasis was placed on the classification of Landsat images: we classified the LULC (land use/land cover) of all three cities in images taken in 1987 and 2016, using supervised and unsupervised methods. All procedures were performed in ArcGIS 10.3.1 and ENVI 5.0. In each classification we distinguished the following classes: built-up area, water bodies, agricultural land, green cover, and bare soil, and calculated the areas occupied by each. To check the validity of the results, we additionally used higher-resolution CORONA and Sentinel images. Ultimately we identified the land use changes that took place in these cities in the post-Soviet period. According to the results, a large wave of changes affected Tbilisi and Batumi, though in different periods.
It turned out that in the case of Tbilisi, the built-up area increased by 13.9% compared to the 1987 data, at the expense of agricultural land and green cover: the area of agricultural land decreased by 4.97% and the green cover by 5.67%. Notably, Batumi has clearly overtaken the country's capital in terms of development. Even to the unaided eye, Batumi differs from the other regions of Georgia; it is, in effect, the unofficial summer capital of the country. Undoubtedly, Batumi's development is very important in both economic and social terms. However, there is a danger that, under such uneven urban development, we will eventually get one developed center, Batumi, surrounded by multiple underdeveloped peripheries. Analysis of land use change is of utmost importance not only for quantitative evaluation of the changes already realized, but also for future modeling and prognosis of urban development. Raster data containing the land use classes are an integral part of a city's prognostic models.
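The per-class area percentages behind results like these come from counting pixels in the classified rasters for each epoch and normalizing by the study area. A minimal sketch (the class codes and the tiny toy rasters are illustrative assumptions, not the study's data):

```python
import numpy as np

# class codes used in the classification (assumed encoding, for illustration)
CLASSES = {1: "built-up", 2: "water", 3: "agricultural", 4: "green cover", 5: "bare soil"}

def class_areas(raster, pixel_area_m2):
    """Area per LULC class (km^2) from a classified raster."""
    codes, counts = np.unique(raster, return_counts=True)
    return {CLASSES[c]: n * pixel_area_m2 / 1e6
            for c, n in zip(codes, counts) if c in CLASSES}

def percent_change(areas_old, areas_new, total_km2):
    """Change of each class between epochs, as a share of the total study area."""
    return {k: 100.0 * (areas_new.get(k, 0.0) - areas_old.get(k, 0.0)) / total_km2
            for k in set(areas_old) | set(areas_new)}

# toy 1987 vs 2016 rasters of 30 m Landsat pixels (900 m^2 each)
r1987 = np.array([[3, 3], [4, 1]])
r2016 = np.array([[1, 3], [1, 1]])
a87 = class_areas(r1987, 900.0)
a16 = class_areas(r2016, 900.0)
change = percent_change(a87, a16, total_km2=4 * 900.0 / 1e6)
```

In the toy example the built-up class gains at the expense of agricultural land and green cover, the same pattern the study reports for Tbilisi.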

Keywords: analysis, geo information system, remote sensing, LULC

Procedia PDF Downloads 434
155 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition

Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed

Abstract:

The rapid development experienced by India requires a huge amount of energy, yet actual supply capacity additions have been consistently lower than the targets set by the government; according to the World Bank, 40% of residences are without electricity. In the 12th Five-Year Plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW solar, and 2.1 GW small hydro, with the rest from biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. According to IEC 61400-1, India belongs to Class IV wind conditions, so it is not possible to set up large-scale wind turbines everywhere. The best choice is therefore a small-scale wind turbine at lower hub height that still yields good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine is designed. Various airfoil data are reviewed for the selection of the blade profile; an airfoil suited to low-wind conditions, i.e. low Reynolds numbers, is selected based on the lift and drag coefficients and the angle of attack. For the design of the rotor blade, standard blade element momentum (BEM) theory is implemented. The performance of the blade is estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique. Rotor performance is estimated for the designed blade specifically under low-wind conditions, and the power production of the rotor is determined at different wind speeds for a given blade pitch angle. At a pitch of 15° and a wind speed of 5 m/s, the rotor achieves a good cut-in speed of 2 m/s and produces around 350 W.
The tip speed ratio of the blade is taken as 6.5, for which the coefficient of performance of the rotor is calculated as 0.35, an acceptable value for a small-scale wind turbine. The simple load model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 are calculated and checked against the partial safety factors for the wind turbine blade.
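The iterative determination of the axial and angular induction factors in BEM theory can be sketched as a fixed-point iteration for a single blade element. A minimal sketch (simplified: no tip-loss or high-induction corrections, and the input values below are illustrative, not the rotor designed in the paper):

```python
import numpy as np

def bem_element(tsr_local, sigma, cl, cd, tol=1e-8, max_iter=200):
    """Fixed-point iteration for the axial (a) and angular (ap) induction
    factors of one blade element.

    tsr_local : local speed ratio, omega * r / V
    sigma     : local solidity, B * c / (2 * pi * r)
    cl, cd    : lift and drag coefficients at the element's angle of attack
    """
    a, ap = 0.0, 0.0
    for _ in range(max_iter):
        phi = np.arctan2(1.0 - a, tsr_local * (1.0 + ap))   # inflow angle
        cn = cl * np.cos(phi) + cd * np.sin(phi)            # normal force coeff.
        ct = cl * np.sin(phi) - cd * np.cos(phi)            # tangential force coeff.
        a_new = 1.0 / (4.0 * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
        ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
        converged = abs(a_new - a) < tol and abs(ap_new - ap) < tol
        a, ap = a_new, ap_new
        if converged:
            break
    return a, ap, phi

# illustrative inputs only (hypothetical local conditions, not the paper's rotor)
a, ap, phi = bem_element(tsr_local=4.0, sigma=0.05, cl=1.0, cd=0.01)
```

The converged axial induction factor typically settles near the Betz-optimal region around a = 1/3; integrating the resulting element forces along the span yields the power and the coefficient of performance quoted above.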

Keywords: annual energy production, Blade Element Momentum Theory, low wind Conditions, selection of airfoil

Procedia PDF Downloads 318
154 Nano-MFC (Nano Microbial Fuel Cell): Utilization of Carbon Nano Tube to Increase Efficiency of Microbial Fuel Cell Power as an Effective, Efficient and Environmentally Friendly Alternative Energy Sources

Authors: Annisa Ulfah Pristya, Andi Setiawan

Abstract:

Electricity is a primary requirement of today's world, including Indonesia, because it is a flexible form of energy to use. Fossil fuels are the major energy source used in power plants. Unfortunately, this conversion process depletes fossil fuel reserves and increases the amount of CO2 in the atmosphere, harming health, depleting the ozone layer, and contributing to the greenhouse effect. Solutions that have been applied include solar cells, ocean wave power, wind, hydropower, and so forth. However, low efficiency and complicated maintenance mean that most people and industry in Indonesia still use fossil fuels. The fuel cell was developed in response to this. Fuel cells are an electrochemical technology that continuously converts the chemical energy of a fuel and an oxidizer into electrical energy, with an efficiency of 40-60%, considerably higher than that of previous sources of electrical energy. However, fuel cells still have weaknesses, notably the use of an expensive platinum catalyst, which is scarce and not environmentally friendly. A source of electrical energy that is both continuous and environmentally friendly is therefore required. On the other hand, Indonesia is a country rich in marine sediments whose organic content is never exhausted. This accumulating organic matter can serve as an alternative energy source for a further development of the fuel cell: the microbial fuel cell. A microbial fuel cell (MFC) is a device that uses bacteria to generate electricity from organic and inorganic compounds. An MFC is composed of the same elements as a conventional fuel cell: an anode, a cathode, and an electrolyte. Its main advantages are that the catalyst is a microorganism and that the working conditions are a neutral solution and low temperatures, making it more environmentally friendly than previous (chemical) fuel cells. However, compared to a chemical fuel cell, an MFC has an efficiency of only 40%.
Therefore, the authors propose Nano-MFC (Nano Microbial Fuel Cell): the utilization of carbon nanotubes to increase the power efficiency of a microbial fuel cell as an effective, efficient, and environmentally friendly alternative energy source. Nano-MFC has the advantages of being effective, highly efficient, cheap, and environmentally friendly. The relevant stakeholders are government ministries, especially the Ministry of Energy, research institutes, and industry as a production facilitator. The strategic steps undertaken to achieve this begin with preliminary research, followed by laboratory-scale testing, dissemination and building cooperation with related parties (MOU), final research and field application, and then licensing, production of Nano-MFC on an industrial scale, and publication to the public.

Keywords: CNT, efficiency, electric, microorganisms, sediment

Procedia PDF Downloads 387
153 Dietary Flaxseed Decreases Central Blood Pressure and the Concentrations of Plasma Oxylipins Associated with Hypertension in Patients with Peripheral Arterial Disease

Authors: Stephanie PB Caligiuri, Harold M Aukema, Delfin Rodriguez-Leyva, Amir Ravandi, Randy Guzman, Grant N. Pierce

Abstract:

Background: Hypertension leads to cardiac and cerebral events and is therefore the leading risk factor attributed to death in the world. Oxylipins may be mediators of these events, as they can regulate vascular tone and inflammation. Oxylipins are derived from fatty acids. Dietary flaxseed is rich in the n-3 fatty acid alpha-linolenic acid and therefore may be able to change the substrate profile of oxylipins and, as a result, alter blood pressure. Methods: A randomized, double-blinded, controlled clinical trial, the Flax-PAD trial, was used to assess the impact of dietary flaxseed on blood pressure (BP) and the relationship of plasma oxylipins to BP in 81 patients with peripheral arterial disease (PAD). Patients with PAD were chosen for the clinical trial as they are at increased risk for hypertension and cardiac and cerebral events. Thirty grams of ground flaxseed were added to food products consumed daily for 6 months; the control food products contained wheat germ, wheat bran, and mixed dietary oils instead of flaxseed. Central BP, which is more strongly associated with organ damage and cardiac and cerebral events than brachial BP, was measured by pulse wave analysis at baseline and 6 months. A plasma profile of 43 oxylipins was generated using solid phase extraction, HPLC-MS/MS, and stable isotope dilution quantitation. Results: At baseline, the central BP (systolic/diastolic) in the placebo and flaxseed groups was 131/73 ± 2.5/1.4 mmHg and 128/71 ± 2.6/1.4 mmHg, respectively. After 6 months of intervention, the flaxseed group exhibited a decrease in blood pressure of 4.0/1.0 mmHg; the 6-month central BP in the placebo and flaxseed groups was 132/74 ± 2.9/1.8 mmHg and 124/70 ± 2.6/1.6 mmHg (P<0.05). Correlation and logistic regression analyses between central blood pressure and oxylipins were performed.
Significant associations were observed between central blood pressure and 17 oxylipins, primarily produced from arachidonic acid. Every 1 nM increase in 16-hydroxyeicosatetraenoic acid (HETE) increased the odds of having high central systolic BP by 15-fold, of having high central diastolic BP by 6-fold and of having high central mean arterial pressure by 15-fold. In addition, every 1 nM increase in 5,6-dihydroxyeicosatrienoic acid (DHET) and 11,12-DHET increased the odds of having high central mean arterial pressure by 45- and 18-fold, respectively. Flaxseed induced a significant decrease in these as well as 4 other vasoconstrictive oxylipins. Conclusion: Dietary flaxseed significantly lowered blood pressure in patients with PAD and hypertension. Plasma oxylipins were strongly associated with central blood pressure and may have mediated the flaxseed-induced decrease in blood pressure.
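The per-1-nM odds ratios reported above are, in logistic regression, the exponentiated model coefficients. A minimal sketch of that relationship (the coefficient values below are chosen for illustration and are not the study's fitted values):

```python
import numpy as np

def odds_ratio(beta, delta=1.0):
    """Multiplicative change in odds for a `delta`-unit increase in a
    predictor, given its logistic-regression coefficient `beta`
    (log-odds per unit of the predictor)."""
    return float(np.exp(beta * delta))

# a coefficient of ln(15) per nM corresponds to a 15-fold increase in the
# odds per 1 nM of the oxylipin (hypothetical coefficient, for illustration)
beta = np.log(15.0)
```

Because the scale is multiplicative, a 2 nM increase under the same coefficient would correspond to a 15 × 15 = 225-fold change in odds, which is why small absolute shifts in a strongly associated oxylipin can translate into large odds differences.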

Keywords: hypertension, flaxseed, oxylipins, peripheral arterial disease

Procedia PDF Downloads 441