Search results for: electromagnetic compatibility measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3404

3014 Vortex Separator for More Accurate Air Dry-Bulb Temperature Measurement

Authors: Ahmed N. Shmroukh, I. M. S. Taha, A. M. Abdel-Ghany, M. Attalla

Abstract:

Fog systems for cooling and humidification are still in limited use, although they require a lower initial cost than other cooling systems such as pad-and-fan systems. The main reason is the poor control of fog systems, which produces undesirable relative humidity and air temperature inside the cooled or humidified space. Any accurate control system needs the air dry-bulb temperature as an input parameter, so the air dry-bulb temperature in the space must be measured accurately. The scope of the present work is the separation of fog droplets from the air in a fogged space so that the air dry-bulb temperature can be measured accurately. The separation is done in a small device inside which the sensor of the temperature-measuring instrument is positioned. A vortex separator was designed and used, and a reference device measured the air temperature without separation. A comparative study was performed to identify the device that gives the most accurate measurement of air dry-bulb temperature. The results showed that the proposed devices shifted the measured air dry-bulb temperature in the correct direction relative to the free junction. The vortex device performed best, raising the temperature measured by the free junction by roughly 2 to 6 °C for different fog on-off durations.

Keywords: fog systems, measuring air dry bulb temperature, temperature measurement, vortex separator

Procedia PDF Downloads 283
3013 Counting People Utilizing Space-Time Imagery

Authors: Ahmed Elmarhomy, K. Terada

Abstract:

An automated method for counting passersby using virtual vertical measurement lines is proposed. The space-time image represents the human regions, which are extracted by a segmentation process. Different color spaces were evaluated for template matching, and a suitable template-matching scheme was found that determines the direction and speed of passing people. Distinguishing one passerby from two was investigated using the correlation between passerby speed and human-pixel area. Finally, the effectiveness of the presented method was verified experimentally.
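The template-matching step described above can be illustrated with a minimal normalized cross-correlation sketch; the function names and the toy gray-value data are illustrative, not the authors' implementation:

```python
def ncc(template, patch):
    """Normalized cross-correlation between a template and an
    equally sized image patch (both flat lists of gray values)."""
    n = len(template)
    mt = sum(template) / n
    mp = sum(patch) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    return num / (dt * dp) if dt and dp else 0.0

def best_match(template, strip):
    """Slide the template along a 1-D strip of the space-time image
    and return the offset with the highest correlation score."""
    w = len(template)
    scores = [ncc(template, strip[i:i + w]) for i in range(len(strip) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

strip = [10, 12, 50, 90, 52, 11, 9, 8]   # toy intensity profile
template = [50, 90, 52]                  # pattern of a passing person
print(best_match(template, strip))       # → 2
```

The offset of the best match between successive frames gives the displacement, and hence the direction and speed, of the passing person.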

Keywords: counting people, measurement line, space-time image, segmentation, template matching

Procedia PDF Downloads 440
3012 The Logistic Equation and Fractal Dimension in Escalator Operations

Authors: Ali Albadri

Abstract:

The logistic equation has rarely been applied outside ecology, and it has not previously been used to describe the behaviour of a dynamic mechanical system such as an escalator. We have studied the fit of the logistic map to real measurements from an escalator, and the experimental measurements agree well with the model. The study suggests a relationship between the fractal dimension and the nonlinearity parameter R of the logistic equation: the fractal dimension increases as R increases. This implies that the fractal dimension grows as the machine's life span moves from a steady/stable phase through a period-doubling phase to a chaotic phase, so the fractal dimension and the parameter R can be used as tools to verify and check the health of machines. We propose that a machine's life span can be classified into three stages of behaviour: a steady/stable stage, a period-doubling stage, and a chaotic stage. The level of attention a machine needs differs with the stage it is in, and the rate of faults increases as the machine moves through the three stages. During the period-doubling and chaotic stages, the number of faults rises and becomes less predictable, while predictability improves as monitoring of the changes in the fractal dimension and the parameter R improves. The principles and foundations of this work bear on the design of systems, their mode of operation, and their maintenance schedules, whether the systems are mechanical, electrical, or electronic.
The methodology discussed in this paper gives businesses the chance to be more careful at the design stage and to plan maintenance so as to control costs. The findings can also be used to correlate the three stages of a mechanical system with more in-depth mechanical parameters such as wear and fatigue life.
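The steady, period-doubling, and chaotic regimes discussed above can be reproduced with a minimal sketch of the logistic map x(n+1) = R x(n) (1 − x(n)); the classification thresholds here are illustrative, not the authors' method:

```python
def logistic_orbit(r, x0=0.5, n_transient=1000, n_keep=64):
    """Iterate the logistic map x_{n+1} = r*x*(1-x) and return
    the long-run orbit after discarding transients."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

def regime(r, tol=1e-3):
    """Crude classification of the long-run behaviour by counting
    distinct values visited (up to the tolerance tol)."""
    distinct = []
    for v in logistic_orbit(r):
        if not any(abs(v - d) < tol for d in distinct):
            distinct.append(v)
    if len(distinct) == 1:
        return "steady"
    if len(distinct) <= 4:
        return "periodic"
    return "chaotic"

for r in (2.8, 3.2, 3.9):
    print(r, regime(r))
```

Increasing R moves the orbit from a fixed point (R = 2.8) to a period-2 cycle (R = 3.2) to chaos (R = 3.9), mirroring the three life-span stages proposed for a machine.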

Keywords: logistic map, bifurcation map, fractal dimension, logistic equation

Procedia PDF Downloads 90
3011 Near Infrared Spectrometry to Determine the Quality of Milk, Experimental Design Setup and Chemometrics: Review

Authors: Meghana Shankara, Priyadarshini Natarajan

Abstract:

Infrared (IR) spectroscopy has revolutionized the way we look at the materials around us. Unraveling the patterns in the molecular spectra of materials to analyze their composition and properties is one of the most interesting challenges in modern science. Applications of IR spectrometry span pharmaceuticals, health, food and nutrition, oils, agriculture, construction, polymers, beverages, fabrics, and more, limited only by the curiosity of practitioners. Near-infrared (NIR) spectrometry is applied widely to solid and liquid substances because it is a non-destructive analysis method. In this paper, we review the application of NIR spectrometry to milk quality analysis and present the measurement modes used in NIRS setups, Design of Experiment (DoE), and the classification/quantification algorithms used to predict milk composition parameters such as fat %, protein %, lactose %, and solids-not-fat (SNF %), along with different approaches to adulterant identification. We also discuss the important NIR ranges for the chosen milk parameters. The performance metrics used in comparing the various chemometric approaches include root mean square error (RMSE), R², slope, offset, sensitivity, specificity, and accuracy.
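The regression metrics listed above can be computed directly from reference and predicted values; a minimal sketch with illustrative numbers, not data from the reviewed studies:

```python
import math

def regression_metrics(y_ref, y_pred):
    """Common chemometric performance metrics for a prediction model:
    RMSE, R^2, and the slope/offset of the predicted-vs-reference fit."""
    n = len(y_ref)
    ss_res = sum((p - r) ** 2 for r, p in zip(y_ref, y_pred))
    rmse = math.sqrt(ss_res / n)
    mean_r = sum(y_ref) / n
    ss_tot = sum((r - mean_r) ** 2 for r in y_ref)
    r2 = 1 - ss_res / ss_tot
    # least-squares slope/offset of predictions against references
    mean_p = sum(y_pred) / n
    sxy = sum((r - mean_r) * (p - mean_p) for r, p in zip(y_ref, y_pred))
    sxx = ss_tot
    slope = sxy / sxx
    offset = mean_p - slope * mean_r
    return rmse, r2, slope, offset

# e.g. fat % reference values vs NIR model predictions (illustrative)
ref = [3.1, 3.5, 4.0, 4.4, 5.0]
pred = [3.0, 3.6, 3.9, 4.5, 5.1]
print(regression_metrics(ref, pred))
```

A slope near 1 and offset near 0 indicate an unbiased model; RMSE and R² summarize the residual spread.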

Keywords: chemometrics, design of experiment, milk quality analysis, NIRS measurement modes

Procedia PDF Downloads 254
3010 Assessment of Cytogenetic Damage as a Function of Radiofrequency Electromagnetic Radiations Exposure Measured by Electric Field Strength: A Gender Based Study

Authors: Ramanpreet, Gursatej Gandhi

Abstract:

Background: Dependence on the electromagnetic radiations used in communication and information technologies has increased enormously in both personal and professional life. Radiation sources include fixed-site transmitters, mobile phone base stations, and power lines, besides indoor devices such as cordless phones, Wi-Fi, Bluetooth, TV, radio, and microwave ovens. Mobile phone base stations continuously emit radiofrequency radiation (RFR) even to those not using the devices. The consistent and widespread use of wireless devices has built up electromagnetic fields everywhere; the radiofrequency electromagnetic field (RF-EMF) has insidiously become part of the environment and, like any contaminant, may be hazardous to health and so requires assessment. Materials and Methods: In the present study, cytogenetic damage was assessed using the buccal micronucleus cytome (BMCyt) assay as a function of radiation exposure, after Institutional Ethics Committee clearance of the study and written voluntary informed consent from the participants. General information, lifestyle patterns (diet, physical activity, smoking, drinking, use of mobile phones, internet, Wi-Fi, etc.), and genetic, reproductive (pedigrees), and medical histories were recorded on a pre-designed questionnaire. Personal exposimeter measurements (PEM) over 24 hours were recorded for 60 unrelated healthy adults (40 cases residing in the vicinity of mobile phone base stations since their installation and 20 controls residing in areas with no base stations). The personal exposimeter collects information from all sources generating EMF (TETRA, GSM, UMTS, DECT, and WLAN) as total RF-EMF uplink and downlink. Findings: The cases (n=40; 23-90 years) and the controls (n=20; 19-65 years) were matched for alcohol drinking, smoking habits, and mobile and cordless phone usage.
The PEM in cases (149.28 ± 8.98 mV/m) revealed significantly higher (p=0.000) electric field strength than in controls (80.40 ± 0.30 mV/m). The GSM 900 uplink (p=0.000), GSM 1800 downlink (p=0.000), UMTS (uplink, p=0.013; downlink, p=0.001), and DECT (p=0.000) electric field strengths were significantly elevated in the cases compared with controls. In the cases, the electric field strength was highest from GSM 1800 (52.26 ± 4.49 mV/m), followed by GSM 900 (45.69 ± 4.98 mV/m), UMTS (25.03 ± 3.33 mV/m), and DECT (18.02 ± 2.14 mV/m), and was least from WLAN (8.26 ± 2.35 mV/m). Exposure of the cases was significantly (p=0.000) higher from GSM (97.96 ± 6.97 mV/m) than from UMTS, DECT, or WLAN. The frequencies of micronuclei (1.86X, p=0.007), nuclear buds (2.95X, p=0.002), and the cell-death parameter (condensed chromatin cells; 1.75X, p=0.007) were significantly elevated in cases compared with controls, probably as a function of radiofrequency radiation exposure. Conclusion: In the absence of other exposures, any unrepaired cytogenetic damage is a cause of concern, as it can lead to malignancy. A larger sample size with clinical assessment would give more insight into such an effect.

Keywords: Buccal micronucleus cytome assay, cytogenetic damage, electric field strength, personal exposimeter

Procedia PDF Downloads 149
3009 Evaluating Contextually Targeted Advertising with Attention Measurement

Authors: John Hawkins, Graham Burton

Abstract:

Contextual targeting is a common advertising strategy that places marketing messages in media locations expected to align with the target audience. Contextual targeting faces several major challenges: the ideal categorisation scheme must be known, as well as the most appropriate subsections of that scheme for a given campaign or creative. In addition, campaign reach typically shrinks as targeting narrows, so a balance must be struck between the two requirements. Finally, refinement of the process is limited by evaluation methods that are either rapid but non-specific (click-through rates) or reliable but slow and costly (conversions or brand-recall studies). In this study we evaluate attention measurement as a technique for understanding the performance of targeting on specific contextual topics. We perform the analysis on a large-scale dataset of impressions categorised with the IAB Content Taxonomy v2.0, evaluating multiple levels of the categorisation hierarchy using categories at different positions within an initial creative-specific ranking. The results illustrate that attention time is an effective signal of the performance of a specific creative within a specific context, and performance is sustained across a ranking of categories from one period to the next.
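Ranking contextual categories by mean attention time, as described above, can be sketched as follows; the category names and attention values are illustrative, not the study's data:

```python
def mean_attention_by_category(impressions):
    """Average attention time (seconds) per contextual category from
    a list of (category, attention_seconds) impression records."""
    totals, counts = {}, {}
    for cat, secs in impressions:
        totals[cat] = totals.get(cat, 0.0) + secs
        counts[cat] = counts.get(cat, 0) + 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

def rank_categories(impressions):
    """Categories ranked by mean attention time, best first."""
    means = mean_attention_by_category(impressions)
    return sorted(means, key=means.get, reverse=True)

data = [("Automotive", 2.1), ("Travel", 3.4),
        ("Automotive", 1.9), ("Travel", 2.8), ("Finance", 1.2)]
print(rank_categories(data))  # → ['Travel', 'Automotive', 'Finance']
```

Comparing such rankings across time periods is one way to check whether a category's performance for a creative is sustained.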

Keywords: contextual targeting, digital advertising, attention measurement, marketing performance

Procedia PDF Downloads 95
3008 Nano-Enhanced In-Situ and Field Up-Gradation of Heavy Oil

Authors: Devesh Motwani, Ranjana S. Baruah

Abstract:

The prime incentive for upgrading heavy oil is to increase its API gravity for ease of transportation to refineries, thus expanding the market access of bitumen-based crude. There has long been demand for an integrated approach that simplifies the upgrading scheme and makes it adaptable to the production site in terms of economics, environment, and personnel safety. Recent advances in nanotechnology have enabled the development of two lines of heavy-oil upgrading that use nano-catalysts to produce upgraded oil: in-situ upgrading and field upgrading. The in-situ upgrading scheme uses the hot fluid injection (HFI) technique, in which heavy fractions separated from the produced oil are injected into the formations to reintroduce heat into the reservoir, along with suspended nano-catalysts and hydrogen. In the presence of hydrogen, exothermic catalytic hydroprocessing reactions produce light gases and volatile hydrocarbons, which increase oil detachment from the rock and thereby enhance recovery. The process thus combines enhanced heavy-oil recovery with upgrading: it effectively handles the heat load within the reservoir, reduces hydrocarbon waste generation, and minimizes the need for diluents. By eliminating most of the residual oil, the synthetic crude oil (SCO) is much easier to transport and more amenable to refinery processing. For heavy-oil reservoirs seriously affected by aquifers, the nano-catalytic technology can still be implemented in the field, though with additional investment and reduced synergies; it still serves the purpose of producing transportable oil, with substantial benefits over both large-scale upgrading and the commercial field-upgrading technologies currently on the market. The paper delves deeper into this technology and its future compatibility.

Keywords: upgrading, synthetic crude oil, nano-catalytic technology, compatibility

Procedia PDF Downloads 395
3007 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring

Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover

Abstract:

Measurement of the radioactive isotopes of atmospheric xenon is used to detect, locate, and identify confined nuclear tests under the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During atmospheric transport, radioactive xenon undergoes significant dilution between the source point and the measurement station; given the distances between the fixed stations distributed over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints imposed by atmospheric dilution, a mobile detection system is under development; it will allow on-site measurements to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, the system will use the beta/gamma coincidence measurement technique to drastically reduce the environmental background that masks such low activities. The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². To minimize leakage current, each wafer has been segmented into four independent silicon pixels. The cell is sandwiched between two low-background NaI(Tl) detectors (70 x 70 x 40 mm³ crystals). The expected minimum detectable concentration (MDC) for each radioxenon is of the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) process all the signals, and time synchronization is ensured by a dedicated PTP network using the IEEE 1588 Precision Time Protocol. We present this system from its simulation to the laboratory tests.
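The beta/gamma coincidence condition can be sketched as a time-window match between the two detector event streams; the window value and timestamps below are illustrative, not the Pixie-NET implementation:

```python
def coincidences(beta_times, gamma_times, window=1e-6):
    """Return (beta, gamma) timestamp pairs whose arrival times differ
    by less than `window` seconds -- the beta/gamma coincidence
    condition used to suppress uncorrelated background.
    Both input lists are assumed sorted in ascending time."""
    pairs = []
    j = 0
    for tb in sorted(beta_times):
        # skip gammas that arrived too early to be coincident
        while j < len(gamma_times) and gamma_times[j] < tb - window:
            j += 1
        k = j
        while k < len(gamma_times) and gamma_times[k] <= tb + window:
            pairs.append((tb, gamma_times[k]))
            k += 1
    return pairs

betas = [1.0, 2.0, 3.0]                    # silicon-pixel events (s)
gammas = [1.0000004, 2.5, 3.0000009]       # NaI(Tl) events (s)
print(coincidences(betas, gammas))
```

Only the beta events at 1.0 s and 3.0 s have a gamma partner within 1 µs; the isolated events are rejected as background.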

Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels

Procedia PDF Downloads 116
3006 The Effect of Relaxing Exercises in Water on Endorphin Hormone for the Beginner in Swimming

Authors: Yasmin Hussein Embaby

Abstract:

Introduction: Athletic training has its essentials, rules, and methods that help an individual reach the maximum possible athletic level in the practised physical activity; it is therefore important for those working in the athletic field to recognize and understand what goes on inside our bodies. This shows the close relationship between physiology and athletic training, physiology being the science that explains the various changes the body undergoes in response to physical activity. Swimming is a water sport that plays a major role in developing the full coordination of the body's parts and systems during the practice of the different swimming strokes, which use the aquatic medium for movement. Learning the basic skills is the initial nucleus of learning to swim, through which the beginner gains a sense of security and safety and the ability to move in the water. Research Methodology: The researcher used the experimental method with pre- and post-measurements on two equal groups (experimental and control), as appropriate for the research. Conclusions: From the results obtained, and in light of related studies, theoretical readings, and the statistical treatment of the data, the researcher reached the following conclusions: 1. Muscle relaxation exercises have a positive effect on performance level in crawl swimming and on the endorphin hormone, helping to raise its level within the normal range in the body; the improvement of the experimental group in relaxation ability and endorphin level exceeded that of the control group. 2. The proposed muscle relaxation exercises proved valid for application and achieved their objective of increasing the endorphin level in the body; the results showed a statistically significant difference in endorphin level in favour of the experimental sample.

Keywords: beginners, endorphin hormone, relaxing exercises, swimming

Procedia PDF Downloads 200
3005 Electrical Investigations of Polyaniline/Graphitic Carbon Nitride Composites Using Broadband Dielectric Spectroscopy

Authors: M. A. Moussa, M. H. Abdel Rehim, G.M. Turky

Abstract:

Polyaniline composites with graphitic carbon nitride, chosen to overcome the compatibility restrictions of graphene, were prepared by the solution method. FTIR and UV-Vis spectra were used for structural confirmation, while XRD and XPS confirmed the structures and allowed estimation of the nitrogen atom surroundings. The pore sizes and active surface area were determined from the BET adsorption isotherm, and the electrical and dielectric parameters were measured and calculated with broadband dielectric spectroscopy (BDS).

Keywords: carbon nitride, dynamic relaxation, electrical conductivity, polyaniline

Procedia PDF Downloads 127
3004 Harmonic Assessment and Mitigation in Medical Diagnosis Equipment

Authors: S. S. Adamu, H. S. Muhammad, D. S. Shuaibu

Abstract:

Poor power quality in electrical power systems can cause medical equipment in healthcare centres to malfunction and give wrong diagnoses. Equipment such as X-ray and computerized axial tomography machines can pollute the system through their high harmonic production, which may cause undesirable effects such as heating, equipment damage, and electromagnetic interference. The conventional mitigation approach uses passive inductor/capacitor (LC) filters, which have drawbacks such as large size, resonance problems, and fixed compensation behaviour. Current solutions generally employ active power filters with suitable control algorithms. This work assesses the level of total harmonic distortion (THD) in medical facilities and various ways of mitigating it, using the radiology unit of an existing hospital as a case study. The harmonics are measured with a power quality analyzer at the point of common coupling (PCC). The measured THD levels are found to be higher than the IEEE 519-1992 limits. The system is then modelled as a harmonic current source in MATLAB/Simulink. To mitigate the unwanted harmonic currents, a shunt active filter is developed using a synchronous detection algorithm to extract the fundamental component of the source currents, and a fuzzy logic controller is developed to control the filter. The THD without the active power filter is validated against the measured values; with the developed filter, the harmonics fall within the recommended limits.
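The THD figure assessed above follows directly from the harmonic spectrum measured at the PCC; a minimal sketch, with illustrative spectrum values:

```python
import math

def thd_percent(harmonic_rms):
    """Total harmonic distortion of a current/voltage waveform, given
    RMS magnitudes [I1, I2, I3, ...] where I1 is the fundamental:
    THD = sqrt(sum of Ih^2 for h >= 2) / I1 * 100."""
    fundamental = harmonic_rms[0]
    higher = harmonic_rms[1:]
    return 100.0 * math.sqrt(sum(h * h for h in higher)) / fundamental

# illustrative harmonic spectrum of a distorting load current (A, RMS):
# fundamental, then 2nd..5th harmonics
spectrum = [10.0, 0.0, 3.0, 0.0, 1.0]
print(round(thd_percent(spectrum), 2))  # → 31.62
```

IEEE 519 then compares this figure (and the individual harmonic magnitudes) against limits that depend on the system voltage and short-circuit ratio.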

Keywords: power quality, total harmonics distortion, shunt active filters, fuzzy logic

Procedia PDF Downloads 468
3003 Assessment of Biofilm Production Capacity of Industrially Important Bacteria under Electroinductive Conditions

Authors: Omolola Ojetayo, Emmanuel Garuba, Obinna Ajunwa, Abiodun A. Onilude

Abstract:

Introduction: A biofilm is a functional community of microorganisms associated with a surface or an interface. The adherent cells become embedded within an extracellular matrix of polymeric substances; that is, biofilms are biological deposits consisting of both microbes and their extracellular products on biotic and abiotic surfaces. Despite their detrimental effects in medicine, biofilms, as a form of natural cell immobilization, have found several applications in biotechnology, such as wastewater treatment, bioremediation and biodegradation, gas desulfurization, and the conversion of agro-derived materials into alcohols and organic acids. The usual means of enhancing immobilized cells have been chemical induction, which affects the medium composition and the final product. Physical factors, including electrical, magnetic, and electromagnetic flux, have shown potential for enhancing biofilms depending on the bacterial species, the nature and intensity of the emitted signals, the duration of exposure, and the substratum used. However, the concept of cell immobilisation by electrical and magnetic induction is still underexplored. Methods: To assess the effects of physical factors on biofilm formation, six American Type Culture Collection strains (Acetobacter aceti ATCC15973, Pseudomonas aeruginosa ATCC9027, Serratia marcescens ATCC14756, Gluconobacter oxydans ATCC19357, Rhodobacter sphaeroides ATCC17023, and Bacillus subtilis ATCC6633) were used. Standard culture techniques for bacterial cells were adopted. The natural autoimmobilisation potential of the test bacteria was assessed by simple biofilm ring formation in tubes, while crystal violet binding assays were used to quantify biofilm.
Electroinduction of the bacterial cells by direct current (DC) application in cell broth, static magnetic field exposure, and electromagnetic flux was carried out, and autoimmobilisation of cells in a biofilm pattern was determined on the various substrata tested, including wood, glass, steel, polyvinyl chloride (PVC), and polyethylene terephthalate. The Biot-Savart law was used to quantify the magnetic field intensity, and the data were analysed using analysis of variance (ANOVA) and other statistical tools. Results: Biofilm formation by the selected test bacteria was enhanced by the physical factors applied. Electromagnetic induction had the greatest effect on biofilm formation, and magnetic induction the least, across all substrata used. Microbial cell-cell communication could be a means by which physical signals affect the cells in a polarisable manner. Conclusion: The enhancement of biofilm formation by physical factors shows that the bacteria's inherent capability as a cell immobilization method can be further optimised for industrial applications. A possible relationship between voltage-dependent channels, mechanosensitive channels, and bacterial biofilms could shed more light on this phenomenon.
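As one simple closed-form consequence of the Biot-Savart law used above, the field at the centre of a flat circular coil can be quantified as follows; the coil parameters are illustrative, not the study's setup:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def field_circular_coil_center(current_a, radius_m, turns=1):
    """Magnetic flux density at the centre of a flat circular coil,
    a standard closed-form result of the Biot-Savart law:
    B = mu0 * N * I / (2 * R)."""
    return MU0 * turns * current_a / (2 * radius_m)

# e.g. a 100-turn coil of 5 cm radius carrying 0.5 A (illustrative)
b = field_circular_coil_center(0.5, 0.05, turns=100)
print(f"{b * 1e3:.3f} mT")
```

Evaluating such expressions for the actual coil geometry gives the field intensity each culture was exposed to.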

Keywords: bacteria, biofilm, cell immobilization, electromagnetic induction, substrata

Procedia PDF Downloads 178
3002 Labour Productivity Measurement and Control Standards for Hotels

Authors: Kristine Joy Simpao

Abstract:

Improving labour productivity is one of the most enthralling and challenging aspects of managing a hotel and restaurant business. The demand to secure consistent productivity has become an increasingly pivotal concern for managers seeking to keep the business viable. Beyond making the business profitable, they are bound to make every resource productive and effective towards the company goal while maximizing the value of the organization. This paper examines what productivity means to the services industry, in particular to the hotel industry. This is underpinned by an investigation of the extent to which the respondent hotels practise labour productivity management in the areas of materials management, human resource management, and leadership management, and by computing labour productivity ratios using simple hotel productivity ratios, in order to find suitable measurement and control standards for hotels, with SBMA, Olongapo City as the locale of the study. The findings show that the hotels' labour productivity ratings are not perfect, with some practices falling far below standard, particularly in strategic and operational decisions for improving the performance and productivity of human resources. They further show no significant difference in ratings among the respondent types in any area, indicating a shared perception of the weak implementation of some indicators of labour productivity practice. Furthermore, the computed labour productivity efficiency ratios show that the number of employees and labour productivity practices are inversely related. This study provides potential measurement and control standards for enhancing hotel labour productivity.
These standards should also contain labour productivity benchmarks customized for standard hotels in the Subic Bay Freeport Zone, to assist hotel owners in increasing labour productivity while meeting company goals and objectives effectively.
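A simple partial labour productivity ratio of the kind computed above, with an index against a base period, can be sketched as follows; the revenue and hours figures are illustrative, not the study's data:

```python
def labour_productivity(output_value, labour_hours):
    """Simple partial productivity ratio: output per labour hour."""
    return output_value / labour_hours

def productivity_index(current_ratio, base_ratio):
    """Index a period's ratio against a base period (base = 100)."""
    return 100.0 * current_ratio / base_ratio

# e.g. hotel revenue vs hours worked in two periods (illustrative)
base = labour_productivity(120000, 4000)   # 30.0 per hour
this = labour_productivity(138000, 4200)   # ~32.86 per hour
print(round(productivity_index(this, base), 1))  # → 109.5
```

An index above 100 indicates that output per labour hour improved relative to the base period, which is the kind of control figure such standards would track.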

Keywords: labour productivity, hotel, measurement and control, standards, efficiency ratios, practices

Procedia PDF Downloads 304
3001 Nutrition Strategy Using Traditional Tibetan Medicine in the Preventive Measurement

Authors: Ngawang Tsering

Abstract:

Traditional Tibetan medicine focuses primarily on promoting health and warding off disease through its unique prescription of specific diet and lifestyle. The prevalence of chronic disease is rising daily and kills many people, partly through the lack of proper nutritional design in modern times. According to traditional Tibetan medicine, chronic diseases such as diabetes, cancer, cardiovascular disease, respiratory disease, and arthritis are heavily associated with an unwholesome diet and inappropriate lifestyle. Diet and lifestyle are the two main conditions of both disease and healthy life. The prevalence of chronic disease is a challenge with massive economic impact and expensive health consequences, but it can be addressed preventively through proper nutrition design based on traditional Tibetan medicine. To date, it is hard to evaluate whether a traditional Tibetan medicine nutrition strategy could play a major role in prevention, owing to the lack of current research evidence. Compared with modern nutrition, however, it has an exclusive and valuable concept: a holistic approach in which diet and nutrition recommendations are based on several aspects. Traditional Tibetan medicine, one of the oldest extant ancient medical systems, known as Sowa Rigpa (the science of healing), highlights different aspects of dietetics and nutrition, namely geography, season, age, personality, emotion, food combination, individual metabolism, and the potency and amount of food. This article offers a critical perspective on preventive measures against chronic diseases through nutrition design using traditional Tibetan medicine, and calls for attention to a deeper understanding of traditional Tibetan medicine in the modern world.

Keywords: traditional Tibetan medicine, nutrition, chronic diseases, preventive measurement, holistic approach, integrative

Procedia PDF Downloads 143
3000 A Wireless Sensor System for Continuous Monitoring of Particulate Air Pollution

Authors: A. Yawootti, P. Intra, P. Sardyoung, P. Phoosomma, R. Puttipattanasak, S. Leeragreephol, N. Tippayawong

Abstract:

The aim of this work is to design, develop, and test a low-cost particulate air pollution sensor system for continuous monitoring of outdoor and indoor particulate air pollution at a lower cost than existing instruments. The technique measures the electrostatic charge of particles via a high-efficiency particulate-free air filter. The developed detector consists of a PM10 impactor, a particle charger, a Faraday cup electrometer, a flow meter and controller, a vacuum pump, a DC high-voltage power supply, and a data processing and control unit. The developed detector was capable of measuring particulate mass concentrations from 0 to 500 µg/m³, corresponding to number concentrations from 10⁶ to 10¹² particles/m³, with a measurement time of less than 1 s. The measurement data are sent to the internet through a GSM connection to a public cellular network. The apparatus is powered by a 12 V, 7 Ah internal battery for continuous measurement of about 20 hours. Finally, the developed apparatus was found to be in close agreement with an imported standard instrument, and to be portable and useful for air pollution and particulate matter measurements.
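The electrometer current can be converted to a number concentration under the simplifying assumption of a known mean charge per particle; this relation is a textbook sketch of the Faraday-cup principle, not the authors' calibration:

```python
E_CHARGE = 1.602e-19  # elementary charge, C

def number_concentration(current_a, flow_m3_s, charges_per_particle=1.0):
    """Number concentration inferred from the Faraday-cup electrometer
    current, assuming n mean elementary charges per particle:
    I = N * n * e * Q  =>  N = I / (n * e * Q)."""
    return current_a / (charges_per_particle * E_CHARGE * flow_m3_s)

# e.g. 1 pA at a 1 L/min sample flow (1.667e-5 m^3/s),
# singly charged particles (all values illustrative)
n = number_concentration(1e-12, 1.667e-5)
print(f"{n:.3e} particles/m^3")
```

With these illustrative inputs the result falls within the 10⁶ to 10¹² particles/m³ range quoted for the instrument.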

Keywords: particulate, air pollution, wireless communication, sensor

Procedia PDF Downloads 353
2999 Heating of the Ions by Electromagnetic Ion Cyclotron (EMIC) Waves Using Magnetospheric Multiscale (MMS) Satellite Observation

Authors: A. A. Abid

Abstract:

Magnetospheric Multiscale (MMS) satellite observations in the inner magnetosphere were used to detect the proton band of electromagnetic ion cyclotron (EMIC) waves on December 14, 2015; these waves contribute significantly to the dynamics of the magnetosphere. The intensity of the EMIC waves was found to increase gradually with decreasing L shell. The waves are triggered by hot-proton thermal anisotropy. When the EMIC wave intensity is high, low-energy cold protons (ions) can be energized by the waves, making these previously invisible protons visible; the EMIC waves also energize helium ions. EMIC waves, whose frequency in the Earth's magnetosphere ranges from 0.001 Hz to 5 Hz, have drawn much attention for their ability to carry energy. Since these waves act as a mechanism for the loss of energetic electrons from the Van Allen radiation belts to the atmosphere, it is necessary to understand how and where they are produced, as well as the direction of the waves along the magnetic field lines. This work examines how the excitation of EMIC waves is affected by the hot-proton temperature anisotropy, with a minimum resonance energy of 6.9 keV and a range of 7 to 26 keV; for energies below the minimum resonance energy, the reverse effect is seen on the hot protons. It is demonstrated that, throughout the energy range of 1 eV to 100 eV, the number density and temperature anisotropy of the protons likewise rise as the intensity of the EMIC waves increases. Key points: 1. Analysis of EMIC waves produced by hot-proton temperature anisotropy using MMS data. 2. The number density and temperature anisotropy of the cold protons increase owing to high-intensity EMIC waves. 3. The energization of cold protons in the 1-100 eV range by EMIC waves, observed with the Magnetospheric Multiscale (MMS) satellites, has not been discussed before.
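The EMIC frequency band quoted above sits below the local proton cyclotron frequency, which can be checked with a one-line formula; the field value below is illustrative, not an MMS measurement:

```python
import math

E_CHARGE = 1.602e-19   # elementary charge, C
M_PROTON = 1.673e-27   # proton mass, kg

def cyclotron_frequency_hz(b_tesla, charge=E_CHARGE, mass=M_PROTON):
    """Cyclotron frequency f_c = q*B / (2*pi*m); EMIC waves occur
    below the ion cyclotron frequency of the resonant species."""
    return charge * b_tesla / (2 * math.pi * mass)

# e.g. a 200 nT field typical of the inner magnetosphere (illustrative)
print(round(cyclotron_frequency_hz(200e-9), 3))  # a few Hz
```

For a 200 nT field this gives about 3 Hz, consistent with the 0.001 to 5 Hz EMIC band; heavier ions such as helium have correspondingly lower cyclotron frequencies.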

Keywords: EMIC waves, temperature anisotropy of hot protons, energization of the cold proton, magnetospheric multiscale (MMS) satellite observations

Procedia PDF Downloads 102
2998 Air-Coupled Ultrasonic Testing for Non-Destructive Evaluation of Various Aerospace Composite Materials by Laser Vibrometry

Authors: J. Vyas, R. Kazys, J. Sestoke

Abstract:

Air-coupled ultrasonic testing is a contactless ultrasonic measurement approach that has become widespread for material characterization in the aerospace industry. Aircraft components must be as light as possible without compromising durability; to achieve these requirements, composite materials are widely used. This paper presents an analysis of air-coupled ultrasonics for composite materials used in the design of modern aircraft, such as CFRP (Carbon Fibre Reinforced Polymer), GLARE (a glass fiber metal laminate), and honeycombs. Laser vibrometry could be a key characterization tool for aerospace components. The fundamentals of air-coupled ultrasonics, including the principles, working modes, and transducer arrangements used for this purpose, are also recounted in brief. The emphasis of this paper is on NDT techniques based on ultrasonic guided wave applications and the possibilities of using laser vibrometry for non-contact measurement of guided waves in different materials. A 3D assessment technique employing a single-point laser head with automatic scanning relocation of the material is used to assess the mechanical displacement, and the pros and cons of composite materials for aerospace applications with defects and delaminations are discussed.

Keywords: air-coupled ultrasonics, contactless measurement, laser interferometry, NDT, ultrasonic guided waves

Procedia PDF Downloads 227
2997 Dipole and Quadrupole Scattering of Ultra Short Pulses on Metal Nanospheres

Authors: Sergey Svita, Valeriy Astapenko

Abstract:

The presentation is devoted to the theoretical analysis of ultrashort electromagnetic pulses (USP) scattering on metallic nanospheres in a dielectric medium in the vicinity of surface plasmon resonance due to excitation of dipole and quadrupole surface plasmons.

Keywords: surface plasmon, scattering, metallic nanosphere

Procedia PDF Downloads 367
2996 A Study of Adaptive Fault Detection Method for GNSS Applications

Authors: Je Young Lee, Hee Sung Kim, Kwang Ho Choi, Joonhoo Lim, Sebum Chun, Hyung Keun Lee

Abstract:

The purpose of this study is to develop an efficient fault detection method for Global Navigation Satellite System (GNSS) applications based on adaptive estimation. Because they depend on radio-frequency signals, GNSS measurements are dominated by systematic errors in the receiver's operating environment. Thus, to utilize GNSS for aerospace or ground vehicles requiring a high level of safety, unhealthy measurements should be considered seriously. For this reason, this paper proposes an adaptive fault detection method to deal with unhealthy measurements in various harsh environments. In the proposed method, the test statistic for fault detection is generated from the estimated measurement noise. Pseudorange and carrier-phase measurement noise are obtained at the time propagations and measurement updates of Carrier-Smoothed Code (CSC) filtering, respectively. The performance of the proposed method was evaluated with field-collected GNSS measurements. To evaluate the fault detection capability, intentional faults were added to the measurements. The experimental results show that the proposed detection method is efficient in detecting unhealthy measurements and improves the accuracy of GNSS positioning under fault occurrence.
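The two ingredients of the abstract, carrier-smoothed code filtering and a residual test gated by the estimated noise, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the Hatch-filter window, the 5-sigma gate, and all function names are illustrative.

```python
def hatch_filter(pseudoranges, carrier_phases, window=100):
    """Carrier-smoothed code (Hatch filter): propagate the previous estimate
    with the low-noise carrier increment, then blend in the noisy code."""
    smoothed = [pseudoranges[0]]
    for k in range(1, len(pseudoranges)):
        n = min(k + 1, window)
        predicted = smoothed[-1] + (carrier_phases[k] - carrier_phases[k - 1])
        smoothed.append((1.0 / n) * pseudoranges[k] + (1.0 - 1.0 / n) * predicted)
    return smoothed

def detect_fault(residual, noise_std, k_sigma=5.0):
    """Flag a measurement whose residual exceeds an adaptive k-sigma gate
    built from the estimated measurement noise."""
    return abs(residual) > k_sigma * noise_std
```

Because the gate scales with the estimated noise, the same test adapts between benign and harsh signal environments.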

Keywords: adaptive estimation, fault detection, GNSS, residual

Procedia PDF Downloads 557
2995 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping the magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via the injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640×640×640 voxels, 0.4 mm isotropic), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review.
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations in areas of interest directly, more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of great value in clinical studies aiming to understand the role of iron in neurological disease.
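The ill-posed inverse problem of Process I comes from the standard QSM forward model, in which the measured field perturbation is the susceptibility map filtered by a k-space dipole kernel. A minimal NumPy sketch of that forward model (the step used to generate synthetic field measurements from a susceptibility map) is given below; the function names and the 0.4 mm voxel default are illustrative assumptions, and the network itself is omitted.

```python
import numpy as np

def dipole_kernel(shape, voxel=0.4):
    """k-space dipole kernel D = 1/3 - kz^2/|k|^2 of the QSM forward model
    field = IFFT(D * FFT(chi)). The undefined DC term is set to zero."""
    kx, ky, kz = np.meshgrid(
        *[np.fft.fftfreq(n, d=voxel) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[k2 == 0] = 0.0
    return D

def susceptibility_to_field(chi, voxel=0.4):
    """Forward-simulate the field perturbation of a susceptibility map."""
    D = dipole_kernel(chi.shape, voxel)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
```

Inverting this convolution is ill-posed because D vanishes on the magic-angle cone, which is why Process I needs regularization and why learning the map directly from synthetic data is attractive.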

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 119
2994 Breath Ethanol Imaging System Using Real Time Biochemical Luminescence for Evaluation of Alcohol Metabolic Capacity

Authors: Xin Wang, Munkbayar Munkhjargal, Kumiko Miyajima, Takahiro Arakawa, Kohji Mitsubayashi

Abstract:

The measurement of gaseous ethanol plays an important role in the evaluation of alcohol metabolic capacity in clinical and forensic analysis. A two-dimensional visualization system for gaseous ethanol was constructed and tested by visualizing breath and transdermal alcohol. We demonstrated breath ethanol measurement using the developed high-sensitivity visualization system. The breath ethanol concentration calculated from the imaging signal was significantly different between volunteer subjects who were ALDH2 (+) and ALDH2 (-).

Keywords: breath ethanol, ethanol imaging, biochemical luminescence, alcohol metabolism

Procedia PDF Downloads 337
2993 Using a GIS-Based Method for Green Infrastructure Accessibility of Different Socio-Economic Groups in Auckland, New Zealand

Authors: Jing Ma, Xindong An

Abstract:

Green infrastructure, an important contributor to quality of life, has been a crucial element of liveability measurement. With a growing urban population demanding a more liveable environment, access to green infrastructure within walking distance should be taken into consideration. This article presents a study on the accessibility of green infrastructure in central Auckland (New Zealand), using a GIS-based network analysis tool, to verify the accessibility levels of green infrastructure. It analyses the overall situation of green infrastructure and draws conclusions on the city’s different levels of accessibility according to the categories and distribution of facilities, which provides valuable references and guidance for future facility improvement in planning strategies.
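As a rough illustration of the network-based accessibility test such a study relies on, the sketch below computes walking distances over a pedestrian graph with Dijkstra's algorithm and keeps the green spaces reachable within a distance budget. The graph encoding, node names, and the 800 m default are assumptions for illustration, not the study's actual data or GIS tooling.

```python
import heapq

def walking_distance(graph, origin):
    """Dijkstra over a pedestrian network; graph maps node -> [(nbr, metres)].
    Returns the shortest walking distance from origin to each reachable node."""
    dist = {origin: 0.0}
    pq = [(0.0, origin)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def accessible_parks(graph, home, parks, max_walk_m=800.0):
    """Green spaces within the walking-distance budget of a given origin."""
    dist = walking_distance(graph, home)
    return [p for p in parks if dist.get(p, float("inf")) <= max_walk_m]
```

Running this per census block against each category of green infrastructure yields the kind of per-group accessibility levels the article compares.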

Keywords: quality of life, green infrastructure, GIS, accessibility

Procedia PDF Downloads 267
2992 A Vehicle Detection and Speed Measurement Algorithm Based on Magnetic Sensors

Authors: Panagiotis Gkekas, Christos Sougles, Dionysios Kehagias, Dimitrios Tzovaras

Abstract:

Cooperative intelligent transport systems (C-ITS) can greatly improve safety and efficiency in road transport by enabling communication, not only between vehicles themselves but also between vehicles and infrastructure. For this reason, traffic surveillance systems on the road are of great importance. This paper focuses on the development of an on-road unit comprising several magnetic sensors for real-time vehicle detection, movement-direction determination, and speed measurement. Magnetic sensors detect and measure changes in the Earth’s magnetic field. Vehicles are composed of many parts with ferromagnetic properties. Depending on the sensors’ sensitivity, changes in the Earth’s magnetic field caused by passing vehicles can be detected and analyzed in order to extract information on the properties of moving vehicles. In this paper, we present a prototype algorithm for real-time, high-accuracy vehicle detection and speed measurement, which can be implemented as a portable, low-cost solution that is non-invasive to existing infrastructure, with the potential to replace existing high-cost implementations. The paper describes the algorithm and presents results from its preliminary lab testing in a close-to-real-conditions environment. Acknowledgments: The work presented in this paper was co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation (call RESEARCH–CREATE–INNOVATE) under contract no. Τ1EDK-03081 (project ODOS2020).
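The core of such an algorithm, two sensors a known distance apart along the lane, vehicle presence flagged when the field deviates from its Earth-field baseline, and speed taken from the time offset between the two detections, can be sketched as follows. The thresholds, names, and fixed-baseline assumption are illustrative, not the authors' implementation.

```python
def detect_vehicle(signal, baseline=0.0, threshold=5.0):
    """Index of the first sample deviating from the Earth-field baseline
    by more than the detection threshold, or None if no vehicle passed."""
    for i, s in enumerate(signal):
        if abs(s - baseline) > threshold:
            return i
    return None

def estimate_speed(sig_a, sig_b, spacing_m, sample_rate_hz,
                   baseline=0.0, threshold=5.0):
    """Speed from the detection-time offset between two sensors spacing_m
    apart; a negative result means travel in the opposite direction."""
    ia = detect_vehicle(sig_a, baseline, threshold)
    ib = detect_vehicle(sig_b, baseline, threshold)
    if ia is None or ib is None or ia == ib:
        return None
    dt = (ib - ia) / sample_rate_hz
    return spacing_m / dt
```

The sign of the time offset directly gives the movement direction, which is why a single pair of sensors covers both measurements.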

Keywords: magnetic sensors, vehicle detection, speed measurement, traffic surveillance system

Procedia PDF Downloads 109
2991 Photovoltaic Cells Characteristics Measurement Systems

Authors: Rekioua T., Rekioua D., Aissou S., Ouhabi A.

Abstract:

The power provided by a photovoltaic array varies with solar radiation and temperature, since these parameters influence the electrical characteristic (Ipv-Vpv) of the solar cells. In scientific research, there are different methods to obtain these characteristics. In this paper, we present three: the first is simulation using Matlab/Simulink; the second is the standard experimental voltage method; and the third uses LabVIEW software. The latter is based on an electronic circuit for testing PV modules. All details of this electronic scheme are presented, and the results obtained with the three methods are compared under different meteorological conditions. The proposed method is simple and very efficient for testing and measuring the electrical characteristic curves of photovoltaic panels.
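For reference, the shape of the Ipv-Vpv characteristic that all three methods aim to capture can be generated from the simplest single-diode model (no series or shunt resistance). The parameter values below are arbitrary illustrative module figures, not the paper's measured data.

```python
import math

def pv_current(v, isc=8.0, voc=37.0, vt=1.5):
    """Simplified single-diode model I = Isc - I0*(exp(V/Vt) - 1), with the
    saturation current I0 chosen so that I(Voc) = 0."""
    i0 = isc / (math.exp(voc / vt) - 1.0)
    return isc - i0 * (math.exp(v / vt) - 1.0)

def iv_curve(points=50, isc=8.0, voc=37.0, vt=1.5):
    """Sample (V, I) pairs of the characteristic from 0 V to Voc."""
    return [(voc * k / (points - 1),
             pv_current(voc * k / (points - 1), isc, voc, vt))
            for k in range(points)]
```

In this model, irradiance mainly scales Isc while temperature mainly shifts Voc, which is how the measured curves move under different meteorological conditions.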

Keywords: photovoltaic cells, measurement standards, temperature sensors, data acquisition

Procedia PDF Downloads 448
2990 The Colouration of Additive-Manufactured Polymer

Authors: Abisuga Oluwayemisi Adebola, Kerri Akiwowo, Deon de Beer, Kobus Van Der Walt

Abstract:

The convergence of additive manufacturing (AM) and traditional textile dyeing techniques has opened innovative possibilities for improving the visual appeal and customization potential of 3D-printed polymer objects. Over centuries, textile dyeing techniques have evolved to transform fabrics with vibrant colours and complex patterns. The layer-by-layer deposition characteristic of AM necessitates adaptations in dye application methods to ensure even colour penetration across complex surfaces. Compatibility between dye formulations and polymer matrices influences colour uptake and stability, demanding careful selection and testing of dyes for optimal results. This study investigates the interaction between these areas, revealing the challenges and opportunities of applying textile dyeing methods to colour 3D-printed polymer materials. The method explores three approaches to colouring a 3D-printed polymer object: (a) additive manufacturing of a prototype, (b) the traditional dyebath method, and (c) the contemporary digital sublimation technique. The results show that the layer lines inherent to AM interact with dyes differently than traditional textile fibres do, affecting the visual outcome. Skilful manipulation of the textile dyeing methods and dye types used in this research reduced the appearance of these lines and achieved consistent, desirable colour outcomes. In conclusion, integrating textile dyeing techniques into the colouring of 3D-printed polymer materials connects historical craftsmanship with innovative manufacturing. Overcoming the challenges of colour distribution, compatibility, and layer-line management requires a holistic approach that blends the technical consistency of AM with the artistic sensitivity of textile dyeing. Hence, applying textile dyeing methods to 3D-printed polymers opens new dimensions of aesthetic and functional possibilities.

Keywords: polymer, 3D-printing, sublimation, textile, dyeing, additive manufacturing

Procedia PDF Downloads 60
2989 Analog Railway Signal Object Controller Development

Authors: Ercan Kızılay, Mustafa Demi̇rel, Selçuk Coşkun

Abstract:

Railway signaling systems consist of vital products that regulate railway traffic and provide safe route arrangements and maneuvers of trains. SIL 4 signal lamps are produced by many manufacturers today. There is a need for systems that enable these signal lamps to be controlled by commands from the interlocking. For the safe operation of railway systems from a RAMS perspective, these systems should behave fail-safe and give error indications to the interlocking system when an unexpected situation occurs. In the past, driving and proving the lamp in relay-based systems was typically done via signaling relays. Today, lamp proving is done by comparing the current values read over the return circuit with lower and upper threshold values. The purpose of this work is an analog electronic object controller that can easily be integrated with vital systems and with the signal lamp itself. During the study, the EN 50126 standard approach was followed, and the concept, definition, risk analysis, requirements, architecture, design, and prototyping were performed throughout. FMEA (Failure Modes and Effects Analysis) and FTA (Fault Tree Analysis) were used for safety analysis in accordance with EN 50129. Based on these analyses, a 1oo2D reactive fail-safe hardware design of the controller was researched. The effects of electromagnetic compatibility (EMC) on the functional safety of the equipment, insulation coordination, and over-voltage protection were addressed during hardware design according to the EN 50124 and EN 50122 standards. As vital equipment for railway signaling, railway signal object controllers should be developed according to the EN 50126 and EN 50129 standards, which identify the steps and requirements of development in accordance with the SIL 4 (Safety Integrity Level) target. As the outcome of this study, an analog railway signal object controller was developed in which commands from the interlocking system are processed in driver cards.
The driver cards set the voltage level according to the desired visibility by means of semiconductors. Additionally, prover cards evaluate the current against the upper and lower thresholds. The evaluated values are processed via logic gates composed as 1oo2D using analog electronic technologies. This logic evaluates the voltage level of the lamp and mitigates the risks of undue dimming.
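The 1oo2D proving decision described above, two diverse channels each comparing the lamp current against lower and upper thresholds, with any out-of-range reading driving the safe state, can be expressed in a few lines. The threshold values and names below are illustrative assumptions; the actual design is analog hardware, not software.

```python
def channel_ok(current_a, low=0.1, high=2.0):
    """One proving channel: the lamp current (A) must lie within the band
    that proves the filament is lit and drawing a plausible current."""
    return low <= current_a <= high

def lamp_proved_1oo2d(i_ch1, i_ch2, low=0.1, high=2.0):
    """Two diverse channels must both prove the lamp; a single channel
    reporting a fault is enough to force the safe (lamp-failed) state
    indicated to the interlocking."""
    return channel_ok(i_ch1, low, high) and channel_ok(i_ch2, low, high)
```

Requiring agreement of both channels for the "healthy" verdict is what makes the arrangement reactive fail-safe: any single-channel failure degrades toward the safe state rather than masking a dark lamp.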

Keywords: object controller, railway electronic, analog electronic, safety, railway signal

Procedia PDF Downloads 87
2988 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses in a rigid body that act on the microstructure of the material without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal, and chemical processes causing a deformation gradient in the crystal lattice, favoring premature failure in mechanical components. The search for measurements with good reliability has been of great importance for the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the mechanical response of the material. The X-ray diffraction technique is one of the most sensitive techniques to small variations of the crystalline lattice, since the X-ray beam interacts with the interplanar distance. Being very sensitive, the technique is also susceptible to variations in the measurements, requiring a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples, and the geometry of the analysis are some of the variables that need to be considered and analyzed in order to obtain the true measurement. The aim of this work is to analyze the sources of error inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other in surface finish: grinding and polishing. Additionally, iron powder with a particle size of less than 45 µm was selected as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, the specimens were fixed in position and, under the same analysis conditions, seven measurements were carried out at 11 ψ tilts. To verify sample-positioning errors, seven measurements were performed, repositioning the sample before each measurement.
To check geometry errors, the measurements were repeated for the Bragg-Brentano and parallel-beam geometries. To verify the reproducibility of the method, the measurements were performed in two different laboratories on different equipment. The results were statistically analyzed and the errors were quantified.
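For context, the quantity ultimately extracted from the ψ-tilt measurements is the slope of the lattice spacing versus sin²ψ, which the classic sin²ψ method converts to stress. The sketch below is a generic illustration of that evaluation; the elastic constants and all names are example assumptions, not the paper's values.

```python
import math

def sin2psi_stress(psis_deg, d_spacings, d0, E=210e9, nu=0.30):
    """Least-squares slope of d vs sin^2(psi), converted to stress via
    sigma = E / ((1 + nu) * d0) * slope (biaxial sin^2psi method)."""
    xs = [math.sin(math.radians(p)) ** 2 for p in psis_deg]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(d_spacings) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, d_spacings))
             / sum((x - mx) ** 2 for x in xs))
    return E / ((1.0 + nu) * d0) * slope
```

Because the stress comes from a fitted slope, small instrumental, positioning, or geometry deviations in each d-spacing propagate directly into the result, which is why the error sources above matter.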

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 168
2987 Reliability of Dissimilar Metal Soldered Joint in Fabrication of Electromagnetic Interference Shielded Door Frame

Authors: Rehan Waheed, Hasan Aftab Saeed, Wasim Tarar, Khalid Mahmood, Sajid Ullah Butt

Abstract:

Electromagnetic Interference (EMI) shielded doors made from extruded brass channels need to be welded to shielded enclosures to attain optimum shielding performance. Controlling welding-induced distortion is a problem when welding dissimilar metals like steel and brass. In this research, soldering of the steel-brass joint is proposed to avoid weld distortion. The material used for the brass channel is UNS C36000. The thickness of the brass is fixed by the manufacturing process, i.e., extrusion, while the thickness of the shielded enclosure material (ASTM A36) can be varied to produce the joint between the dissimilar metals. Steel sections of different gauges are soldered to the brass using a 91% tin, 9% zinc solder, and the strength of the joint is measured by standard test procedures. It is observed that thin steel sheets produce a stronger bond with brass. The steel sections then have to be welded to the shielded enclosure steel sheets through the TIG welding process. The stresses and deformation in the vicinity of the soldered portion are calculated through FE simulation. Crack formation in the soldered area is also studied experimentally. It has been found that in thin sheets the deformation produced by the applied force is localized and has no effect on the soldered joint area, whereas in thick sheets pronounced cracks have been observed in the soldered joint. The shielding effectiveness of an EMI shielded door is compromised by these cracks. The shielding effectiveness of the specimens is tested and the results are compared.

Keywords: dissimilar metal, EMI shielding, joint strength, soldering

Procedia PDF Downloads 155
2986 Tele-Monitoring and Logging of Patient Health Parameters Using Zigbee

Authors: Kirubasankar, Sanjeevkumar, Aravindh Nagappan

Abstract:

This paper presents a system for monitoring patients using biomedical sensors and displaying the data at a remote location. The main shortcomings of present health monitoring devices are the lack of remote monitoring and of logging for future evaluation. Typical instruments used for health parameter measurement provide only basic information regarding health status. This paper identifies a set of design principles to address these challenges. The system includes continuous measurement of health parameters such as heart rate, electrocardiogram, SpO2 level, and body temperature. The accumulated sensor data is relayed to a processing device using a transceiver and viewed through cloud services.

Keywords: bio-medical sensors, monitoring, logging, cloud service

Procedia PDF Downloads 505
2985 Multi-Focus Image Fusion Using SFM and Wavelet Packet

Authors: Somkait Udomhunsakul

Abstract:

In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and the wavelet packet transform is proposed. In the proposed fusion approach, the two source images are first transformed and decomposed into sixteen subbands using the wavelet packet transform. Next, each subband is partitioned into sub-blocks, and the clearer regions of each block are identified using the Spatial Frequency Measurement (SFM). Finally, the fused image is reconstructed by performing the inverse wavelet transform. From the experimental results, it was found that the proposed method outperformed traditional SFM-based methods in terms of objective and subjective assessments.
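The block-selection criterion can be illustrated in a few lines: the spatial frequency of a block combines its row and column gradient energies, and the fusion rule keeps the block with the higher value. This is a generic sketch of the SFM rule under stated assumptions; the block size and names are illustrative, and the wavelet packet stage is omitted.

```python
import math

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): mean row- and column-gradient energy of a
    block, a standard sharpness (clarity) measure."""
    rows, cols = len(block), len(block[0])
    rf = sum((block[r][c] - block[r][c - 1]) ** 2
             for r in range(rows) for c in range(1, cols))
    cf = sum((block[r][c] - block[r - 1][c]) ** 2
             for r in range(1, rows) for c in range(cols))
    n = rows * cols
    return math.sqrt(rf / n + cf / n)

def fuse_blocks(block_a, block_b):
    """Keep the block with the higher spatial frequency (the clearer one)."""
    if spatial_frequency(block_a) >= spatial_frequency(block_b):
        return block_a
    return block_b
```

Applying this choice per sub-block in each wavelet packet subband, then inverting the transform, yields the fused image described above.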

Keywords: multi-focus image fusion, wavelet packet, spatial frequency measurement

Procedia PDF Downloads 464