Search results for: noise measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3875

815 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth's ionosphere is located at altitudes from about 70 km to several hundred kilometres above the ground and is composed of ions and electrons called plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. For a long time, sounding observations from the top and bottom of the ionosphere were the common way to investigate ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are primarily built for land survey, has been conducted in several countries. In these stations, however, multi-frequency receivers are installed to estimate the plasma delay from its frequency dependence, and such receivers cost much more than single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation such as the vertical TEC distribution. A single-frequency u-blox GPS receiver was used to probe the ionospheric TEC, with the observation site located at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of these functions are determined by least-squares fitting on pseudorange data obtained at a known location under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements from the Japanese GNSS observation network GEONET, and the single-frequency GPS results were compared with dual-frequency measurements.
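To illustrate the fitting step described above, the sketch below sets up a minimal least-squares estimate of a polynomial vertical-TEC map from slant observations under a thin-layer assumption. It is a simplified illustration, not the authors' code: the polynomial order, the mapping-function form, and the synthetic observation arrays (lat_ipp, lon_ipp, elev, stec_obs) are assumptions for demonstration.

```python
import numpy as np

# Hypothetical observations at ionospheric pierce points (IPPs):
# latitude/longitude offsets from the receiver (deg), satellite elevation (rad),
# and slant TEC derived from pseudorange data (TECU). Values are placeholders.
lat_ipp = np.array([1.2, -0.8, 0.4, 2.1, -1.5])
lon_ipp = np.array([0.5, 1.9, -2.2, 0.7, -0.3])
elev = np.radians([35.0, 52.0, 68.0, 41.0, 27.0])
stec_obs = np.array([48.3, 39.7, 33.1, 44.9, 55.2])

# Thin-layer mapping function: converts vertical TEC to slant TEC
# for a shell at height h above an Earth of radius Re (km).
Re, h = 6371.0, 350.0
mf = 1.0 / np.sqrt(1.0 - (Re * np.cos(elev) / (Re + h)) ** 2)

# VTEC modelled as a low-order polynomial in latitude and longitude:
# VTEC = a0 + a1*lat + a2*lon + a3*lat^2 + a4*lon^2 + a5*lat*lon
A = np.column_stack([
    np.ones_like(lat_ipp), lat_ipp, lon_ipp,
    lat_ipp ** 2, lon_ipp ** 2, lat_ipp * lon_ipp,
]) * mf[:, None]          # slant TEC = mapping function * polynomial VTEC

coeffs, *_ = np.linalg.lstsq(A, stec_obs, rcond=None)
vtec_at_zenith = coeffs[0]  # VTEC directly above the receiver
print(coeffs, vtec_at_zenith)
```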

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 137
814 Investigation of the Growth Kinetics of Phases in Ni–Sn System

Authors: Varun A Baheti, Sanjay Kashyap, Kamanio Chattopadhyay, Praveen Kumar, Aloke Paul

Abstract:

The Ni–Sn system finds applications in the microelectronics industry, especially with respect to flip-chip or direct chip attach technology. Here the region of interest is the under bump metallization (UBM)/solder bump (Sn) interface, due to the formation of brittle intermetallic phases there. Understanding the growth of these phases at the UBM/Sn interface is important, as in many cases it controls the electro-mechanical properties of the product. Cu and Ni are the commonly used UBM materials: Cu is used for good bonding because of its fast reaction with solder, and Ni often acts as a diffusion barrier layer due to its inherently slower reaction kinetics with Sn-based solders. An investigation of the growth kinetics of phases in the Ni–Sn system is reported in this study. For simplicity, Sn, the major solder constituent, is chosen. Ni–Sn electroplated diffusion couples are prepared by electroplating pure Sn on a Ni substrate. Bulk diffusion couples prepared by the conventional method are also studied along with the Ni–Sn electroplated diffusion couples. Diffusion couples are annealed for 25–1000 h at 50–215°C to study the phase evolution and growth kinetics of the various phases. The interdiffusion zone was imaged using a field-emission-gun scanning electron microscope (FE-SEM). Indexing of selected area diffraction (SAD) patterns obtained from a transmission electron microscope (TEM) and composition measurements done in a field-emission electron probe micro-analyser (FE-EPMA) confirm the presence of the various product phases grown across the interdiffusion zone. Time-dependent experiments indicate diffusion-controlled growth of the product phases. The activation energy estimated in the temperature range 125–215°C for the parabolic growth constants (and hence the integrated interdiffusion coefficients) of the Ni₃Sn₄ phase sheds light on the growth mechanism of the phase, i.e., whether it is controlled by grain-boundary or lattice diffusion. The location of the Kirkendall marker plane indicates that the Ni₃Sn₄ phase grows mainly by diffusion of Sn in the binary Ni–Sn system.
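As a simple worked illustration of the kinetics analysis mentioned above, the sketch below fits a parabolic growth law to layer-thickness data and then extracts an apparent activation energy from an Arrhenius plot of the growth constants. It is a generic sketch, not the authors' analysis: the thickness, time, and temperature arrays are invented placeholders.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical Ni3Sn4 layer thicknesses (m) measured after different
# annealing times (s) at one temperature: x^2 = 2*k*t (parabolic growth).
t = np.array([25, 100, 400, 1000]) * 3600.0          # s
x = np.array([0.8, 1.7, 3.3, 5.1]) * 1e-6            # m
k, *_ = np.linalg.lstsq(2.0 * t[:, None], x ** 2, rcond=None)
print("parabolic growth constant k =", k[0], "m^2/s")

# Hypothetical growth constants at several temperatures (K):
# Arrhenius fit  ln k = ln k0 - Q/(R*T)  gives the activation energy Q.
T = np.array([398.0, 448.0, 488.0])                  # 125, 175, 215 degrees C
k_T = np.array([2.0e-18, 1.5e-17, 6.0e-17])          # m^2/s, placeholders
slope, intercept = np.polyfit(1.0 / T, np.log(k_T), 1)
Q = -slope * R
print("activation energy Q =", Q / 1000.0, "kJ/mol")
```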

Keywords: diffusion, equilibrium phase, metastable phase, the Ni-Sn system

Procedia PDF Downloads 307
813 Coupling Static Multiple Light Scattering Technique With the Hansen Approach to Optimize Dispersibility and Stability of Particle Dispersions

Authors: Guillaume Lemahieu, Matthias Sentis, Giovanni Brambilla, Gérard Meunier

Abstract:

Static Multiple Light Scattering (SMLS) has been shown to be a straightforward technique for the characterization of colloidal dispersions without dilution, as multiply scattered light in backscattered and transmitted mode is directly related to the concentration and size of the scatterers present in the sample. In this view, the use of SMLS for stability measurement of various dispersion types has already been widely described in the literature. Indeed, starting from a homogeneous dispersion, the variation of backscattered or transmitted light can be attributed to destabilization phenomena, such as migration (sedimentation, creaming) or particle size variation (flocculation, aggregation). With a view to further investigating the dispersibility of colloidal suspensions, an experimental set-up for “at-line” SMLS experiments has been developed to understand the impact of the formulation parameters on particle size and dispersibility. The SMLS experiment is performed with a high acquisition rate (up to 10 measurements per second), without dilution, and under direct agitation. Using such an experimental device, SMLS detection can be combined with the Hansen approach to optimize the dispersing and stabilizing properties of TiO₂ particles. It appears that the dispersibility and stability spheres generated are clearly separated, arguing that lower stability is not necessarily a consequence of poor dispersibility. Beyond this clarification, this combined SMLS-Hansen approach is a major step toward the optimization of the dispersibility and stability of colloidal formulations by finding solvents that offer the best compromise between dispersing and stabilizing properties. Such a study can be used to find better dispersion media, greener and cheaper solvents to optimize particle suspensions, reduce the content of costly stabilizing additives, or keep pace with evolving product regulatory requirements in the various industrial fields using suspensions (paints & inks, coatings, cosmetics, energy).
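The Hansen approach referred to above ranks solvents by their distance to a solute or particle in Hansen-parameter space. The sketch below computes the standard Hansen distance and the relative energy difference (RED) for a few candidate solvents; the particle parameters, interaction radius, and solvent list are illustrative assumptions, not values from the study.

```python
import math

def hansen_distance(s1, s2):
    """Standard Hansen distance Ra between two (dD, dP, dH) parameter sets (MPa^0.5)."""
    dD1, dP1, dH1 = s1
    dD2, dP2, dH2 = s2
    return math.sqrt(4.0 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

# Hypothetical Hansen parameters (dD, dP, dH) for the dispersed particle
# and its interaction radius R0, plus a few candidate solvents.
particle, R0 = (17.0, 8.0, 9.0), 7.0
solvents = {
    "water": (15.5, 16.0, 42.3),
    "ethanol": (15.8, 8.8, 19.4),
    "toluene": (18.0, 1.4, 2.0),
}

for name, params in solvents.items():
    Ra = hansen_distance(particle, params)
    red = Ra / R0          # RED < 1 suggests a good (dispersing) solvent
    print(f"{name:8s}  Ra = {Ra:5.1f}  RED = {red:4.2f}")
```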

Keywords: dispersibility, stability, Hansen parameters, particles, solvents

Procedia PDF Downloads 110
812 The Effect of Artificial Intelligence on Digital Factory

Authors: Sherif Fayez Lewis Ghaly

Abstract:

Factory planning has the mission of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and is therefore an essential component. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements; the tight time schedules require up-to-date planning models. Given the high variation rate of factories described above, a method for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The purpose is to keep the planning basis (the digital factory model) up to date for conversions within a factory. This calls for a methodology that reduces the deficits of existing techniques. The goal is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented, focusing on a simple and cost-effective way to track the numerous changes that occur in a factory building in the course of operation. The method is preceded by a hardware and software assessment to identify the most cost-effective and fastest variant.

Keywords: building information modeling, digital factory model, factory planning, maintenance digital factory model, photogrammetry, restructuring

Procedia PDF Downloads 28
811 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy

Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu

Abstract:

Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to material engineering and biology, in which it is possible to image otherwise unseen phenomena in situ. For this, a nanofluidic device is used to bring the nanoflow with the sample inside the microscope while keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the inside fluid and the outside vacuum in the TEM generates bulging in the windows. This increases the imaged fluid volume, which decreases the signal to noise ratio (SNR), limiting the achievable spatial resolution. In the proposed device, the membrane is fortified with a microstructure capable of withstanding higher pressure differences and almost completely removing the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the mechanical state of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The device was microfabricated from a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched (by reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the back-side SiO2 was etched (BOE) and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets, and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Later, a thin spacer is sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples. This approach considerably reduces the common bulging problem of the window, improving the SNR, contrast and spatial resolution, substantially increasing the mechanical stability of the windows, and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
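As a back-of-the-envelope companion to the FEM bulging study described above, the sketch below uses the classical thin-plate result for a clamped square window under uniform pressure, w_max ≈ α q a⁴ / D with α ≈ 0.00126 and D = E t³ / [12(1 − ν²)]. The window size (50 µm), thickness (500 nm), and Si3N4 material constants are illustrative assumptions, and the linear plate formula ignores the membrane stretching that dominates for very thin windows, so this is only an order-of-magnitude check, not the paper's analysis.

```python
# Thin-plate estimate of window bulging under the TEM vacuum load.
# Assumed values; the paper's actual dimensions and FEM results will differ.
E = 250e9        # Young's modulus of Si3N4, Pa (typical literature value)
nu = 0.23        # Poisson's ratio of Si3N4 (typical literature value)
t = 500e-9       # membrane thickness, m (assumed)
a = 50e-6        # window side length, m (assumed)
q = 1.013e5      # pressure difference across the window (1 atm), Pa

D = E * t ** 3 / (12.0 * (1.0 - nu ** 2))   # flexural rigidity
alpha = 0.00126                             # clamped square plate coefficient
w_max = alpha * q * a ** 4 / D              # centre deflection (linear theory)

print(f"flexural rigidity D = {D:.3e} N*m")
print(f"estimated centre bulge = {w_max * 1e6:.2f} um")
```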

Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films

Procedia PDF Downloads 255
810 Redefining Success Beyond Borders: A Deep Dive into Effective Methods to Boost Morale Among Virtual Workers for Exponential Project Performance

Authors: Florence Ibeh, David Oyewmi Oyekunle, David Boohene

Abstract:

The continuous advancement of information technology has completely transformed how businesses and organizations operate on a global scale. The widespread availability of virtual communication tools enables individuals to opt for remote work. While remote employment offers various benefits, such as facilitating corporate growth and enhancing customer support, it also presents distinct challenges. Therefore, investigating the intricacies of virtual team morale is crucial for ensuring the achievement of project objectives. For this study, content analysis of pre-existing secondary data was employed to examine the phenomenon. Elements vital for improving the success of projects within virtual teams were identified. These factors include technology adoption, creating a distraction-free work environment, effective leadership, trust-building, clear communication channels, well-defined task allocation, active team participation, and motivation. Furthermore, the study established a substantial correlation between morale levels and the participation and productivity of virtual team members: higher levels of morale were associated with optimal performance among virtual teams. The study determined that the key factors for enhancing project performance in virtual teams are the adoption of technology, a focused environment, effective leadership, trust, communication, well-defined tasks, collaborative teamwork, and motivation. Additionally, the study found that adapting the best practices employed by in-office teams can lift the diminished morale prevalent in remote teams and sustain a high level of team morale in virtual teams. The findings of this study are highly significant in the dynamic field of project management. Currently, there is limited information regarding strategies that address challenges arising from external factors in virtual teams, such as ambient noise and disruptions caused by family members. The findings underscore the significance of selecting appropriate communication technologies, delineating distinct roles and responsibilities for virtual team members, and nurturing a culture of accountability and trust. Promoting seamless collaboration and instilling motivation among virtual team members are deemed highly effective in augmenting employee engagement and performance within a virtual team setting.

Keywords: virtual teams, morale, project performance, distraction-free environment, technology adaptation

Procedia PDF Downloads 95
809 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
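The averaging of the three top-performing models described in this abstract is essentially a soft-voting ensemble. The sketch below shows a minimal version with scikit-learn, assuming a tabular feature matrix X (timing variables, weather forecasts, past pollutant levels) and a categorical air-quality label y; the features, labels, and hyperparameters are placeholders, not the study's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data: 500 days x 6 features (season, weekday flag, forecast
# temperature/wind, yesterday's PM2.5 and ozone), label = AQI category (0-2).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Soft voting averages the predicted class probabilities of the three models,
# mirroring the abstract's averaging of the top-performing models.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```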

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 127
808 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units

Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro

Abstract:

In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days; if a relocation or a change of period is required, the consumer must be notified in writing, in advance of a billing period. To make it easier to organize a workday’s measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating such a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee’s working load and the geographical position of the consuming units. This process is done manually by several experts who have experience in the geographic formation of the region; it takes a large number of days to complete the final planning and, because it is a human activity, there is no guarantee of finding the best plan. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of working load and 11.97% in the compactness of the groups.
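As a rough illustration of how a genetic algorithm can wrap K-Means for capacitated clustering, the sketch below evolves sets of centroid seeds and penalizes clusters whose total load exceeds a capacity limit. It is a toy sketch under assumed data (random consumer coordinates and unit loads) and simplified operators, not the GBKMeans algorithm as published.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
points = rng.uniform(0, 10, size=(300, 2))   # consumer unit coordinates
loads = rng.uniform(1, 3, size=300)          # reading effort per unit
K, CAPACITY = 6, 120.0                       # reading groups and max workload

def fitness(seeds):
    """Compactness (inertia) plus a penalty for overloaded reading groups."""
    km = KMeans(n_clusters=K, init=seeds, n_init=1).fit(points)
    penalty = 0.0
    for c in range(K):
        over = loads[km.labels_ == c].sum() - CAPACITY
        penalty += max(over, 0.0) ** 2
    return km.inertia_ + 10.0 * penalty

# Genetic loop over candidate seed sets: truncation selection, blend
# crossover and Gaussian mutation on the centroid coordinates.
pop = [points[rng.choice(len(points), K, replace=False)] for _ in range(20)]
for _ in range(30):
    scored = sorted(pop, key=fitness)
    parents = scored[:10]
    children = []
    while len(children) < 10:
        a, b = rng.choice(10, 2, replace=False)
        child = 0.5 * (parents[a] + parents[b]) + rng.normal(0, 0.3, size=(K, 2))
        children.append(child)
    pop = parents + children

best = min(pop, key=fitness)
print("best fitness:", fitness(best))
```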

Keywords: capacitated clustering, k-means, genetic algorithm, districting problems

Procedia PDF Downloads 198
807 A Low-Cost of Foot Plantar Shoes for Gait Analysis

Authors: Zulkifli Ahmad, Mohd Razlan Azizan, Nasrul Hadi Johari

Abstract:

This paper presents the development and testing of a wearable sensor system for gait analysis measurement. For validation, plantar surface measurements with a force plate were used. In conventional gait analysis, a force plate generally captures barefoot steps, does not allow analysis of repeated steps in normal walking and running, and does not represent the daily plantar pressures inside the shoe insole; it only provides the ground reaction force. Force plate measurement is usually limited to a few steps, it is done indoors, and coupled information from both feet during walking is not easily obtained. Nowadays, in order to measure pressure over a large number of steps and obtain the pressure in each part of the insole, sensors can be placed within the insole. This method provides a way to determine the plantar pressures of a shoe-wearing subject while standing, walking or running. Inserting pressure sensors in the insole provides localized information, and the placement of the sensors therefore identifies the critical regions under the insole. In the wearable shoe sensor project, the device consists of left and right shoe insoles, each with ten force-sensitive resistors (FSRs). An Arduino Mega was used as the microcontroller to read the analog inputs from the FSRs. The readings were transmitted via Bluetooth, giving the force data in real time on a smartphone. The Blueterm software, an Android application, was used as the interface to read the FSR values from the shoe-wearing subject. The subjects were two healthy men of different age and weight, tested while standing, walking (1.5 m/s), jogging (5 m/s) and running (9 m/s) on a treadmill. The data obtained were saved on the Android device for analysis and comparison graphs.
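On the receiving side, logging the transmitted FSR values amounts to reading lines from a Bluetooth serial stream. The sketch below is a minimal Python logger, assuming the Arduino sends one comma-separated line of ten readings per sample over a serial-over-Bluetooth port; the port name, baud rate, and line format are assumptions, since the paper only states that the Blueterm Android app was used as the interface.

```python
import csv
import serial  # pyserial

PORT = "/dev/rfcomm0"   # hypothetical Bluetooth serial port; e.g. "COM5" on Windows
BAUD = 9600             # assumed baud rate

# Each incoming line is assumed to look like: "t_ms,fsr1,...,fsr10"
with serial.Serial(PORT, BAUD, timeout=1) as link, \
        open("plantar_log.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["t_ms"] + [f"fsr{i}" for i in range(1, 11)])
    for _ in range(1000):  # log roughly 1000 samples
        line = link.readline().decode(errors="ignore").strip()
        fields = line.split(",")
        if len(fields) == 11:          # timestamp + ten FSR channels
            writer.writerow(fields)
```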

Keywords: gait analysis, plantar pressure, force plate, wearable sensor

Procedia PDF Downloads 453
806 The Impact of Nutrition Education Intervention in Improving the Nutritional Status of Sickle Cell Patients

Authors: Lindy Adoma Dampare, Marina Aferiba Tandoh

Abstract:

Sickle cell disease (SCD) is an inherited blood disorder that mostly affects individuals in sub-Saharan Africa. Nutritional deficiencies have been well established in SCD patients. In Ghana, studies have revealed the prevalence of malnutrition, especially amongst children with SCD, and hence the need to develop an evidence-based, comprehensive nutritional therapy for SCD to improve their nutritional status. The aim of the study was to develop and assess the effect of a nutrition education material on the nutritional status of SCD patients in Ghana. This was a pre-post interventional study. Patients between the ages of 2 and 60 years were recruited from the Tema General Hospital. Following a baseline assessment of nutrition knowledge (NK), beliefs, sanitary practices and dietary consumption pattern, a twice-monthly nutrition education was carried out for 3 months, followed by a post-intervention assessment. The nutritional status of SCD patients was assessed using a 3-day dietary recall and anthropometric measurements. Nutrition education (NE) was given to SCD adults and caregivers of SCD children. The majority of the caregivers (69%) and SCD adults (82%) had low NK at baseline. The level of NK improved significantly in SCD adults (4.18±1.83 vs. 10.00±1.00, p<0.001) and caregivers (5.58 ± 2.25 vs. 10.44± 0.846, p<0.001) after NE. The increase in NK improved the dietary intake and dietary consumption pattern of SCD patients. A significant increase in weight (23.2±11.6 vs. 25.9±12.1, p=0.036) and height (118.5±21.9 vs. 123.5±22.2, p=0.011) was observed in SCD children post-intervention. Stunting (10.5% vs. 8.6%, p=0.62) and wasting (22.1% vs. 14.4%, p=0.30) reduced in SCD children after NE, although not statistically significantly. A reduction (18.2% vs. 9.1%) in underweight and an increase (18.2% vs. 27.3%) in overweight SCD adults were recorded post-intervention. Fat mass remained the same, while high muscle mass increased (18.2% vs. 27.3%) post-intervention in SCD adults. The anaemic status of SCD patients improved post-intervention, and the improvement was statistically significant amongst SCD children. Nutrition education improved the NK of SCD caregivers and adults, hence improving the dietary consumption pattern and nutrient intake of SCD patients. Overall, NE improved the nutritional status of SCD patients. This study shows the potential of nutrition education in improving the nutritional knowledge, dietary consumption pattern, dietary intake and nutritional status of SCD patients, and should be further explored.

Keywords: sickle cell disease, nutrition education, dietary intake, nutritional status

Procedia PDF Downloads 103
805 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the then best known classical algorithm for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and the work is often used as a benchmarking tool and a foundation to explore variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also points to the prospect of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization problem. The evaluation of the cost function, like in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA with a COBYLA optimizer, which is a linear-approximation-based method, and in some instances, it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
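The sketch below shows the kind of gradient-free evolutionary loop described above, evolving the QAOA angle parameters (gamma, beta) against a black-box expectation function. The objective here is a stand-in surrogate; in the actual work it would be the measured Max-Cut expectation of the QAOA circuit, and the population size, mutation scale, and depth P are illustrative choices only.

```python
import numpy as np

P = 2                      # QAOA depth: P gamma angles + P beta angles
rng = np.random.default_rng(42)

def expectation(params):
    """Stand-in for the (negated) Max-Cut expectation returned by the QAOA
    circuit; in practice this would call the quantum simulator or hardware."""
    gammas, betas = params[:P], params[P:]
    return -np.sum(np.sin(2 * betas) * np.sin(gammas))  # toy smooth landscape

# (mu + lambda) evolution strategy on the 2P angles, no gradients needed.
mu, lam, sigma = 8, 16, 0.3
pop = rng.uniform(0, np.pi, size=(mu, 2 * P))
for generation in range(100):
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = parents + rng.normal(0.0, sigma, size=parents.shape)
    candidates = np.vstack([pop, offspring])
    scores = np.array([expectation(c) for c in candidates])
    pop = candidates[np.argsort(scores)[:mu]]       # keep the mu best (minimise)

best = pop[0]
print("best angles:", best, "objective:", expectation(best))
```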

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 60
804 Deep Learning for Image Correction in Sparse-View Computed Tomography

Authors: Shubham Gogri, Lucia Florescu

Abstract:

Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem— the incomplete projection data results in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of the sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier Loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based Generator and a Discriminator based on Convolutional Neural Networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated based on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
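To make the loss combination concrete, the sketch below is a minimal PyTorch version of a Charbonnier pixel loss plus a VGG16-feature perceptual loss of the kind described above. It is a generic sketch, not the authors' implementation: the layer cut-off, the epsilon value, and the loss weights are assumptions, and in practice the VGG16 features would be loaded with ImageNet-pretrained weights.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class CharbonnierLoss(nn.Module):
    """Smooth L1-like loss: sqrt((x - y)^2 + eps^2), averaged over all pixels."""
    def __init__(self, eps=1e-3):
        super().__init__()
        self.eps = eps

    def forward(self, pred, target):
        return torch.mean(torch.sqrt((pred - target) ** 2 + self.eps ** 2))

class PerceptualLoss(nn.Module):
    """L1 distance between VGG16 feature maps of prediction and ground truth."""
    def __init__(self):
        super().__init__()
        # weights="IMAGENET1K_V1" would load the pretrained features assumed by
        # the perceptual loss; left untrained here so the sketch runs offline.
        self.features = vgg16(weights=None).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):
        # CT slices are single-channel; repeat to the 3 channels VGG expects.
        pred3, target3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return nn.functional.l1_loss(self.features(pred3), self.features(target3))

# Combined generator loss, with an assumed perceptual weight of 0.1.
charbonnier, perceptual = CharbonnierLoss(), PerceptualLoss()
pred = torch.rand(2, 1, 128, 128)      # corrected sparse-view reconstruction
target = torch.rand(2, 1, 128, 128)    # full-view ground truth
loss = charbonnier(pred, target) + 0.1 * perceptual(pred, target)
print(loss.item())
```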

Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net

Procedia PDF Downloads 162
803 Loss of Green Space in Urban Metropolitan and Its Alarming Impacts on Teenagers' Life: A Case Study on Dhaka

Authors: Nuzhat Sharmin

Abstract:

Human beings are an integral part of nature and are responsible for maintaining ecological balance in both rural and urban areas. Unfortunately, we are not doing this job with a holistic approach. The rapid growth of urbanization is making human life more isolated from greenery, and modern urban living now involves sensory deprivation and overloaded stress. Many cities and towns of the world are expanding unabated in the name of urbanization and industrialization and are in fact becoming jungles of concrete. Dhaka is one example of such a city, where open and green spaces are decreasing to accommodate the overflow of population. This review paper has been prepared based on interviews with 30 teenagers, both male and female, in Dhaka city. There were 12 open-ended questions in the questionnaire. For the literature review, information was gathered from scholarly papers published in various peer-reviewed journals; some information was collected from newspapers and some from fellow colleagues working around the world. Ideally, about 25% of an urban area should be kept open or covered with parks, fields and/or plants and vegetation. Currently, however, Dhaka has only about 10-12% open space, and even this is being filled up rapidly; Old Dhaka has only about 5% open space, while new Dhaka has about 12%. Dhaka is now one of the most populated cities in the world, and in accommodating this huge influx of people it is continuously losing its open space. As a result, children and teenagers are losing their interest in playing games and making friends, and are instead mostly occupied by television, gadgets and social media. The interviews showed that only 28% of the teenagers play regularly, and the majority of them have to play on the street or rooftop for lack of open space. On average, they are occupied with electronic devices for 8.3 hours/day; 64% of them have chronic diseases and often visit doctors; most shockingly, 35% of them reported not having any friends. Green space offers relief from stress: areas of natural environment in towns and cities are seen as providing settings for recovery and recuperation from the anxiety and strains of the urban environment. Good quality green spaces encourage people to walk, run, cycle and play. Green spaces improve air quality and reduce noise, while trees and shrubbery help to filter out dust and pollutants. Relaxation, contemplation and passive recreation are essential to stress management. All city governments that are losing their open spaces should immediately pay attention to this aesthetic issue for the benefit of urban people, and all kinds of development must be sustainable both for human beings and for nature.

Keywords: greenery, health, human, urban

Procedia PDF Downloads 175
802 Relation of Mean Platelet Volume with Serum Paraoxonase-1 Activity and Brachial Artery Diameter and Intima Media Thickness in Diabetic Patients with Respect to Obesity and Diabetic Complications

Authors: Pınar Karakaya, Meral Mert, Yildiz Okuturlar, Didem Acarer, Asuman Gedikbasi, Filiz Islim, Teslime Ayaz, Ozlem Soyluk, Ozlem Harmankaya, Abdulbaki Kumbasar

Abstract:

Objective: To evaluate the relation of mean platelet volume (MPV) levels with serum paraoxonase-1 activity and brachial artery diameter and intima media thickness in diabetic patients with respect to obesity and diabetic complications. Methods: A total of 201 diabetic patients, grouped with respect to obesity [obese (n=89) and non-obese (n=112)] and diabetic complications [with (n=50) or without (n=150) microvascular complications and with (n=91) or without (n=108) macrovascular complications], were included. Data on demographic and lifestyle characteristics of patients, anthropometric measurements, diabetes-related microvascular and macrovascular complications, serum levels of MPV, brachial artery diameter and intima media thickness (IMT), and serum paraoxonase and arylesterase activities were recorded. The correlation of MPV values with paraoxonase and arylesterase activities as well as with brachial artery diameter and IMT was evaluated in the study groups. Results: Mean(SD) paraoxonase and arylesterase values were 119.8(37.5) U/L and 149.0(39.9) U/L, respectively, in the overall population, with no significant difference with respect to obesity and macrovascular diabetic complications, whereas significantly lower values for paraoxonase (107.5(30.7) vs. 123.9(38.8) U/L, p=0.007) and arylesterase (132.1(30.2) vs. 154.7(41.2) U/L, p=0.001) were noted in patients with than without diabetic microvascular complications. Mean(SD) MPV values were 9.10 (0.87) fL in the overall population, with no significant difference with respect to obesity and diabetic complications. No significant correlation of MPV values with paraoxonase or arylesterase activities, or with brachial artery diameter and IMT, was noted in the overall study population or in the study groups. Conclusion: In conclusion, our findings revealed a significant decrease in PON-1 activity in diabetic patients with microvascular rather than macrovascular complications, whereas, regardless of obesity and diabetic complications, there was no increase in thrombogenic activity and no relation of thrombogenic activity with PON-1 activity, brachial artery diameter or IMT.

Keywords: atherosclerosis, diabetes mellitus, microvascular complications, macrovascular complications, obesity, paraoxonase

Procedia PDF Downloads 356
801 Waist Circumference-Related Performance of Tense Indices during Varying Pediatric Obesity States and Metabolic Syndrome

Authors: Mustafa Metin Donma

Abstract:

Obesity increases the risk of elevated blood pressure, which is a metabolic syndrome (MetS) component. Waist circumference (WC) is accepted as an indispensable parameter for the evaluation of these health problems. The close relationship of height with blood pressure values revealed the necessity of including height in tense indices, and the association of tense indices with WC has also become an increasingly important topic. The purpose of this study was to develop a tense index that could contribute more to the differential diagnosis of MetS than the indices previously introduced. One hundred and ninety-four children, aged 6-11 years, were divided into four groups. The study was performed on normal weight (Group 1), overweight+obese (Group 2), and morbid obese [without (Group 3) and with (Group 4) MetS findings] children. Children were assigned to the groups according to the recommendations of the World Health Organization, based on age- and gender-dependent body mass index percentiles. For the MetS group, well-established MetS components were considered. Anthropometric measurements as well as blood pressure values were taken, and tense indices were computed. The formula for the first tense index was (SP+DP)/2. The second index was the Advanced Donma Tense Index (ADTI), with the formula [(SP+DP)/2] * Height. Statistical calculations were performed, and 0.05 was accepted as the p value indicating statistical significance. There were no statistically significant differences between the groups for pulse pressure, systolic-to-diastolic pressure ratio and tense index. Increasing values were observed from Group 1 to Group 4 for mean arterial blood pressure and ADTI, which was highly correlated with WC in all groups except Group 1. Both the tense index and ADTI exhibited significant correlations with WC in Group 3. However, in Group 4, ADTI, which includes the height parameter in its equation, was unique in establishing a strong correlation with WC. In conclusion, ADTI was suggested as a tense index while investigating children with MetS.
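For reference, the two indices defined in the abstract reduce to the short computation below, assuming systolic (SP) and diastolic (DP) pressures in mmHg and height in metres; the units used for ADTI are not stated in the abstract, so they are an assumption here, as are the example values.

```python
def tense_index(sp, dp):
    """First tense index: mean of systolic (SP) and diastolic (DP) pressure."""
    return (sp + dp) / 2.0

def adti(sp, dp, height_m):
    """Advanced Donma Tense Index: [(SP + DP) / 2] * height."""
    return tense_index(sp, dp) * height_m

# Illustrative child: SP = 104 mmHg, DP = 66 mmHg, height = 1.32 m
print(tense_index(104, 66))      # 85.0
print(adti(104, 66, 1.32))       # 112.2
```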

Keywords: blood pressure, child, height, metabolic syndrome, waist circumference

Procedia PDF Downloads 58
800 Efficiency Validation of Hybrid Geothermal and Radiant Cooling System Implementation in Hot and Humid Climate Houses of Saudi Arabia

Authors: Jamil Hijazi, Stirling Howieson

Abstract:

Over one-quarter of the Kingdom of Saudi Arabia’s total oil production (2.8 million barrels a day) is used for electricity generation. The built environment is estimated to consume 77% of the total energy production. Of this amount, air conditioning systems consume about 80%. Apart from considerations surrounding global warming and CO2 production, it has to be recognised that oil is a finite resource, and the KSA, like many other oil-rich countries, will have to start to consider a horizon where hydrocarbons are not the dominant energy resource. The employment of hybrid ground cooling pipes in combination with black body solar collection and radiant night cooling systems may have the potential to displace a significant proportion of the oil currently used to run conventional air conditioning plant. This paper presents an investigation into the viability of such hybrid systems, with the specific aim of reducing carbon emissions while providing all year round thermal comfort in a typical Saudi Arabian urban housing block. At the outset, air and soil temperatures were measured in the city of Jeddah. A parametric study was then carried out with computational simulation software (Design Builder) that utilised the field measurements and predicted the cooling energy consumption of both a base case and an ideal scenario (a typical block retrofitted with insulation, solar shading, ground pipes integrated with hypocaust floor slabs/stack ventilation, and radiant cooling pipes embedded in the floor). Initial simulation results suggest that careful ‘ecological design’ combined with hybrid radiant and ground-pipe cooling techniques can displace air conditioning systems, producing significant cost and carbon savings (both capital and running) without appreciable deprivation of amenity.

Keywords: energy efficiency, ground pipe, hybrid cooling, radiative cooling, thermal comfort

Procedia PDF Downloads 262
799 Healthcare Professional’s Well-Being: Case Study of Two Care Units in a Big Hospital in Canada

Authors: Zakia Hammouni

Abstract:

Healthcare professionals’ well-being is becoming a priority during this Covid-19 pandemic due to stress, fatigue, and workload. Well before this pandemic, contemporary hospitals were endowed with environmental attributes intended to support well-being, with the emphasis on the patient. The patient-centered care approach has been followed by the patient-centered design approach. Studies that have focused on the physical environment in hospitals have dealt with the patient's recovery process and well-being. Prior scientific literature has placed less emphasis on healthcare professionals’ interactions with the physical environment and on guiding hospital designers to make evidence-based design choices that meet the needs and expectations of hospital users by considering, in addition to patients, healthcare professionals. This paper examines these issues related to the daily stress of professionals who provide care in a hospital environment. In this exploratory study, the interest was to grasp the issues related to this environment and to explore the current realities of newly built hospitals, based on design approaches, and the attributes of the physical setting that support healthcare professionals’ well-being. Within a constructivist approach, this study was conducted in two care units in a new hospital in a big city in Canada before the Covid-19 pandemic (August 2nd to November 2nd, 2018). A spatial evaluation of these care units allowed us to understand the interaction of health professionals with their work environment and their spatial behavior, complemented by narratives from 44 interviews with various healthcare professionals. The mental images validated the salient components of the hospital environment as perceived by these healthcare professionals. Thematic analysis and triangulation of the data set were conducted. Among the key attributes promoting healthcare professionals’ well-being, as revealed by the healthcare professionals themselves, are the overall light and color atmosphere in the hospital and care units, particularly in the corridors and public areas of the hospital, and the maintenance and cleanliness. The presence of art elements also brings well-being to the health professionals, as do panoramic views from the staff lounge, the corridors of the care units, and the elevator lobbies. Despite the overall positive assessment of this environment, some attributes need to be improved to ensure the well-being of healthcare professionals and to provide them with a restorative environment. These are the supply of natural light, softer colors, sufficient furniture, comfortable seating in the rest room, and views, all of which are important in allowing these healthcare professionals to recover from their work stress. Noise is another attribute that needs further improvement in the hospital work environment, especially in the nursing workstations and the consultants’ room. In conclusion, this study highlights the importance of providing healthcare professionals with work and rest areas that allow them to resist the stress they face, particularly during periods of extreme stress and fatigue such as the Covid-19 pandemic.

Keywords: healthcare facilities, healthcare professionals, physical environment, well-being

Procedia PDF Downloads 127
798 Comparison between Open and Closed System for Dewatering with Geotextile: Field and Comparative Study

Authors: Matheus Müller, Delma Vidal

Abstract:

The present paper aims to present two dewatering techniques for sludge, analysing their operation and dewatering processes, with the goal of improving the disposal conditions of residues with high liquid content. It describes the field tests performed on two geotextile systems, a closed geotextile tube and an open geotextile drying bed, both of which were submitted to two filling cycles. The sludge used in the filling cycles for the field trials is from the water treatment plant of the Technological Center of Aeronautics – CTA, in São José dos Campos, Brazil. Data on volume and height abatement due to dewatering and consolidation were collected over time, until constancy was observed. With the laboratory analysis of the sludge allied to the data collected in the field, it was possible to perform a critical comparative study between the observations and the scientific literature; in this way, this paper presents the data obtained and compares them with the bibliography. The tests were carried out on three fronts: field tests, including the filling cycles of the systems with the sludge from CTA, taking measurements of filling time per cycle, maximum filling height per cycle, and heights against the abatement by dewatering of the systems over time; laboratory tests, including the characterization of the sludge and the removal of material samples from the systems to ascertain the solids content within the systems over time; and comparison of the data obtained in the field and laboratory tests with the scientific literature. Through the study, it was possible to perceive that the densification of the material inside a closed system, such as the geotextile tube, occurs faster than that observed in the drying bed system. This accelerated densification can be brought about by the pumping pressure of the sludge during filling and by the confinement of the residue by the permeable geotextile membrane (which allows water to pass through), accelerating densification and dewatering under its own weight after filling with sludge.

Keywords: consolidation, dewatering, geotextile drying bed, geotextile tube

Procedia PDF Downloads 127
797 Flow-Control Effectiveness of Convergent Surface Indentations on an Aerofoil at Low Reynolds Numbers

Authors: Neel K. Shah

Abstract:

Passive flow control on aerofoils has largely been achieved through the use of protrusions such as vane-type vortex generators. Consequently, innovative flow-control concepts should be explored in an effort to improve current component performance. Therefore, experimental research has been performed at The University of Manchester to evaluate the flow-control effectiveness of a vortex generator made in the form of a surface indentation. The surface indentation has a trapezoidal planform. A spanwise array of indentations has been applied in a convergent orientation around the maximum-thickness location of the upper surface of a NACA-0015 aerofoil. The aerofoil has been tested in a two-dimensional set-up in a low-speed wind tunnel at an angle of attack (AoA) of 3° and a chord-based Reynolds number (Re) of ~2.7 x 10⁵. The baseline model has been found to suffer from a laminar separation bubble at low AoA. The application of the indentations at 3° AoA has considerably shortened the separation bubble. The indentations achieve this by shedding up-flow pairs of streamwise vortices. Despite the considerable reduction in bubble length, the increase in leading-edge suction due to the shorter bubble is limited by the removal of surface curvature and blockage (increase in surface pressure) caused locally by the convergent indentations. Furthermore, the up-flow region of the vortices, which locally weakens the pressure recovery around the trailing edge of the aerofoil by thickening the boundary layer, also contributes to this limitation. Due to the conflicting effects of the indentations, the changes in the pressure-lift and pressure-drag coefficients, i.e., cl,p and cd,p, are small. Nevertheless, the indentations have improved cl,p and cd,p beyond the uncertainty range, i.e., by ~1.30% and ~0.30%, respectively, at 3° AoA. The wake measurements show that turbulence intensity and Reynolds stresses have considerably increased in the indented case, thus implying that the indentations increase the viscous drag on the model. In summary, the convergent indentations are able to reduce the size of the laminar separation bubble, but conversely, they are not highly effective in reducing cd,p at the tested Reynolds number.

Keywords: aerofoil flow control, laminar separation bubbles, low Reynolds-number flows, surface indentations

Procedia PDF Downloads 226
796 Predicting the Turbulence Intensity, Excess Energy Available and Potential Power Generated by Building Mounted Wind Turbines over Four Major UK City

Authors: Emejeamara Francis

Abstract:

The future of potential wind energy applications within suburban/urban areas currently faces various problems. These include insufficient assessment of the urban wind resource, questions over the effectiveness of commercial gust control solutions, and the unavailability of effective and affordable tools for scoping potential urban wind applications within built-up environments. In order to assess the potential of urban wind installations effectively, an estimation of the total energy that would be available to them if effective control systems were used, and an evaluation of the potential power generated by the wind system, are required. This paper presents a methodology for predicting the power generated by a wind system operating within an urban wind resource. The method was developed by using high temporal resolution wind measurements from eight potential sites within the urban and suburban environment as inputs to a vertical axis wind turbine multiple stream tube model. A relationship between the unsteady performance coefficient obtained from the stream tube model results and turbulence intensity was demonstrated; hence, an analytical methodology for estimating the unsteady power coefficient at a potential turbine site is proposed. This is combined with analytical models that were developed to predict the wind speed and the excess energy content (EEC) available in order to estimate the potential power generated by wind systems at different heights within a built environment. Estimates of turbulence intensity, wind speed, EEC and turbine performance based on the current methodology allow a more complete assessment of the available wind resource and potential urban wind projects. This methodology is applied to four major UK cities, namely Leeds, Manchester, London and Edinburgh, and the potential to map turbine performance at different heights within a typical urban city is demonstrated.
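A first step in this kind of assessment is computing the turbulence intensity from high temporal resolution wind records. The sketch below estimates turbulence intensity and the excess kinetic-energy flux available above the mean-wind level over rolling windows; the 10-minute window, air density, the simple excess-energy definition, and the synthetic wind-speed series are assumptions for illustration, not values or definitions from the study.

```python
import numpy as np

rho = 1.225             # air density, kg/m^3 (assumed)
fs = 1.0                # anemometer sampling frequency, Hz (assumed)
window = int(600 * fs)  # 10-minute averaging window

# Placeholder high-resolution wind-speed record (m/s); a real study would
# load the measured time series for the candidate rooftop site.
rng = np.random.default_rng(7)
u = 5.0 + 1.2 * rng.standard_normal(6 * window)

u = u[: len(u) // window * window].reshape(-1, window)
u_mean = u.mean(axis=1)
u_std = u.std(axis=1)

ti = u_std / u_mean                      # turbulence intensity per window

# Excess energy: extra kinetic-energy flux per unit area carried by gusts,
# relative to the flux implied by the mean wind speed alone (W/m^2).
flux_actual = 0.5 * rho * (u ** 3).mean(axis=1)
flux_mean = 0.5 * rho * u_mean ** 3
eec = flux_actual - flux_mean

for i, (t, e) in enumerate(zip(ti, eec)):
    print(f"window {i}: TI = {t:.2f}, excess energy = {e:.1f} W/m^2")
```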

Keywords: small-scale wind, turbine power, urban wind energy, turbulence intensity, excess energy content

Procedia PDF Downloads 277
795 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach

Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic

Abstract:

The epigenome is a compelling candidate for mediating long-term responses to environmental effects that modify disease risk. The main goal of this research is to develop a machine learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool in validating self-reports of nonsmoking and in adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptually. Each child's mother and father fell into one of three class labels in the accompanying questionnaires - never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing our model, the best-performing algorithm for classifying the father-smoker and mother-never-smoker group was selected based on Cohen's κ. Error in the model was identified and reduced through optimization. The final DNA methylation score was further tested and validated in an independent data set. The result is a linear combination of the methylation values of selected probes, via a logistic link function, that accurately classified each group using the probes that contributed the most towards classification. This yields a unique, robust DNA methylation score which combines information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy and which may be used to examine the paternal contribution to offspring health outcomes.
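In spirit, the score described above is a penalized logistic model over selected CpG probes, evaluated with Cohen's κ. The sketch below shows that pattern with scikit-learn on placeholder data; the number of probes, the L1 penalty, and the simulated methylation matrix are assumptions and not the study's actual feature selection or tuning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Placeholder cord-blood methylation betas: 400 children x 2000 CpG probes,
# label 1 = father smoked pre/peri-conceptually (mother never smoker), 0 = neither.
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(400, 2000))
y = rng.integers(0, 2, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# L1-penalized logistic regression doubles as probe selection: probes with
# non-zero coefficients form the linear combination behind the score.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_train, y_train)

score = clf.decision_function(X_test)        # the continuous methylation score
pred = clf.predict(X_test)
print("selected probes:", int(np.count_nonzero(clf.coef_)))
print("Cohen's kappa:", cohen_kappa_score(y_test, pred))
```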

Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning

Procedia PDF Downloads 185
794 Post Occupancy Evaluation of Thermal Comfort and User Satisfaction in a Green IT Commercial Building

Authors: Shraddha Jadhav

Abstract:

We are entering a new age in the built environment where we expect our buildings to deliver far more than just a place to work or live. It is widely believed that sustainable building design strategies improve occupants' comfort and satisfaction with respect to thermal comfort and indoor environmental quality. Yet this belief remains a hypothesis with little empirical support. IT buildings cater to more than 3000 users at a time, and nowadays people spend up to 90% of their time inside offices. These sustainable IT office buildings should provide occupants with maximum comfort for better work productivity. Such green-rated buildings fulfill all the criteria at the design stage, but do they really work as expected at the occupancy stage? The aim of this paper is to evaluate whether green IT buildings provide the comfort level expected at the design stage. Building occupants are a rich source of information for evaluating their comfort level in the building and for finding solutions to their discomfort. This can be achieved by carrying out a Post Occupancy Evaluation after the building has been occupied for more than a year or two. The technique consists of qualitative methods such as questionnaire surveys and observations and quantitative methods such as field measurements and photographs. A Post Occupancy Evaluation was carried out in a green (Platinum-rated) IT building in Pune. Thirty samples per floor were identified for the questionnaire survey. The core questions assess occupant satisfaction with thermal comfort in the work area, and the measures adopted for making it comfortable were identified. The mean radiant temperature at the same sample locations was measured to compare the quantitative and qualitative results. The survey was used to evaluate occupant thermal comfort in a green office building and to identify areas needing improvement. The survey was designed with reference to ASHRAE Standard 55-2010 and ISHRAE 10001:2017 IEQ and was further refined to suit the users of the building.
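
A simple way to pair the two data streams described above is to correlate the measured mean radiant temperature at each sample location with the corresponding thermal satisfaction vote; the sketch below does this with a rank correlation, using made-up survey votes and MRT values rather than the Pune data set.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-sample data: MRT in deg C and 7-point thermal satisfaction
# votes (-3 = very dissatisfied ... +3 = very satisfied). Not the study data.
mrt = np.array([24.1, 25.3, 26.8, 27.5, 28.2, 24.8, 26.1, 27.9, 25.6, 28.6])
votes = np.array([2, 2, 1, 0, -1, 2, 1, -1, 1, -2])

rho, p_value = spearmanr(mrt, votes)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A strongly negative rho would suggest satisfaction drops as MRT rises,
# flagging zones where the measured environment and the survey results diverge.
```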

Keywords: green office building, building occupant, thermal comfort, POE, user satisfaction, survey

Procedia PDF Downloads 73
793 Analytical Characterization of TiO2-Based Nanocoatings for the Protection and Preservation of Architectural Calcareous Stone Monuments

Authors: Sayed M. Ahmed, Sawsan S. Darwish, Mahmoud A. Adam, Nagib A. Elmarzugi, Mohammad A. Al-Dosari, Nadia A. Al-Mouallimi

Abstract:

Historical stone surfaces and architectural heritage, especially those located in open areas, may undergo unwanted changes due to exposure to many physical and chemical deterioration factors; air pollution, soluble salts, RH/temperature cycles, and biodeterioration are the main causes of decay of stone building materials. The development and application of self-cleaning treatments on historical and architectural stone surfaces could be a significant improvement in the conservation, protection, and maintenance of cultural heritage. Nanometric titanium dioxide has become a promising photocatalytic material owing to its ability to catalyze the complete degradation of many organic contaminants, and it represents an appealing way to create self-cleaning surfaces, thus limiting maintenance costs and promoting the degradation of polluting agents. The obtained nano-TiO2 coatings were applied on travertine (marble and limestone are often used in historical and monumental buildings). The efficacy of the treatments has been evaluated after coating and artificial thermal aging through capillary water absorption and ultraviolet-light exposure, in order to evaluate the photo-induced and hydrophobic effects of the coated surface, while the surface morphology before and after treatment was examined by scanning electron microscopy (SEM). The changes in molecular structure occurring in treated samples were studied spectroscopically by FTIR-ATR, and colorimetric measurements were performed to evaluate the optical appearance. Taken together, the results indicate that coating with TiO2 nanoparticles is an innovative method which enhanced the durability of the stone surfaces toward UV aging and improved their resistance to relative humidity and temperature; the self-cleaning photo-induced effects are well evident, and there was no alteration of the original features.
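
The abstract does not state which colour-difference metric was used; a common choice for this kind of colorimetric check is the CIE76 ΔE*ab between L*a*b* readings taken before and after treatment, sketched below with placeholder values rather than the study's measurements.

```python
import math

def delta_e_cie76(lab_before, lab_after):
    """CIE76 colour difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab_before, lab_after)))

# Placeholder readings for an untreated and a TiO2-coated travertine spot.
untreated = (78.4, 2.1, 9.8)
coated = (77.9, 2.3, 10.4)

dE = delta_e_cie76(untreated, coated)
# A Delta E*ab below roughly 3-5 units is usually taken as visually negligible
# in conservation practice, i.e. the coating does not alter the appearance.
print(f"Delta E*ab = {dE:.2f}")
```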

Keywords: architectural calcareous stone monuments, coating, photocatalysis TiO2, self-cleaning, thermal aging

Procedia PDF Downloads 254
792 Modified Norhaya Upper Limb Elevation Sling: Quick Approach Ensuring Timely Limb Elevation

Authors: Prem, Norhaya, Vwrene C., Mohammad Harris A., Amarjit, Fazir M.

Abstract:

Upper limb surgery is a common orthopedic procedure. After surgery, it is necessary to raise the patient's arm to reduce limb swelling and promote recovery. After an injury or surgery, swelling (edema) in the limbs is common. This swelling can be painful, cause stiffness, and affect movement and the ability to carry out daily activities. One of the easiest ways to manage swelling is to elevate the swollen limb. The goal is to elevate the swollen limb slightly above the level of the heart, which helps the extra fluid move back towards the heart for circulation to the rest of the body. A conventional arm sling or pillows are usually placed under the arm to raise it, but the arm cannot be fixed well this way and easily slides down, so the raising effect is not ideal. A conventional arm sling also requires experience to tie, which delays the application process. To reduce waiting time and cost, the modified Norhaya upper limb elevation sling was designed and made readily available. The sling is made from calico fabric, readily available in the ward. Measurements of patients' arm lengths are obtained, and the fabric is cut to the average arm length as well as one size above and one size below it. The cut calico fabric is then sewn together with thick sewing threads. Its application is easy, and even the most junior staff member or doctor is able to apply it to a patient. The time taken to set up the sling is also reduced. Feedback gathered from ground staff regarding the ease of setting up the sling was very positive, and patients also find the modified Norhaya sling comfortable. The device can freely adjust the elevation height of the affected limb and effectively fix the affected limb to reduce its swelling, thus promoting recovery. This device is worthy of clinical popularization and application. The modified Norhaya upper limb elevation sling is the quickest to set up, and the delay in elevating the patient's hand is significantly reduced. Moreover, it is reproducible, and there are also significant cost savings.

Keywords: elevate, effective, sling, timely

Procedia PDF Downloads 206
791 Damping Optimal Design of Sandwich Beams Partially Covered with Damping Patches

Authors: Guerich Mohamed, Assaf Samir

Abstract:

The application of viscoelastic materials in the form of constrained layers in mechanical structures is an efficient and cost-effective technique for solving noise and vibration problems. This technique requires a design tool to select the best location, type, and thickness of the damping treatment. This paper presents a finite element model for the vibration of beams partially or fully covered with a constrained viscoelastic damping material. The model is based on Bernoulli-Euler theory for the faces and Timoshenko beam theory for the core. It uses four variables: the through-thickness constant deflection, the axial displacements of the two faces, and the bending rotation of the beam. The sandwich beam finite element is compatible with the conventional C1 finite element for homogeneous beams. To validate the proposed model, several free vibration analyses of fully or partially covered beams, with different locations of the damping patches and different percentages of coverage, are studied. The results show that the proposed approach can be used as an effective tool to study the influence of the location and size of the treatment on the natural frequencies and the associated modal loss factors. A parametric study of the damping characteristics of partially covered beams has then been conducted. In these studies, the effects of the core shear modulus, the patch size, the thicknesses of the constraining layer and the core, and the locations of the patches are considered. In partial coverage, the spatial distribution of the additive damping provided by the viscoelastic material is as important as the thickness and material properties of the viscoelastic layer and the constraining layer. Indeed, to limit the added mass and attain maximum damping, the damping patches should be placed at optimum locations. These locations are often selected using the modal strain energy indicator. Following this approach, the damping patches are applied over regions of the base structure with the highest modal strain energy to target specific modes of vibration. In the present study, a more efficient indicator is proposed, which consists of placing the damping patches over regions of high energy dissipation through the viscoelastic layer of the fully covered sandwich beam. The presented approach is used in an optimization method to select the best locations for the damping patches as well as the layer thicknesses and material properties that will yield optimal damping with the minimum area of coverage.
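
For context, the classical modal strain energy (MSE) indicator mentioned above estimates the modal loss factor of a mode as the core material loss factor weighted by the fraction of the mode's strain energy stored in the viscoelastic core; the sketch below evaluates it from per-element strain energies, with entirely made-up numbers standing in for finite element output.

```python
import numpy as np

def mse_loss_factor(core_energy, total_energy, eta_core):
    """Modal strain energy estimate: eta_r = eta_core * U_core / U_total."""
    return eta_core * core_energy / total_energy

# Hypothetical per-element strain energies (J) for one mode of a covered beam.
u_core_elements = np.array([0.8, 2.4, 3.1, 2.2, 0.6])     # viscoelastic core
u_face_elements = np.array([5.0, 9.5, 11.0, 9.0, 4.5])     # elastic faces
eta_core = 0.5                                              # core material loss factor

u_core = u_core_elements.sum()
u_total = u_core + u_face_elements.sum()
print(f"Estimated modal loss factor: {mse_loss_factor(u_core, u_total, eta_core):.3f}")

# Ranking elements by their core strain energy (proportional to dissipation in
# the viscoelastic layer) mimics the indicator used to decide where patches belong.
ranking = np.argsort(u_core_elements)[::-1]
print("Patch candidate elements, best first:", ranking.tolist())
```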

Keywords: finite element model, damping treatment, viscoelastic materials, sandwich beam

Procedia PDF Downloads 147
790 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel

Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn

Abstract:

Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing reliable machinery databases to ensure optimum production. Research on the welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint. The metal active gas (MAG) welding parameters are the most important factors affecting the quality, productivity, and cost of welding in many industrial operations. The aim of this study is to investigate the optimization of process parameters for metal active gas welding of a 60x60x5 mm dead mild steel plate work-piece, using the Taguchi method to formulate the statistical experimental design on a semi-automatic welding machine. An experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. This study presents the influence of four welding parameters (control factors), namely welding voltage (V), welding current (A), wire speed (m/min), and gas (CO2) flow rate (L/min), each at three levels, on the variability of weld hardness. The objective function was chosen in relation to the MAG welding parameters, i.e., the welding hardness of the final products. Nine experimental runs based on a Taguchi L9 orthogonal array were performed. The orthogonal array, the signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) are employed to investigate the welding characteristics of the dead mild steel plate and to obtain the optimum level for every input parameter at a 95% confidence level. The optimal parameter setting was found to be a welding voltage of 22 V, a welding current of 125 A, a wire speed of 2.15 m/min, and a gas flow rate of 19 L/min, within the constraints of the production process. Finally, six confirmation welds were carried out; the agreement of the predicted values with the experimental values confirms the effectiveness of the analysis of welding hardness (quality) in the final products. It is found that welding current has a major influence on the quality of welded joints. The experimental result for the optimum setting gave a better weld hardness than the initial setting. This study is valuable for different materials and thickness variations of welding plate for Ethiopian industries.
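
Since hardness is a larger-the-better response in this design, the usual Taguchi S/N ratio is S/N = -10 log10(mean(1/y_i^2)); the sketch below computes it for each run of a hypothetical L9 layout and picks the best level of each factor by averaging S/N per level. The hardness values are illustrative, not the Bishoftu data.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for a larger-the-better response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Standard L9 layout (0-indexed levels) for the four factors: voltage,
# current, wire speed, gas flow. Each run has replicate hardness readings (HV).
l9_levels = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
                      [1,0,1,2],[1,1,2,0],[1,2,0,1],
                      [2,0,2,1],[2,1,0,2],[2,2,1,0]])
hardness = [[182,185],[190,188],[176,179],[201,198],[193,195],
            [184,186],[188,190],[205,202],[181,183]]

sn = np.array([sn_larger_is_better(y) for y in hardness])
for f, name in enumerate(["voltage", "current", "wire speed", "gas flow"]):
    means = [sn[l9_levels[:, f] == lvl].mean() for lvl in range(3)]
    print(f"{name}: best level = {int(np.argmax(means))}, mean S/N = {max(means):.2f} dB")
```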

Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method

Procedia PDF Downloads 481
789 Tree Resistance to Wind Storm: The Effects of Soil Saturation on Tree Anchorage of Young Pinus pinaster

Authors: P. Defossez, J. M. Bonnefond, D. Garrigou, P. Trichet, F. Danjon

Abstract:

Windstorm damage to European forests has ecological, social, and economic consequences of major importance. Most trees damaged during storms are uprooted. While a large amount of work has been done over the last decade on understanding the aerial tree response to turbulent wind flow, much less is known about the root-soil interface and the impact of soil moisture and root-soil system fatiguing on tree uprooting. Anchorage strength is expected to be reduced by water-logging and heavy rain during storms because soil strength decreases with soil water content. Our paper focuses on the maritime pine cultivated on sandy soil, a representative species of the Forêt des Landes, the largest cultivated forest in Europe. This study aims at providing knowledge on the effects of soil saturation on root anchorage. Pulling experiments on trees were performed to characterize the resistance to wind by measuring the critical bending moment (Mc). Pulling tests were performed on 12 thirteen-year-old maritime pines under two unsaturated soil conditions that represent the soil conditions expected in winter, when wind storms occur in France (w = 11.46 to 23.34% g g⁻¹). A magnetic field digitizing technique was used to characterize the three-dimensional architecture of the root systems. The soil mechanical properties were characterized in the laboratory as a function of soil water content and soil porosity on remolded samples, using direct shear tests at low confining pressure (< 15 kPa). Remarkably, Mc did not depend on w but mainly on the root system morphology. We suggest that the importance of soil water conditions for tree anchorage depends on the tree size. This study gives new insight into young tree anchorage: roots alone may sustain anchorage, whereas the adhesion between roots and the surrounding soil may be negligible in sandy soil.
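
Direct shear tests of the kind mentioned above are typically reduced to a Mohr-Coulomb envelope, τ = c + σ tan φ, fitted across the applied normal stresses; the sketch below performs that linear fit with invented shear data, not the study's measurements.

```python
import numpy as np

# Invented direct-shear results at low confining pressure: normal stress vs.
# peak shear stress for one water content on remolded sand (kPa).
sigma_n = np.array([5.0, 8.0, 11.0, 14.0])            # normal stress, kPa
tau_peak = np.array([3.9, 5.8, 7.6, 9.5])              # peak shear stress, kPa

# Mohr-Coulomb envelope tau = c + sigma_n * tan(phi), fitted by least squares.
slope, cohesion = np.polyfit(sigma_n, tau_peak, 1)
phi_deg = np.degrees(np.arctan(slope))

print(f"cohesion c = {cohesion:.2f} kPa, friction angle phi = {phi_deg:.1f} deg")
# Repeating the fit at several water contents shows how c and phi (and hence
# the soil's contribution to anchorage) change as the sand approaches saturation.
```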

Keywords: roots, sandy soil, shear strength, tree anchorage, unsaturated soil

Procedia PDF Downloads 293
788 Assessment of Rangeland Condition in a Dryland System Using UAV-Based Multispectral Imagery

Authors: Vistorina Amputu, Katja Tielboerger, Nichola Knox

Abstract:

Primary productivity in dry savannahs is constrained by moisture availability and is under increasing anthropogenic pressure. Thus, considering climate change and the unprecedented pace and scale of rangeland deterioration, methods for assessing the status of such rangelands should be easy to apply and should yield reliable and repeatable results that can be applied over large spatial scales. Global-scale monitoring of rangelands through satellite data and local-scale monitoring through labor-intensive field measurements are both limited in accurately assessing the spatiotemporal heterogeneity of vegetation dynamics needed to provide the crucial information that detects degradation in its early stages. Fortunately, newly emerging techniques such as unmanned aerial vehicles (UAVs), their associated miniaturized sensors, and improving digital photogrammetric software provide an opportunity to transcend these limitations. Yet they have not been extensively calibrated in natural systems to encompass their complexities if they are to be integrated for long-term monitoring. Limited research using drone technology has been conducted in arid savannahs, for example to assess the health status of this dynamic two-layer vegetation ecosystem. In our study, we fill this gap by testing the relationship between UAV-estimated cover of rangeland functional attributes and field data collected in discrete sample plots in a Namibian dryland savannah along a degradation gradient. The first results are based on a supervised classification performed on the ultra-high-resolution multispectral imagery to distinguish between rangeland functional attributes (bare, non-woody, and woody), with a relatively good match to the field observations. Integrating UAV-based observations to improve rangeland monitoring could greatly assist in climate-adapted rangeland management.
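
The abstract does not name the classifier used; a common way to run this kind of pixel-wise supervised classification on multispectral bands is a random forest trained on labelled pixels, as in the sketch below, where the band stack and training labels are synthetic placeholders rather than the Namibian imagery.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic 5-band image: shape (rows, cols, bands), reflectance in [0, 1].
rng = np.random.default_rng(1)
image = rng.uniform(0.0, 1.0, size=(100, 100, 5))

# Labelled training pixels from field plots: 0 = bare, 1 = non-woody, 2 = woody.
train_idx = rng.integers(0, 100, size=(300, 2))
train_pixels = image[train_idx[:, 0], train_idx[:, 1], :]
train_labels = rng.integers(0, 3, size=300)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train_pixels, train_labels)

# Classify every pixel and reshape back to the image grid to map the
# fractional cover of each rangeland functional attribute.
flat = image.reshape(-1, image.shape[2])
class_map = clf.predict(flat).reshape(image.shape[:2])
print("Fractional cover per class:",
      {c: float((class_map == c).mean()) for c in (0, 1, 2)})
```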

Keywords: arid savannah, degradation gradient, field observations, narrow-band sensor, supervised classification

Procedia PDF Downloads 134
787 Monitoring of Water Quality Using Wireless Sensor Network: Case Study of Benue State of Nigeria

Authors: Desmond Okorie, Emmanuel Prince

Abstract:

Availability of potable water has been a global challenge, especially in developing continents/nations such as Africa/Nigeria. The World Health Organization (WHO) has produced the Guidelines for Drinking-water Quality (GDWQ), which aim at ensuring water safety from source to consumer. Potable water parameter tests include physical (colour, odour, temperature, turbidity), chemical (pH, dissolved solids), and biological (algae, phytoplankton) parameters. This paper discusses the use of wireless sensor networks to monitor water quality using efficient and effective sensors that have the ability to sense, process, and transmit the sensed data. The integration of a wireless sensor network with a portable sensing device offers the feasibility of distributed sensing, on-site data measurements, and remote sensing abilities. The current water quality tests performed by government water quality institutions in Benue State, Nigeria are carried out in problematic locations that require taking manual water samples to the institution's laboratory for examination. To automate this process based on a wireless sensor network, a system was designed. The system consists of a sensor node containing one pH sensor, one temperature sensor, a microcontroller, and a ZigBee radio, and a base station composed of a ZigBee radio and a PC. Due to the advancement of wireless sensor network technology, unexpected contamination events in water environments can be observed continuously. A local area network (LAN), a wireless local area network (WLAN), and Internet web-based services are also commonly used as gateway units for data communication via a local base computer using the standard Global System for Mobile Communications (GSM). The improvements made in this development demonstrate a working water quality monitoring system and the prospect of a more robust and reliable system in the future.
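
On the base-station side, the PC mainly has to parse the packets relayed by the ZigBee radio and flag readings outside the guideline range; a minimal sketch of that logic is shown below, with a made-up packet format ("node_id,pH,temperature") and thresholds loosely based on the WHO guideline pH band of 6.5-8.5, neither of which is taken from the deployed system.

```python
# Minimal base-station sketch: parse "node_id,pH,temp_C" packets from the
# ZigBee link and flag out-of-range readings. The packet format and the
# thresholds are assumptions for illustration, not the system's protocol.
PH_RANGE = (6.5, 8.5)       # WHO guideline band for drinking water pH
TEMP_MAX_C = 30.0           # illustrative upper bound for palatability

def parse_packet(packet: str):
    node_id, ph, temp = packet.strip().split(",")
    return node_id, float(ph), float(temp)

def check_reading(node_id, ph, temp):
    alerts = []
    if not PH_RANGE[0] <= ph <= PH_RANGE[1]:
        alerts.append(f"pH {ph} outside {PH_RANGE}")
    if temp > TEMP_MAX_C:
        alerts.append(f"temperature {temp} C above {TEMP_MAX_C} C")
    return alerts

for raw in ["node03,7.1,26.4", "node07,9.2,31.5"]:   # stand-in for serial input
    nid, ph, temp = parse_packet(raw)
    for alert in check_reading(nid, ph, temp):
        print(f"[{nid}] ALERT: {alert}")
```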

Keywords: local area network, pH measurement, wireless sensor network, ZigBee

Procedia PDF Downloads 172
786 Prevalence of Malnutrition and Associated Factors among Children Aged 6-59 Months at Hidabu Abote District, North Shewa, Oromia Regional State

Authors: Kebede Mengistu, Kassahun Alemu, Bikes Destaw

Abstract:

Introduction: Malnutrition continues to be a major public health problem in developing countries. It is the most important risk factor for the burden of disease. It causes about 300,000 deaths per year and is responsible for more than half of all deaths in children. In Ethiopia, the child malnutrition rate is one of the most serious public health problems and among the highest in the world. High malnutrition rates in the country pose a significant obstacle to achieving better child health outcomes. Objective: To assess the prevalence of malnutrition and associated factors among children aged 6-59 months at Hidabu Abote district, North Shewa, Oromia. Methods: A community-based cross-sectional study was conducted on 820 children aged 6-59 months from September 8-23, 2012 at Hidabu Abote district. A multistage sampling method was used to select households. Children were selected from each kebele by simple random sampling. Anthropometric measurements and structured questionnaires were used. Data were processed using Epi Info software and exported to SPSS for analysis. Thereafter, sex, age in months, height, and weight, together with the household numbers, were transferred to ENA for SMART 2007 software to convert the nutritional data into Z-scores of the indices H/A, W/H, and W/A. Bivariate and multivariate logistic regressions were used to identify factors associated with malnutrition. Results: The analysis of this study revealed that 47.6%, 30.9%, and 16.7% of children were stunted, underweight, and wasted, respectively. The main factors associated with stunting were found to be child age, family monthly income, receipt of butter as a pre-lacteal feed, and family planning. Underweight was associated with the number of children in the household and receipt of butter as a pre-lacteal feed, while untreated household water was the only factor associated with wasting. Conclusion and recommendation: From the findings of this study, it is concluded that malnutrition is still an important problem among children aged 6-59 months. Therefore, special attention should be given to malnutrition interventions.
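
For reference, anthropometric indices such as weight-for-age are converted to Z-scores against a growth reference, most commonly with the LMS formulation z = ((y/M)^L - 1)/(L*S); the sketch below applies it with invented L, M, S values, since the actual reference tables live inside ENA for SMART and the WHO growth standards.

```python
from math import log

def lms_zscore(y, L, M, S):
    """LMS Z-score of a measurement y against reference parameters L, M, S."""
    if L == 0:
        return log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

# Invented reference values for weight-for-age of a 24-month-old boy
# (placeholders, not the WHO tables used by ENA for SMART).
L_ref, M_ref, S_ref = -0.2, 12.2, 0.11

weight_kg = 9.4
waz = lms_zscore(weight_kg, L_ref, M_ref, S_ref)
print(f"Weight-for-age Z-score: {waz:.2f}")
# In the study's classification, a Z-score below -2 for W/A, H/A or W/H
# flags the child as underweight, stunted or wasted, respectively.
```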

Keywords: children, Hidabu Abote district, malnutrition, public health

Procedia PDF Downloads 427