Search results for: motion sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2644

214 Tunnel Convergence Monitoring by Distributed Fiber Optics Embedded into Concrete

Authors: R. Farhoud, G. Hermand, S. Delepine-lesoille

Abstract:

Cigeo, the future underground disposal facility for French radioactive waste, is designed to store intermediate-level and high-level long-lived French radioactive waste. Intermediate-level waste cells are tunnel-like, about 400 m long with a 65 m² cross-section, equipped with several concrete layers, which can be grouted in situ or assembled from pre-grouted tunnel elements. The operating space inside the cells, which allows waste containers to be emplaced or removed, must be monitored for several decades without any maintenance. To provide the required information, a design was developed and tested in situ in Andra's underground research laboratory (URL), 500 m below the surface. Based on distributed optical fiber sensors (OFS), with backscattered Brillouin interrogation for strain and Raman for temperature, the design consists of two loops of OFS, at two different radii, around the monitored section (orthoradial strains) and along the tunnel axis (longitudinal strains). Strains measured by the distributed OFS cables were compared to classical vibrating wire extensometers (VWE) and platinum probes (Pt). The OFS cables comprised two cables sensitive to both strain and temperature and one sensitive to temperature only. All cables were connected, between the sensitive part and the instruments, through hybrid cables to reduce cost. The connections were made according to two techniques: splicing fibers in situ after installation, or preparing each fiber with a connector and only plugging them together in situ. Another challenge was installing OFS cables without interruption along a tunnel made of several segments. The first success is the survival rate of the sensors after installation and the quality of the measurements: 100% of the OFS cables intended for long-term monitoring survived installation. A few new configurations were tested with relative success. The measurements obtained were very promising: after 3 years of data, no difference was observed between OFS cables and connection methods, and the strains fit well with the VWE and Pt placed at the same locations. Data from the Brillouin instrument, which is sensitive to both strain and temperature, were compensated with data provided by the Raman instrument, which is sensitive to temperature only and interrogates a separate fiber. These results provide confidence in the next steps of the qualification process, which consists of testing several data treatment approaches for direct analysis.
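
A minimal sketch of the compensation step described above, assuming typical Brillouin coefficients for standard single-mode fiber rather than the actual instrument calibration: the thermal part of the Brillouin shift, obtained from the Raman-only fiber, is subtracted before converting to strain.

```python
import numpy as np

# Assumed Brillouin coefficients for standard single-mode fiber (illustrative)
C_EPS = 0.05e-3  # strain coefficient [GHz per microstrain]
C_T = 1.0e-3     # temperature coefficient [GHz per K]

def compensated_strain(delta_nu_brillouin_ghz, delta_t_raman_k):
    """Subtract the thermal contribution (from the Raman-only fiber)
    from the measured Brillouin shift and convert to microstrain."""
    thermal_shift = C_T * np.asarray(delta_t_raman_k)
    return (np.asarray(delta_nu_brillouin_ghz) - thermal_shift) / C_EPS

# Example: a 0.010 GHz total shift measured together with a 5 K warming
print(compensated_strain(0.010, 5.0))  # -> ~100 microstrain
```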

Keywords: monitoring, fiber optic, sensor, data treatment

Procedia PDF Downloads 128
213 The Clinical Effectiveness of Off-The-Shelf Foot Orthoses on the Dynamics of Gait in Patients with Early Rheumatoid Arthritis

Authors: Vicki Cameron

Abstract:

Background: Rheumatoid Arthritis (RA) typically affects the feet, and about 20% of patients present initially with foot and ankle symptoms. The use of custom moulded foot orthoses (FO) in the management of foot and ankle problems in RA is well documented in the literature. Off-the-shelf FO are thought to provide an effective alternative to custom moulded FO in patients with RA; however, they lack an evidence base. Objectives: To determine the effects of off-the-shelf FO on: 1. quality of life (QOL), 2. walking speed, 3. peak plantar pressure in the forefoot (PPPft). Methods: Thirty-five patients (six male and 29 female) participated in the study from 11/2006 to 07/2008. The age of the patients ranged from 26 to 80 years (mean 52.4 years; standard deviation [SD] 13.3 years). A repeated measures design was used, with patients presenting at baseline, three months and six months. Patients were tested walking barefoot, shod, and shod with FO. The type of orthoses used was the Slimflex Plastic ® (Algeos). The Leeds Foot Impact Scale (LFIS) was used to investigate QOL. The Vicon 612 motion analysis system was used to determine the effect of FO on walking speed. The F-scan walkway and in-shoe systems provided information on the effect on PPPft. Ethical approval was obtained in 07/2006. Data were analysed using SPSS version 15.0. Results/Discussion: The LFIS data were analysed with a repeated measures ANOVA. There was a significant improvement in the LFIS score with the use of the FO over the six months (p<0.01). A significant increase in walking speed with the orthoses was observed (p<0.01). Peak plantar pressure in the forefoot was reduced with the FO, as shown by a non-parametric Friedman's test (chi-square = 55.314, df = 2, p<0.05). Conclusion: The results show that off-the-shelf FO are effective in managing foot problems in patients with RA. Patients reported an improved QOL with the orthoses, and further objective measurements were quantified to provide a rationale for this change. Patients demonstrated an increased walking speed, which has been shown to be associated with reduced pain. The FO decreased PPPft; the forefoot has been reported as a site of pain and ulceration in patients with RA. Salient Clinical Points: Off-the-shelf FO offer an effective alternative to custom moulded FO and can be dispensed at the chair side. This is crucial in the management of foot problems associated with RA, as early intervention is advocated due to the chronic and progressive nature of the disease.
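
The repeated-measures comparison named above can be reproduced in outline with SciPy's Friedman test; the pressure values below are invented for demonstration, since the study's raw data are not given.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_patients = 35

# Fabricated peak plantar pressures [kPa] for the three walking conditions
barefoot = rng.normal(420, 40, n_patients)
shod = barefoot - rng.normal(60, 15, n_patients)
shod_fo = shod - rng.normal(50, 15, n_patients)     # shod with orthoses

stat, p = friedmanchisquare(barefoot, shod, shod_fo)
print(f"chi-square = {stat:.3f}, df = 2, p = {p:.4f}")
```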

Keywords: podiatry, rheumatoid arthritis, foot orthoses, gait analysis

Procedia PDF Downloads 257
212 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.
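
A minimal sketch of the input-residual idea, using an assumed first-order actuator-to-speed model rather than the authors' rover model: the commanded input is compared with the input recovered by inverting the model along the simulated output.

```python
import numpy as np

dt, tau, k = 0.01, 0.5, 1.0          # step, time constant, gain (assumed)
t = np.arange(0, 10, dt)
u_cmd = np.ones_like(t)              # commanded input
u_true = np.where(t < 5, 1.0, 0.6)   # actuator fault: 40% loss at t = 5 s

# Forward simulation of v' = (k*u - v)/tau with the faulty input
v = np.zeros_like(t)
for i in range(1, len(t)):
    v[i] = v[i - 1] + dt * (k * u_true[i - 1] - v[i - 1]) / tau

# Inverse simulation: recover the input that explains the measured output
u_inv = (tau * np.gradient(v, dt) + v) / k

residual = u_cmd - u_inv             # steps at the fault onset
print(residual[int(4 / dt)], residual[int(6 / dt)])  # ~0 before, ~0.4 after
```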

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 308
211 The Effects of Inferior Tilt Fixation on a Glenoid Component in Reverse Shoulder Arthroplasty

Authors: Soo Min Kim, Soo-Won Chae, Soung-Yon Kim, Haea Lee, Ju Yong Kang, Juneyong Lee, Seung-Ho Han

Abstract:

Reverse total shoulder arthroplasty (RTSA) has become an effective treatment option for cuff tear arthropathy and massive, irreparable rotator cuff tears, and the indications for its use are expanding. Numerous methods for optimal fixation of the glenoid component, such as inferior overhang and inferior tilt, have been suggested to maximize initial fixation and prevent glenoid component loosening. Inferior tilt fixation of the glenoid component is expected to decrease scapular notching and to improve the stability of glenoid component fixation in reverse total shoulder arthroplasty, because it provides the most uniform compressive forces and imparts the least amount of tensile force and micromotion, reducing the likelihood of mechanical failure. Another study reported that glenoid component inferior tilt improved impingement-free range of motion as well as minimized scapular notching. However, controversy still exists regarding its importance in the literature. In this study, the influence of inferior tilt fixation on the primary stability of a glenoid component has been investigated. Finite element models were constructed from cadaveric scapulae, and glenoid components were implanted with neutral and 10° inferior tilts. Most previous biomechanical studies regarding the effect of glenoid component inferior tilt used a solid rigid polyurethane foam or sawbones block, not cadaveric scapulae, to evaluate the stability of the RTSA. Relative micromotions at the bone-glenoid component interface and the distribution of bone stresses under the glenoid component and around the screws were analyzed and compared between the neutral and 10° inferior tilt groups. The contact area between bone and screws and the cut surface area of the cancellous bone exposed after reaming of the glenoid were also investigated, because cancellous and cortical bone thickness vary depending on the resection level of the inferior glenoid bone. Greater relative micromotion of the bone-glenoid component interface occurred in the 10° inferior tilt group than in the neutral tilt group, especially at the inferior area of the bone-glenoid component interface. Bone stresses under the glenoid component and around the screws were also higher in the 10° inferior tilt group than in the neutral tilt group, especially at the inferior third of the glenoid bone surface under the glenoid component and the inferior scapula. Thus, inferior tilt fixation of the glenoid component may adversely affect the primary stability and longevity of the reverse total shoulder arthroplasty.

Keywords: finite element analysis, glenoid component, inferior tilt, reverse total shoulder arthroplasty

Procedia PDF Downloads 286
210 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations). The same process can be seen for industrial plants as well. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking into account NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used, an Arduino board to which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time covering both weekdays and weekend days; in this way, it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to be applied in the area of analysis, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, noise and pollution mapping, and analysis.
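
A sketch of the proposed comparison, assuming min-max rescaling to the 0-100% scale mentioned above; the column names and sample values are placeholders.

```python
import pandas as pd

df = pd.DataFrame({
    "noise_db": [55.2, 61.8, 70.3, 66.1, 58.4],   # OpeNoise samples (fake)
    "no2_ugm3": [18.0, 25.5, 41.2, 33.0, 21.7],   # AirMonitor samples (fake)
})

def to_percent(s: pd.Series) -> pd.Series:
    """Min-max rescale a series to 0-100 %."""
    return 100 * (s - s.min()) / (s.max() - s.min())

df["noise_pct"] = to_percent(df["noise_db"])
df["no2_pct"] = to_percent(df["no2_ugm3"])
print(df[["noise_pct", "no2_pct"]].corr().iloc[0, 1])  # Pearson r
```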

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 212
209 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of wave pressure created by increasing wind speed, obtained with the instrument 'Quattuor 45' in the investigated area. The study begins with theoretical considerations and numerous up-to-date investigations related to waves approaching the coast. The detailed schematic view of the instrument is enriched with ground-plan and side-view pictures. Horizontal stability of the instrument is achieved by mooring, which relies on two concrete blocks. Vertical wave peak monitoring is ensured by one float above the instrument. The synthesis of horizontal stability and vertical wave peak monitoring makes it possible to create a representative database for wave pressure measurement. The instrument 'Quattuor 45' is named according to the way the database is acquired. Namely, the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and an anemometer wind speed sensor. The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card every second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, and after that, detailed processing is carried out in MS Excel. The result of the wave pressure measurement is expressed in the unit kN/m². This paper also suggests a graphical presentation of the results by a multi-line graph: the wave pressure is presented on the left vertical axis, the wind speed is shown on the right vertical axis, and the time of measurement is displayed on the horizontal axis. The paper proposes an algorithm for wind speed measurements, showing the results for two characteristic winds in the Adriatic Sea, called 'Bura' and 'Jugo'. The first of them is the northern wind that reaches high speeds, causing low and extremely steep waves, where the pressure of the wave is relatively weak. On the other hand, the southern wind 'Jugo' has a lower speed than the northern wind, but due to its constant duration and sustained speed, it causes extremely long and high waves that produce extremely high wave pressure.
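
A back-of-the-envelope reduction from the four load-cell readings to a wave pressure in kN/m², the unit proposed above; the sensing-plate area and force readings are assumed values, not the instrument's specification.

```python
PLATE_AREA_M2 = 0.45 * 0.45          # assumed frontal area of the instrument

def wave_pressure_kn_m2(cell_forces_n):
    """Sum the four load-cell forces [N] and divide by the plate area."""
    total_force_kn = sum(cell_forces_n) / 1000.0
    return total_force_kn / PLATE_AREA_M2

print(wave_pressure_kn_m2([310.0, 295.5, 280.2, 301.3]))  # ~5.9 kN/m2
```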

Keywords: instrument, measuring unit, wave pressure metering, wind speed measurement

Procedia PDF Downloads 197
208 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes

Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini

Abstract:

Background: There is growing evidence that a Cerebral Vascular Accident (CVA) can be a consequence of COVID-19 infection. Understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of Virtual Reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed right globus pallidus, thalamus, and internal capsule ischemic stroke. Conventional rehabilitation was started two weeks later, with VR included. The game-based VR technology used was developed for stroke patients and is based on upper extremity exercises and functions for stroke. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency. Mild lower lip dynamic asymmetry was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed based on upper extremity physiotherapy exercises for post-stroke patients, to increase the active, voluntary movement of the upper extremity joints and improve function. The conventional program was initiated with active exercises: shoulder sanding for joint ROM, shoulder walking, the shoulder wheel, and combination movements of the shoulder, elbow, and wrist joints, with alternating flexion-extension and pronation-supination movements, as well as pegboard and Purdue Pegboard exercises. Fine-movement training included smart gloves, biofeedback, a finger ladder, and writing. The difficulty of the game increased at each stage of the practice as the patient's performance progressed. Outcome: After 6 weeks of treatment, gait and speech were normal, and upper extremity strength had improved to near-normal status. No adverse effects were noted. Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases provides new approaches to improve therapeutic outcomes and prognosis, as well as an increased satisfaction rate among patients.

Keywords: covid-19, stroke, virtual reality, rehabilitation

Procedia PDF Downloads 141
207 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has been an effective tool for the seismic vulnerability assessment of civil structures in the last several years. The design of expansion joints according to various bridge design codes is almost inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets, with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
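
A hedged sketch of how such component fragility curves are commonly built from nonlinear time-history results: empirical exceedance rates are fit with a lognormal CDF of PGA. The demand-to-capacity data are synthetic, and the lognormal form is a standard modeling choice rather than something stated in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

pga = np.arange(0.1, 1.01, 0.05)              # g, as in the abstract
rng = np.random.default_rng(1)

# Synthetic demand-to-capacity ratios: 50 analyses per PGA level
dcr = 2.0 * pga[:, None] * rng.lognormal(0.0, 0.35, (pga.size, 50))
p_exceed = (dcr > 1.0).mean(axis=1)           # empirical exceedance rate

# Fit a lognormal fragility curve P(DS | PGA) = Phi(ln(pga/theta)/beta)
def lognormal_cdf(x, theta, beta):
    return norm.cdf(np.log(x / theta) / beta)

(theta, beta), _ = curve_fit(lognormal_cdf, pga, p_exceed, p0=[0.5, 0.4])
print(f"median capacity ~ {theta:.2f} g, dispersion ~ {beta:.2f}")
```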

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 435
206 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy

Authors: May Fadheel Estephan, Richard Perks

Abstract:

Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres at suspension concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters. The spectra were collected using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres. The program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. The results showed that ELSS could differentiate between different sizes of particles, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample. This information can be employed to identify cancer cells and determine the disease's stage. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
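
A minimal sketch of the spectral-matching step, with placeholder spectra standing in for a reference library that would in practice come from Mie theory or calibration runs: the measured ELSS spectrum is assigned to the sphere size whose reference spectrum fits best in the least-squares sense.

```python
import numpy as np

wavelengths = np.linspace(450, 900, 226)          # nm, assumed range
reference = {                                      # placeholder library
    "2.000 um": np.sin(wavelengths / 40.0) + 2,
    "0.800 um": np.sin(wavelengths / 90.0) + 2,
    "0.413 um": np.sin(wavelengths / 150.0) + 2,
}
measured = reference["0.800 um"] + np.random.default_rng(2).normal(0, 0.05, 226)

# Pick the size whose reference spectrum minimises the squared error
best = min(reference, key=lambda k: np.sum((measured - reference[k]) ** 2))
print(best)  # -> 0.800 um
```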

Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics

Procedia PDF Downloads 80
205 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the surface of Mars has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images for generating a more accurate and trustworthy surface of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed in co-registering the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. Using the 3D-to-2D transformation parameters provided two to three meters accuracy. The best results were obtained using the DLT transformation model, although increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large-scale mapping standards. However, a well-distributed set of GCPs is key to achieving such accuracy. The model is simple to apply and does not need substantial computation.
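
The DLT step can be sketched as follows: the 11 DLT parameters are estimated from GCPs by least squares and then used to project check points, whose RMSE measures the fit. This is the standard DLT formulation; the data arrays are placeholders.

```python
import numpy as np

def fit_dlt(xyz, uv):
    """Solve the 11-parameter 3D-to-2D DLT by least squares.
    xyz: GCP object coordinates, uv: their image coordinates."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.extend([u, v])
    L, *_ = np.linalg.lstsq(np.asarray(rows, float),
                            np.asarray(rhs, float), rcond=None)
    return L

def apply_dlt(L, xyz):
    X, Y, Z = np.asarray(xyz, float).T
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.column_stack([u, v])

# At least six GCPs are needed; check-point RMSE then assesses the model:
# rmse = np.sqrt(np.mean((apply_dlt(L, chk_xyz) - chk_uv) ** 2))
```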

Keywords: mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 57
204 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that only a legitimate individual can be identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instants of time. These variations may occur due to muscle flexure, changes of mental or emotional state, and changes of sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, which is based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to produce the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods. Supervised recognition methods are then designed, using minimum-distance-to-mean and Bayesian classifiers, to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (NMIG) PTB database showed that the proposed new method is promising compared to a conventional interval- and amplitude-feature-based method.
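
The described pattern extraction translates directly into a short routine; the window size below is an assumption, as the abstract does not state it.

```python
import numpy as np

def local_roughness_histogram(ecg, half_window=4):
    """1-D local binary patterns over an ECG signal: compare neighbours in a
    moving window with the centre sample, weight the bits, histogram the
    codes. Returns a normalised descriptor for one subject."""
    n_bits = 2 * half_window
    codes = []
    for i in range(half_window, len(ecg) - half_window):
        centre = ecg[i]
        neighbours = np.concatenate(
            [ecg[i - half_window:i], ecg[i + 1:i + 1 + half_window]])
        bits = (neighbours >= centre).astype(int)
        codes.append(int(np.dot(bits, 2 ** np.arange(n_bits))))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()

# Subjects can then be classified by, e.g., minimum distance between
# their histograms and stored per-subject mean histograms.
```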

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 404
203 Wind Energy Harvester Based on Triboelectricity: Large-Scale Energy Nanogenerator

Authors: Aravind Ravichandran, Marc Ramuz, Sylvain Blayac

Abstract:

With the rapid development of wearable electronics and sensor networks, batteries cannot meet the sustainable energy requirement due to their limited lifetime, size, and degradation. Ambient energies such as wind have been considered an attractive energy source due to their abundance, ubiquity, and feasibility in nature. With miniaturization leading to high power and robustness, triboelectric nanogenerators (TENG) have been conceived as a promising technology for harvesting mechanical energy to power small electronics. Despite these attractive properties, TENG integration in large-scale applications is still unexplored. In this work, a state-of-the-art TENG design based on a wind venturi system is demonstrated for use in any complex environment. When wind enters the air gap of the homemade TENG venturi system, a thin flexible polymer repeatedly contacts with and separates from electrodes. This device structure makes the TENG suitable for large-scale harvesting without massive volume. Multiple stacking not only amplifies the output power but also enables multi-directional wind utilization. The system converts ambient mechanical energy to electricity with a 400 V peak voltage, charging a 1000 mF supercapacitor very rapidly. Its future implementation in an array of applications aids environmentally friendly clean energy production at large scale, and the proposed design is supported by exhaustive material testing. The relation between the interfacial micro- and nanostructures and the electrical performance enhancement is comparatively studied. Nanostructures are more beneficial for the effective contact area, but they are not suitable for the anti-adhesion property due to the smaller restoring force. Considering these issues, nano-patterning is proposed for further enhancement of the effective contact area. Considering the merits of simple fabrication, outstanding performance, robust characteristics, and low-cost technology, we believe that TENG can open up great opportunities not only for powering small electronics but also for contributing to large-scale energy harvesting, being complementary to solar energy in remote areas.

Keywords: triboelectric nanogenerator, wind energy, vortex design, large scale energy

Procedia PDF Downloads 213
202 Power Recovery from Waste Air of Mine Ventilation Fans Using Wind Turbines

Authors: Soumyadip Banerjee, Tanmoy Maity

Abstract:

The recovery of power from waste air generated by mine ventilation fans presents a promising avenue for enhancing energy efficiency in mining operations. This abstract explores the feasibility and benefits of utilizing turbine generators to capture the kinetic energy present in waste air and convert it into electrical power. By integrating turbine generator systems into mine ventilation infrastructures, the potential to harness and utilize the previously untapped energy within the waste air stream is realized. This study examines the principles underlying turbine generator technology and its application within the context of mine ventilation systems. The process involves directing waste air from ventilation fans through specially designed turbines, where the kinetic energy of the moving air is converted into rotational motion. This mechanical energy is then transferred to connected generators, which convert it into electrical power. The recovered electricity can be employed for various on-site applications, including powering mining equipment, lighting, and control systems. The benefits of power recovery from waste air using turbine generators are manifold. Improved energy efficiency within the mining environment results in reduced dependence on external power sources and associated cost savings. Additionally, this approach contributes to environmental sustainability by utilizing a previously wasted resource for power generation. Resource conservation is further enhanced, aligning with modern principles of sustainable mining practices. However, successful implementation requires careful consideration of factors such as waste air characteristics, turbine design, generator efficiency, and integration into existing mine infrastructure. Maintenance and monitoring protocols are necessary to ensure consistent performance and longevity of the turbine generator systems. While there is an initial investment associated with equipment procurement, installation, and integration, the long-term benefits of reduced energy costs and environmental impact make this approach economically viable. In conclusion, the recovery of power from waste air from mine ventilation fans using turbine generators offers a tangible solution to enhance energy efficiency and sustainability within mining operations. By capturing and converting the kinetic energy of waste air into usable electrical power, mines can optimize resource utilization, reduce operational costs, and contribute to a greener future for the mining industry.
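
The recoverable power can be roughed out with the standard kinetic power relation P = ½ρAv³ times a turbine power coefficient; the duct area, exhaust speed, and coefficient below are assumed figures, since the abstract gives no numbers.

```python
RHO = 1.2          # air density [kg/m^3]
AREA = 7.0         # exhaust duct cross-section [m^2], assumed
V = 15.0           # exhaust air speed [m/s], assumed
CP = 0.35          # turbine power coefficient, assumed

power_kw = 0.5 * RHO * AREA * V**3 * CP / 1000.0
print(f"{power_kw:.1f} kW recoverable")   # ~5.0 kW per fan outlet
```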

Keywords: waste to energy, wind power generation, exhaust air, power recovery

Procedia PDF Downloads 33
201 Care as a Situated Universal: Defining Care as a Practical Phenomenology Study

Authors: Amanda Aliende da Matta

Abstract:

This communication presents an aspect of phenomenon selection in an applied hermeneutic phenomenology study on care and vulnerability: the need to consider it as a situated universal. For that, we will first present the study and its methodology. Secondly, we will expose the need to understand phenomena as situation-defined, incorporating feminist thought. In an informatics class for 14-year-olds, we explained the exercise: students have to make a 5-slide presentation about a topic of their choice. A does his on streetwear, B on Cristiano Ronaldo, C on Marvel, but J did his on Down Syndrome. Introducing it to the class, J explains the physical and cognitive differences caused by trisomy; when asked to explain it further, he says: "they are angels, teacher," and shows us a poster on his cellphone that says: if you laugh at a different child he will laugh with you because his innocence outweighs your ignorance. The anecdote shows, better than any theoretical explanation, something that some vulnerable people have; something beautiful and special but difficult to define. Let's call this something caring. The research has the main objective of accounting for the experience of caregiving in vulnerability, and it will be carried out with Applied Hermeneutic Phenomenology (AHP). The method's objective is to investigate lived human experience in its pre-reflexive dimension in order to know its meaning structures. Contrary to other research methods, AHP does not produce theory about a specific context but seeks the meaning of the lived experience as characteristically human experience. However, it is necessary that we understand care as defined in a concrete situation. We cannot start the research with an a priori definitive concept of care, or we would fall into the mistake of closing ourselves off to only what we already know, as explained by Levinas. We therefore incorporate the notion of situated universals. Loyal to phenomenology, the definition of the phenomenon should start with an investigation of the word's etymology: the word cura, in its etymological root, means care. And care comes from the Latin word cogitātus/cōgĭto, which means "to pursue something in mind" and "to consider thoroughly." The verb cōgĭto, meanwhile, is composed of co- (altogether) and agitare (to deal with or think committedly about something, to concern oneself with) / ăgĭto (to set in motion, to move). Care, therefore, has at its origin a meditation on something, a concern about something, a verb that carries a sense of action and movement. To care is to act out of concern for something or someone. This etymology, though, is not the final definition of the phenomenon, but only its skeleton. It needs to be embodied in the concrete situation to become a possible lived experience. And that means that the lived experience descriptions (LEDs) should be selected by taking into consideration whether and how care was engendered in that concrete experience. Defining the phenomenon has to take situated knowledge into consideration.

Keywords: applied hermeneutic phenomenology, care ethics, hermeneutics, phenomenology, situated universalism

Procedia PDF Downloads 88
200 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)

Authors: Ali Pourkazemi

Abstract:

The transient radar method (TRM) is a non-destructive method that was introduced by the authors a few years ago. TRM can be classified as a wave-based non-destructive testing (NDT) method that can be used over a wide frequency range; nevertheless, any given application requires only a narrow band, somewhere between a few GHz and a few THz, depending on the application. As a time-of-flight and real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately, but also blindly, meaning that it requires no prior knowledge of the sample under test. For multi-layer structures, TRM is not only able to detect changes related to any parameter within the multi-layer structure but can also measure the electromagnetic properties and thickness of each layer individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties as well as the thickness of individual building insulation materials, as single-layer structures, are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density is investigated. The samples studied are 30 cm x 50 cm, with thicknesses varying from a few millimeters to 6 centimeters. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since it is a narrow-band, free-space, real-time sensing system with high-speed computation for analysis, it has a wide range of potential applications, e.g., in the construction industry, rubber industry, piping industry, wind energy industry, automotive industry, biotechnology, food industry, pharmaceuticals, etc. The detection of metallic or plastic pipes, wires, etc., through or behind walls is a specific application for the construction industry.
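
Behind any time-of-flight thickness measurement sits the relation d = c·Δt / (2·√εr); the sketch below illustrates it with assumed numbers. The blind TRM algorithm itself is more involved and is not reproduced here.

```python
C = 299_792_458.0            # speed of light [m/s]

def layer_thickness_mm(delay_s: float, eps_r: float) -> float:
    """Thickness implied by an internal echo delayed by delay_s inside a
    layer of relative permittivity eps_r (two-way travel)."""
    return C * delay_s / (2.0 * eps_r ** 0.5) * 1000.0

# A 0.5 ns internal echo in a foam of eps_r ~ 1.1 (assumed values):
print(f"{layer_thickness_mm(0.5e-9, 1.1):.1f} mm")  # ~71.5 mm
```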

Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies

Procedia PDF Downloads 69
199 Virtual Metrology for Copper Clad Laminate Manufacturing

Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho

Abstract:

In semiconductor manufacturing, virtual metrology (VM) refers to methods to predict properties of a wafer based on machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting the quality of the process, which allows improving the process quality in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of a printed circuit board (PCB), which is used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important process among the three, puts resin on glass cloth, heats it in a drying oven, and produces prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy costs in terms of time and money, which makes the process a good candidate for VM application. We developed prediction models of the three quality factors T/W, M/V, and G/T, respectively, from process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with sufficiently high accuracy. They also provided us with information on "important" predictor variables, some of which the process engineers had already been aware of and the rest of which they had not. The engineers were quite excited to find the new insights that the models revealed and set out to analyze them further to derive process control implications. T/W, however, could not be predicted with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; thus, an effort has to be made to find other factors, not currently monitored, in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models made it possible to reduce the cost associated with actual metrology, as well as revealing insights on the factors affecting the important quality factors and on the level of our less-than-perfect understanding of the treating process.
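
A hedged sketch of the kind of prediction model described, combining a variable selection step with a learning algorithm; the variable names, the synthetic data, and the particular algorithms are illustrative choices, as the manufacturer's data and the authors' best models are not public.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the treating-process data (the real data are private)
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "oven_temp": rng.normal(170, 5, 300),
    "line_speed": rng.normal(12, 1, 300),
    "resin_viscosity": rng.normal(80, 8, 300),
    "ambient_rh": rng.normal(45, 10, 300),
})
# Pretend M/V depends on two of the factors plus noise
y = 0.6 * X["oven_temp"] - 2.0 * X["line_speed"] + rng.normal(0, 2, 300)

model = make_pipeline(
    SelectKBest(f_regression, k=3),                    # variable selection
    RandomForestRegressor(n_estimators=200, random_state=0),
)
print(cross_val_score(model, X, y, scoring="r2", cv=5).mean())
```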

Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology

Procedia PDF Downloads 350
198 Hydrogen Sulfide Releasing Ibuprofen Derivative Can Protect Heart After Ischemia-Reperfusion

Authors: Virag Vass, Ilona Bereczki, Erzsebet Szabo, Nora Debreczeni, Aniko Borbas, Pal Herczegh, Arpad Tosaki

Abstract:

Hydrogen sulfide (H₂S) is a toxic gas, but it is produced by certain tissues in small quantities. According to earlier studies, ibuprofen and H₂S have a protective effect against damage to heart tissue caused by ischemia-reperfusion. Recently, we have been investigating the effect of a new water-soluble H₂S-releasing ibuprofen molecule, administered after artificially generated ischemia-reperfusion, on isolated rat hearts. The H₂S-releasing property of the new ibuprofen derivative was investigated in vitro, in medium derived from heart endothelial cell isolation, at two concentrations. The ex vivo examinations were carried out on rat hearts. Rats were anesthetized with an intraperitoneal injection of ketamine, xylazine, and heparin. After thoracotomy, hearts were excised and placed into ice-cold perfusion buffer. Perfusion of hearts was conducted in Langendorff mode via the cannulated aorta. In our experiments, we studied the dose-effect of the H₂S-releasing molecule in Langendorff-perfused hearts by applying gradually increasing concentrations of the compound (0-20 µM). The H₂S-releasing ibuprofen derivative was applied before the ischemia for 10 minutes. The H₂S concentration was measured from the coronary effluent solution with an H₂S-detecting electrochemical sensor. The 10 µM concentration was chosen for further experiments, in which treatment with this solution occurred after the ischemia. The release of H₂S is brought about by hydrolyzing enzymes that are present in the heart endothelial cells. The protective effect of the new H₂S-releasing ibuprofen molecule can be confirmed by the infarct sizes of hearts using the triphenyl-tetrazolium chloride (TTC) staining method. Furthermore, we aim to define the effect of the H₂S-releasing ibuprofen derivative on autophagic and apoptotic processes in damaged hearts by investigating the molecular markers of these events with western blotting and immunohistochemistry techniques. Our further studies will include the examination of LC3I/II, p62, Beclin1, caspase-3, and other apoptotic molecules. We hope that confirming the protective effect of the new H₂S-releasing ibuprofen molecule will open a new possibility for the development of more effective cardioprotective agents exerting fewer side effects. Acknowledgment: This study was supported by the grants of NKFIH-K-124719 and the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of GINOP-2.3.2-15-2016-00043.

Keywords: autophagy, hydrogen sulfide, ibuprofen, ischemia, reperfusion

Procedia PDF Downloads 140
197 Interactive Garments: Flexible Technologies for Textile Integration

Authors: Anupam Bhatia

Abstract:

Upon reviewing the literature and the pragmatic work done in the field of e-textiles, it is observed that the applications of wearable technologies have found steady growth in the military, medical, industrial, and sports fields, whereas fashion is at a loss to know how to treat this technology and bring it to market. The purpose of this paper is to understand the practical issues of the integration of electronics in garments: cutting patterns for mass production, maintaining the basic properties of textiles, and the daily maintenance of garments, which hinder the wide adoption of interactive fabric technology within fashion and leisure wear. To understand these practical hindrances, an experimental and laboratory approach is taken. 'Techno Meets Fashion' has been an interactive fashion project in which sensor technologies have been embedded in textiles, resulting in a set of ensembles such as light-emitting garments, sound-sensing garments, proximity garments, shape memory garments, etc. Smart textiles, especially in the form of textile interfaces, are drastically underused in fashion and other lifestyle product design. Clothing and some other textile products must be washable, which subjects the interactive elements to water and chemical immersion, physical stress, and extreme temperatures. The current state of the art tends to be too fragile for this treatment. The process for mass-producing traditional textiles becomes difficult with interactive textiles, as cutting patterns from larger rolls of cloth and sewing them together to make garments breaks and reforms electronic connections in an uncontrolled manner. Because of this, interactive fabric elements are integrated by hand into textiles produced by standard methods. The Arduino has surely made embedding electronics into textiles much easier than before; even so, electronics are not yet integral to daily-wear garments. Soft and flexible MEMS interfaces (micro-sensors and micro-actuators) can be an option to make this possible by blending electronics within e-textiles in a way that is seamless and still retains the functions of the circuits as well as of the garment. Smart clothes, which offer simultaneously a challenging design and utility value, can only be mass-produced if the demands of the body are taken care of, i.e., protection, anthropometry, ergonomics of human movement, and thermo-physiological regulation.

Keywords: ambient intelligence, proximity sensors, shape memory materials, sound sensing garments, wearable technology

Procedia PDF Downloads 393
196 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as permanent areal observations at near-reference quality. The model EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic, and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL-traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed in full (100%) in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration. This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
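
As a rough illustration of how an OPC's size-channel counts become a mass concentration (the conversion behind PM values), one can assume spherical particles, take a representative diameter per channel, and sum volume times density; all numbers below are placeholders, not GRIMM's proprietary enviro-algorithm.

```python
import numpy as np

edges_um = np.array([0.25, 0.5, 1.0, 2.5, 10.0])          # channel bounds, assumed
counts_per_l = np.array([12000.0, 3500.0, 400.0, 25.0])   # fake channel counts
density = 1.65e3                                          # particle density [kg/m^3], assumed

d_mid = np.sqrt(edges_um[:-1] * edges_um[1:]) * 1e-6      # geometric mid-diameters [m]
mass_per_particle = density * np.pi / 6.0 * d_mid**3      # sphere mass [kg]
ug_per_m3 = (counts_per_l * 1000.0 * mass_per_particle).sum() * 1e9
print(f"PM10 ~ {ug_per_m3:.1f} ug/m3")
```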

Keywords: aerosol research, aerial observation, fence line monitoring, wild fire detection

Procedia PDF Downloads 150
195 Design and Control of a Brake-by-Wire System Using a Permanent Magnet Synchronous Motor

Authors: Daniel S. Gamba, Marc Sánchez, Javier Pérez, Juan J. Castillo, Juan A. Cabrera

Abstract:

The conventional hydraulic braking system operates through the activation of a master cylinder and solenoid valves that distribute and regulate brake fluid flow, adjusting the pressure at each wheel to prevent locking during sudden braking. However, in recent years, there has been a significant increase in the integration of electronic units into various vehicle control systems. In this context, one of the most recently researched technologies is the brake-by-wire system, which combines electronic, hydraulic, and mechanical technologies to manage braking. This proposal introduces the design and control of a brake-by-wire system, which will be part of a fully electric and teleoperated vehicle. This vehicle will have independent four-wheel drive, braking, and steering systems. The vehicle will be operated by embedded controllers programmed on a Speedgoat test system, which allows programming through Simulink and provides real-time capabilities. The braking system comprises all mechanical and electrical components, a vehicle control unit (VCU), and an electronic control unit (ECU). The mechanical and electrical components include a permanent magnet synchronous motor from ODrive and its inverter, the mechanical transmission system responsible for converting torque into pressure, and the hydraulic system that transmits this pressure to the brake caliper. The VCU is responsible for controlling the pressure and communicates with the other components through the CAN protocol, minimizing response times. The ECU, in turn, transmits the information obtained by a sensor installed in the caliper to the central computer, enabling the control loop to continuously regulate pressure by controlling the motor's speed and current. To achieve this, three controllers are used, operating in a nested configuration for effective control. Since the computer allows programming in Simulink, a digital model of the braking system has been developed in Simscape, which makes it possible to reproduce different operating conditions, faithfully simulate the performance of alternative brake control systems, and compare the results with data obtained in various real tests. These tests involve evaluating the system's response to sinusoidal and square wave inputs at different frequencies, with the results compared to those obtained from conventional braking systems.
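
The nested three-controller arrangement can be sketched as cascaded PI loops, with pressure commanding speed and speed commanding current; the gains and the 1 kHz rate are assumptions, as the actual controllers run on the Speedgoat system.

```python
from dataclasses import dataclass

@dataclass
class PI:
    kp: float
    ki: float
    dt: float
    integ: float = 0.0
    def step(self, error: float) -> float:
        self.integ += error * self.dt
        return self.kp * error + self.ki * self.integ

dt = 1e-3                                   # 1 kHz control rate, assumed
pressure_pi = PI(kp=50.0, ki=10.0, dt=dt)   # outer loop -> speed setpoint
speed_pi = PI(kp=0.8, ki=5.0, dt=dt)        # middle loop -> current setpoint
current_pi = PI(kp=2.0, ki=400.0, dt=dt)    # inner loop -> voltage command

def control_step(p_ref, p_meas, w_meas, i_meas):
    """One tick of the cascade: each loop feeds the next one's setpoint."""
    w_ref = pressure_pi.step(p_ref - p_meas)
    i_ref = speed_pi.step(w_ref - w_meas)
    return current_pi.step(i_ref - i_meas)   # voltage to the inverter
```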

Keywords: braking, CAN protocol, permanent magnet motor, pressure control

Procedia PDF Downloads 19
194 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, on average, more than 10 tropical cyclones each year that come within damaging reach, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs, and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair need for local buildings exposed to the hazard, and 4) a financial module addressing policy conditions to estimate the losses incurred as a result. The events module is comprised of events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a super-stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions, followed by vulnerability models, lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as steel-confined concrete, SRC (Steel-Reinforced Concrete), and high-rise buildings.
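
A toy version of the modular chain described above, with an invented vulnerability curve and event set: each event's probability weights the loss implied by its hazard intensity at the site.

```python
import numpy as np

events = [  # (annual probability, hazard intensity at the site) - invented
    (0.01, 0.6), (0.004, 0.9), (0.002, 1.2),
]
replacement_value = 2_000_000.0     # exposure, assumed

def damage_ratio(intensity):
    """Placeholder vulnerability curve: repair need as a fraction of value."""
    return float(np.clip(0.8 * (intensity - 0.3), 0.0, 1.0))

annual_expected_loss = sum(p * damage_ratio(i) * replacement_value
                           for p, i in events)
print(f"AEL ~ {annual_expected_loss:,.0f}")
```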

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 269
193 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments

Authors: Rohit Dey, Sailendra Karra

Abstract:

This paper addresses the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (i.e., GPS) shared by the entire swarm, which limits their application in real-world scenarios. This is because robots in a swarm need to share information about their locations and about signals from targets in order to decide their future course of action, and this information is only meaningful when they all share the same coordinate frame. The paper addresses this issue by eliminating the dependency of the search algorithm on a predetermined global coordinate frame: the relative coordinate frames of individual robots are unified whenever the robots are within communication range, making the system more robust in real scenarios. Our algorithm assumes that all robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows a Lévy walk for random exploration until it comes into range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their coordinate frames, on the basis of which we unify the frames. They can then share information about the areas already explored, about the surroundings, and about target signals from their locations, and make decisions about their future movement according to the search algorithm. During exploration, there can be several small groups of robots, each with its own coordinate system, but eventually all robots are expected to fall under one global coordinate frame in which they can communicate information on the exploration area using swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), run without global information, still achieves results comparable to basic PSO working with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms working in GPS-denied environments.
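A minimal sketch of one way two robots could unify their relative frames from mutual range/bearing measurements, as described above; this is a generic 2-D rigid-transform construction under assumed sensor conventions, not necessarily the paper's exact unification rule.

```python
import math

def unify_frames(pose_a, pose_b, range_ab, bearing_ab, bearing_ba):
    """
    pose_a: (x, y, heading) of robot A in its own frame A.
    pose_b: (x, y, heading) of robot B in its own frame B.
    range_ab: distance between the robots (identical in both frames).
    bearing_ab: bearing of B seen by A, relative to A's heading.
    bearing_ba: bearing of A seen by B, relative to B's heading.
    Returns (rotation, translation) mapping frame-B points into frame A.
    """
    ax, ay, ah = pose_a
    bx, by, bh = pose_b
    # B's position expressed in A's frame, from A's measurement.
    bx_in_a = ax + range_ab * math.cos(ah + bearing_ab)
    by_in_a = ay + range_ab * math.sin(ah + bearing_ab)
    # The B->A direction is known in both frames, which fixes the relative rotation.
    rot = (ah + bearing_ab + math.pi) - (bh + bearing_ba)
    # Translation aligns B's own position with its position as seen from A.
    tx = bx_in_a - (bx * math.cos(rot) - by * math.sin(rot))
    ty = by_in_a - (bx * math.sin(rot) + by * math.cos(rot))
    return rot, (tx, ty)

def to_frame_a(point, rot, trans):
    """Re-express a frame-B point (e.g., an explored cell) in frame A."""
    x, y = point
    return (x * math.cos(rot) - y * math.sin(rot) + trans[0],
            x * math.sin(rot) + y * math.cos(rot) + trans[1])
```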

Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems

Procedia PDF Downloads 137
192 Multi-Analyte Indium Gallium Zinc Oxide-Based Dielectric Electrolyte-Insulator-Semiconductor Sensing Membranes

Authors: Chyuan Haur Kao, Hsiang Chen, Yu Sheng Tsai, Chen Hao Hung, Yu Shan Lee

Abstract:

Biosensors based on dielectric electrolyte-insulator-semiconductor (EIS) sensing membranes have been intensively investigated because of their simple fabrication, low cost, and fast response. However, to enhance their sensing performance, it is worthwhile to explore alternative materials, distinct processes, and novel treatments. An ISFET can be viewed as a variation of a MOSFET with the dielectric oxide layer acting as the sensing membrane; the modulation of the gate work function caused by electrolytes at various ion concentrations can then be used to calculate those concentrations. Recently, owing to the advancement of CMOS technology, several high-dielectric-constant materials have been adopted as the sensing membranes of EIS structures. An EIS with a stacked SiO₂ layer between the sensing membrane and the silicon substrate exhibited a high pH sensitivity and good long-term stability. IGZO is a wide-bandgap (~3.15 eV) semiconductor with several desirable properties, including good transparency, high electron mobility, and compatibility with CMOS technology. IGZO was deposited by reactive radio-frequency (RF) sputtering on a p-type silicon wafer at various Ar:O₂ gas ratios and was treated with rapid thermal annealing (RTA) in an O₂ ambient. The sensing performance, including sensitivity, hysteresis, and drift rate, was measured, and XRD, XPS, and AFM analyses were used to study the material properties of the IGZO membrane. Moreover, IGZO was used as a sensing membrane in dielectric EIS biosensor structures. In addition to traditional pH sensing, detection of Na⁺, K⁺, urea, glucose, and creatinine concentrations was performed. Post-RTA treatment was confirmed to improve the material properties and enhance the multi-analyte sensing capability for various ions and chemicals in solution. In this study, the IGZO sensing membrane annealed in an O₂ ambient exhibited higher sensitivity, higher linearity, higher H⁺ selectivity, lower hysteresis voltage, and a lower drift rate. These results indicate that the IGZO dielectric sensing membrane on the EIS structure is promising for future biomedical device applications.
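A minimal sketch of how pH sensitivity and linearity are commonly extracted from EIS measurements: a linear fit of the reference-voltage shift against buffer pH. The data points below are invented for illustration and do not come from the paper.

```python
import numpy as np

ph    = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])          # buffer solutions
v_ref = np.array([0.12, 0.23, 0.35, 0.46, 0.57, 0.69])      # volts, illustrative

# Least-squares line: slope = sensitivity (V/pH), R^2 = linearity.
slope, intercept = np.polyfit(ph, v_ref, 1)
pred = slope * ph + intercept
r_squared = 1.0 - np.sum((v_ref - pred) ** 2) / np.sum((v_ref - v_ref.mean()) ** 2)

print(f"sensitivity: {slope * 1e3:.1f} mV/pH (Nernst limit ~59.2 mV/pH at 25 C)")
print(f"linearity (R^2): {r_squared:.4f}")
```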

Keywords: dielectric sensing membrane, IGZO, hydrogen ion, plasma, rapid thermal annealing

Procedia PDF Downloads 251
191 Hibiscus Sabdariffa Extracts: A Sustainable and Eco-Friendly Resource for Multifunctional Cellulosic Fibers

Authors: Mohamed Rehan, Gamil E. Ibrahim, Mohamed S. Abdel-Aziz, Shaimaa R. Ibrahim, Tawfik A. Khattab

Abstract:

The utilization of natural products in textile finishing for multifunctional applications without side effects is a highly motivating goal. Hibiscus sabdariffa has traditionally been used in many folk-medicine applications. To develop an additional use for Hibiscus sabdariffa, a process was designed in which bioactive compounds are extracted from the plant and then applied as a finish to cellulosic fibers, enabling cleaner production of value-added textile fibers with multifunctional applications. The objective of this study is to explore, identify, and evaluate the bioactive compounds extracted from Hibiscus sabdariffa with different solvents via an ultrasonic technique, as potential eco-friendly agents for multifunctional cellulosic fabrics, following two approaches. In the first approach, Hibiscus sabdariffa extract was used as a sustainable, eco-friendly source for the simultaneous coloration and multi-finishing of cotton fabrics via the in situ incorporation of nanoparticles (silver and metal oxide). In the second approach, Hibiscus sabdariffa extracts were microencapsulated and then coated onto cotton gauze to introduce multifunctional healthcare applications. The effect of solvent type, with extraction accelerated by ultrasound, on the phytochemical, antioxidant, and volatile compounds of Hibiscus sabdariffa was examined. The surface morphology and elemental content of the treated fabrics were explored using Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDX). The multifunctional properties of the treated fabrics, including coloration, sensing properties, protective properties against pathogenic microorganisms and UV radiation, and wound-healing properties, were evaluated. The results showed that water, as well as ethanol/water, was the preferred solvent for extracting natural compounds from Hibiscus sabdariffa, giving high extract yield, total phenolic content, flavonoid content, and antioxidant activity. These natural compounds were used to functionalize cellulosic fibers, imparting a faint-to-dark red color, antimicrobial activity against different organisms, and antioxidant as well as UV-protection properties. The encapsulation of Hibiscus sabdariffa extracts, as well as wound healing, remains under consideration and evaluation. Overall, the current study presents a sustainable and eco-friendly approach to designing cellulosic fabrics for multifunctional medical and healthcare applications.

Keywords: cellulosic fibers, Hibiscus sabdariffa extract, multifunctional application, nanoparticles

Procedia PDF Downloads 145
190 Synthesis of High-Antifouling Ultrafiltration Polysulfone Membranes Incorporating Low Concentrations of Graphene Oxide

Authors: Abdulqader Alkhouzaam, Hazim Qiblawey, Majeda Khraisheh

Abstract:

Membrane treatment for desalination and wastewater treatment is one of the promising routes to affordable clean water. It is a developing technology throughout the world and is considered the most effective and economical method available. However, the limitations of membranes' mechanical and chemical properties restrict their industrial applications. Hence, developing novel membranes has been the focus of most studies in the water treatment and desalination sector, aiming to find new materials that improve separation efficiency while reducing membrane fouling, the most important challenge in this field. Graphene oxide (GO) is one of the materials recently investigated in the membrane water treatment sector. In this work, ultrafiltration polysulfone (PSF) membranes with high antifouling properties were synthesized by incorporating different loadings of GO. GO with a high oxidation degree was synthesized using a modified Hummers' method and characterized using different analytical techniques, including Fourier transform infrared spectroscopy with a universal attenuated total reflectance sensor (FTIR-UATR), Raman spectroscopy, and CHNSO elemental analysis. CHNSO analysis showed a high oxidation degree of the GO, represented by its oxygen content (50 wt.%). Ultrafiltration PSF membranes incorporating GO were then fabricated using the phase inversion technique. The prepared membranes were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM), which showed a clear effect of GO on the physical structure and morphology of the PSF. The water contact angle of the membranes was measured and showed better hydrophilicity of the GO membranes compared to pure PSF, owing to the hydrophilic nature of GO. The separation properties of the prepared membranes were investigated using a cross-flow membrane system, and the antifouling properties were studied using bovine serum albumin (BSA) and humic acid (HA) as model foulants. GO-based membranes were found to exhibit higher antifouling properties than pure PSF. With BSA, the flux recovery ratio (FRR) increased from 65.4 ± 0.9 % for pure PSF to 84.0 ± 1.0 % at a loading of 0.05 wt.% GO in PSF. With HA as the model foulant, the FRR increased from 87.8 ± 0.6 % to 93.1 ± 1.1 % at 0.02 wt.% GO in PSF. The pure water permeability (PWP) decreased with GO loading, from 181.7 L.m⁻².h⁻¹.bar⁻¹ for pure PSF to 181.1 and 157.6 L.m⁻².h⁻¹.bar⁻¹ at 0.02 and 0.05 wt.% GO, respectively. It can be concluded from these results that incorporating a low loading of GO can enhance the antifouling properties of PSF, thereby extending membrane lifetime and reuse.
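For reference, the flux recovery ratio quoted above is commonly computed as FRR (%) = 100 x (pure-water flux after fouling and cleaning) / (initial pure-water flux); the sketch below uses invented permeate volumes, not the paper's measurements.

```python
def flux(volume_l, area_m2, hours):
    """Permeate flux in L.m^-2.h^-1."""
    return volume_l / (area_m2 * hours)

j_w1 = flux(volume_l=1.82, area_m2=0.01, hours=1.0)  # before BSA fouling
j_w2 = flux(volume_l=1.53, area_m2=0.01, hours=1.0)  # after fouling and cleaning

frr = 100.0 * j_w2 / j_w1
print(f"FRR = {frr:.1f} %")  # a higher FRR indicates better antifouling
```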

Keywords: antifouling properties, GO based membranes, hydrophilicity, polysulfone, ultrafiltration

Procedia PDF Downloads 143
189 Process Improvement and Redesign of the Immuno Histology (IHC) Lab at MSKCC: A Lean and Ergonomic Study

Authors: Samantha Meyerholz

Abstract:

MSKCC offers patients cutting-edge cancer care with the highest quality standards. However, many patients and industry members do not realize that the operations of the Immuno Histology (IHC) Lab are the backbone for carrying out this mission. The IHC lab manufactures blocks and slides containing critical tissue samples that will be read by a pathologist to diagnose and dictate a patient's treatment course. The lab processes 200 requests daily, generating approximately 2,000 slides and 1,100 blocks each day. Lab material is transported through labeling, cutting, staining, and sorting manufacturing stations, while being managed by multiple techs throughout the space. The quality of the stain, as well as the wait times associated with processing requests, is directly associated with patients receiving rapid treatments and having a wider range of care options. This project aims to improve slide request turnaround time for rush and non-rush cases, while increasing the quality of each request filled (no missing slides or poorly stained items). Rush cases are to be filled in less than 24 hours, while standard cases are allotted a 48-hour time period. Reducing turnaround times enables patients to communicate sooner with their clinical team regarding their diagnosis, ultimately leading to faster treatments and potentially better outcomes. Additional project goals included streamlining tech and material workflow, while reducing waste and increasing efficiency. The project followed a DMAIC structure with emphasis on lean and ergonomic principles that could be integrated into an evolving lab culture. Load times and batching processes were analyzed using process mapping, FMEA analysis, waste analysis, engineering observation, 5S, and spaghetti diagramming. Reducing lab technician movement, as well as improving body position at each workstation, was a top concern of pathology leadership. With new equipment being brought into the lab to carry out workflow improvements, screen and tool placement was discussed with the techs in focus groups to reduce variation and increase comfort throughout the workspace. 5S analysis was completed in two phases in the IHC lab, helping to drive solutions that reduced rework and tech motion. The IHC lab plans to continue utilizing these techniques to further reduce the time gap between tissue analysis and cancer care.

Keywords: engineering, ergonomics, healthcare, lean

Procedia PDF Downloads 223
188 Work Related Musculoskeletal Disorder: A Case Study of Office Computer Users in Nigerian Content Development and Monitoring Board, Yenagoa, Bayelsa State, Nigeria

Authors: Tamadu Perry Egedegu

Abstract:

Rapid growth in the use of electronic data systems has affected both employees and the workplace. Experience shows that jobs with multiple risk factors have a greater likelihood of causing Work-Related Musculoskeletal Disorders (WRMSDs), depending on the duration, frequency, and/or magnitude of exposure to each; it is therefore important that ergonomic risk factors be considered in light of their combined effect. In particular, awkward posture and long hours in front of visual display terminals can result in WRMSDs. This study investigated musculoskeletal disorders among office workers and contributes to awareness of the causes and consequences of WRMSDs arising from a lack of ergonomics training. The study used an observational cross-sectional design. A sample of 109 respondents was drawn from the target population through a purposive sampling method. Data came from both primary and secondary sources: primary data were collected through questionnaires, and secondary data were sourced from journals, textbooks, and internet materials. Questionnaires were the main instrument for data collection and were designed in a YES/NO format according to the study objectives. Content validity approval was used to ensure that the variables were adequately covered. The reliability of the instrument was established through the test-retest method, yielding a reliability index of 0.84. The data collected from the field were analyzed with descriptive statistics (charts, percentages, and means). The study found that the most affected body regions were the upper back, followed by the lower back, neck, wrist, shoulder, and eyes, while the least affected body parts were the knee, calf, and ankle. Furthermore, the prevalence of work-related musculoskeletal disorders was linked with long working hours (6-8 hours per day), lack of back support on seats, glare on the monitor, inadequate regular breaks, and repetitive motion of the upper limbs and wrist when using the computer. Finally, based on these findings, recommendations were made to reduce the prevalence of WRMSDs among office workers.
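A minimal sketch of the test-retest reliability check mentioned above, treating the reliability index as a correlation between the two questionnaire administrations; the scores are invented, and the paper does not state which coefficient was used.

```python
import numpy as np

# Invented questionnaire scores from two administrations of the same instrument.
test   = np.array([14, 9, 17, 12, 20, 8, 15, 11, 18, 13])
retest = np.array([13, 10, 16, 12, 19, 9, 14, 12, 17, 12])

# Pearson correlation between administrations serves as the reliability index.
reliability = np.corrcoef(test, retest)[0, 1]
print(f"test-retest reliability: {reliability:.2f}")
```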

Keywords: work related musculoskeletal disorder, Nigeria, office computer users, ergonomic risk factor

Procedia PDF Downloads 241
187 Long-Term Foam Roll Intervention Study of the Effects on Muscle Performance and Flexibility

Authors: T. Poppendieker

Abstract:

An innovative tool for self-myofascial release is widely and increasingly used among athletes of various sports. Application of the foam roll is suggested to improve muscle performance and flexibility. Attempts to examine its acute and longer-term effects have been made over the past ten years; however, the results regarding muscle performance have been inconsistent. It has been suggested that regular use over a long period produces a different outcome that does improve muscle performance. This study examines the long-term effects of regular foam rolling combined with a short plyometric routine versus the same plyometric routine alone on muscle performance and flexibility over a period of six weeks. Results of the counter movement jump (CMJ), squat jump (SJ), and isometric maximal force (IMF) of a 90° horizontal squat in a leg press will serve as parameters for muscle performance. Data on the range of motion (ROM) in the sit-and-reach test will be used as the parameter for the flexibility assessment. Muscle activation will be measured throughout all tests. Twenty male and twenty female members of a Frankfurt-area fitness center chain (7.11), with an average age of 25 years, will be recruited. Women and men will be randomly assigned to a foam roll (FR) group and a control group. All participants will practice their assigned routine three times a week over the period of six weeks. Tests on CMJ, SJ, IMF, and ROM will be taken before and after the intervention period. The statistical software SPSS 22 will be used to analyze the CMJ, SJ, IMF, and ROM data, under consideration of muscle activation, using a 2 × 2 × 2 (time of measurement × gender × group) analysis of variance with repeated measures and dependent t-test analyses of pre- and post-test results. The alpha level for statistical significance will be set at p ≤ 0.05. It is hypothesized that a significant gender-based difference in outcome will be observed in all four tests. It is further hypothesized that both groups may show significant improvements in the CMJ and SJ after the six-week period, with the FR group achieving the greater improvement in the two jump tests. Moreover, the FR group may increase IMF as well as flexibility, whereas the control group may not show similar progress. The results of this study are important for understanding the long-term effects of regular foam roll application. The collected information may help motivate the incorporation of foam rolling into training routines in order to improve athletic performance.
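A minimal sketch of the planned dependent (paired) t-test on pre- and post-test data, here for CMJ height, with invented sample values; the study itself will run this analysis in SPSS 22.

```python
import numpy as np
from scipy.stats import ttest_rel

# Invented CMJ heights (cm) for the same participants before and after training.
cmj_pre  = np.array([31.2, 28.5, 35.1, 30.0, 27.8, 33.4, 29.9, 32.6])
cmj_post = np.array([32.0, 29.4, 35.8, 31.1, 28.2, 34.5, 30.8, 33.1])

# Paired t-test: each participant serves as their own control.
t_stat, p_value = ttest_rel(cmj_post, cmj_pre)
verdict = "significant at p <= 0.05" if p_value <= 0.05 else "not significant"
print(f"t = {t_stat:.2f}, p = {p_value:.4f} -> {verdict}")
```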

Keywords: counter movement jump, foam rolling, isometric maximal force, long term effects, self-myofascial release, squat jump

Procedia PDF Downloads 286
186 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated, and parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete, hence imprecise, and moreover too slow to compute efficiently. Such models may therefore not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, so the model must be adapted manually. This work therefore describes an approach that generates models overcoming the aforementioned limitations by focusing not on physical laws but on measured (sensor) data from real systems. The approach is more general, since it generates models for any system, detached from the scientific background, and it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations involving products of variables, not only single variables, enabling a far more precise representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g., time, energy consumption, operational costs, or a mixture thereof, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error of less than one percent, and the automatic identification of correlations discovered previously unknown relationships. In summary, the approach efficiently computes precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
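A minimal sketch of a data regression that, like the method described, includes products of variables (cross terms) rather than single variables only; it uses generic scikit-learn tooling and synthetic data, not the authors' own implementation.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic sensor data whose output depends on a product of two inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 3.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 0.01, 200)

# A degree-2 expansion adds x1*x2 (and squares), exposing the cross-term
# correlation that a regression on single variables alone would miss.
features = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(features.fit_transform(X), y)

for name, coef in zip(features.get_feature_names_out(["x1", "x2"]), model.coef_):
    print(f"{name:>7s}: {coef:+.3f}")  # the x1*x2 coefficient should be ~3.0
```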

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 409
185 The Principle of a Thought Formation: The Biological Base for a Thought

Authors: Ludmila Vucolova

Abstract:

Thought is a process that underlies consciousness and cognition, and understanding its origin and processes is a longstanding goal of many academic disciplines. By integrating over twenty novel ideas and hypotheses of this theoretical proposal, we can speculate that thought is an emergent property of coded neural events, translating the electro-chemical interactions of the body with its environment: the objects of sensory stimulation, X and Y. The latter is a self-generated feedback entity, resulting from the arbitrary pattern of motion of the body's motor repertory (M). A culmination of these neural events gives rise to a thought: a state of identity between an observed object X and a symbol Y. It manifests as a "state of awareness" or "state of knowing" and forms our perception of the physical world. The values of the variables of a construct, namely X (object), S1 (sense for the perception of X), Y (object), S2 (sense for the perception of Y), and M (motor repertory that produces Y), specify the particular conscious percept at any given time. The proposed principle of interaction between the elements of a construct (X, Y, S1, S2, M) is universal and applies to all modes of communication (normal, deaf, blind, and deaf-blind people) and to various language systems (Chinese, Italian, English, etc.). The particular arrangement of the modalities of each of the three modules, S1 (5 of 5), S2 (1 of 3), and M (3 of 3), defines a specific mode of communication. This multifaceted paradigm demonstrates a predetermined pattern of relationships between X, Y, and M that passes from generation to generation. The presented analysis of cognitive experience encompasses the key elements of embodied cognition theories and accords with the scientific interpretation of cognition as the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. By assembling the novel ideas presented in twelve sections, we can reveal that in the invisible "chaos" there is an order, a structure with landmarks and principles of operation, and that mental processes (thoughts) are physical and have a biological basis. This proposal explains the phenomenon of mental imagery, gives a first insight into the relationship between mental states and brain states, and supports the notion that mind and body are inseparably connected. The findings of this theoretical proposal are supported by current scientific data and substantiated by the records of the evolution of language and human intelligence.

Keywords: agent, awareness, cognitive, element, experience, feedback, first person, imagery, language, mental, motor, object, sensory, symbol, thought

Procedia PDF Downloads 384