Search results for: thermal cycling machine

325 The Effects of Stoke's Drag, Electrostatic Force and Charge on Penetration of Nanoparticles through N95 Respirators

Authors: Jacob Schwartz, Maxim Durach, Aniruddha Mitra, Abbas Rashidi, Glen Sage, Atin Adhikari

Abstract:

NIOSH (National Institute for Occupational Safety and Health) approved N95 respirators are commonly used by workers in construction sites where a large amount of dust is produced by sawing, grinding, blasting, welding, etc., both electrostatically charged and not. A significant portion of airborne particles in construction sites could be nanoparticles generated alongside coarse particles. The penetration of the particles through the masks may differ depending on the size and charge of the individual particle. In field experiments relevant to the current study, we found that nanoparticles in medium size ranges penetrate more frequently than smaller and larger nanoparticles. For example, penetration percentages of nanoparticles of 11.5–27.4 nm into a sealed N95 respirator on a manikin head ranged from 0.59 to 6.59%, whereas those of nanoparticles of 36.5–86.6 nm ranged from 7.34 to 16.04%. The possible causes behind this increased penetration of mid-size nanoparticles through mask filters have not yet been explored. The objective of this study is to identify the causes behind this unusual behavior of mid-size nanoparticles. We have considered such physical factors as the Boltzmann distribution of the particles in thermal equilibrium with the air, the kinetic energy of the particles at impact on the mask, the Stokes drag force, and the electrostatic forces in the mask stopping the particles. When the particles collide with the mask, only those with enough kinetic energy to overcome the energy loss due to the electrostatic forces and the Stokes drag in the mask can pass through it. To understand this process, the following assumptions were made: (1) the effect of Stokes drag depends on the particle velocity at entry into the mask; (2) the electrostatic force is proportional to the charge on the particles, which in turn is proportional to their surface area; (3) the general dependence on electrostatic charge and thickness implies that stronger electrostatic resistance in the mask and thicker fiber layers reduce particle penetration, which is a sensible conclusion. In sampling situations where one mask was soaked in alcohol to eliminate electrostatic interaction, penetration in the mid-range was much larger than for the same mask with electrostatic interaction. The smaller nanoparticles showed almost zero penetration, most likely because of their small kinetic energy, while the larger nanoparticles showed negligible penetration, most likely due to their interaction with their own drag force. Without the electrostatic force, the penetrating fraction for larger particles grows; when the electrostatic force is added, this fraction goes down, so the diminished penetration of larger particles should be due to increased electrostatic repulsion, possibly because of their larger surface area and therefore larger average charge. We have also explored the effect of ambient temperature on nanoparticle penetration and determined that the dependence of particle penetration on temperature is weak over the measurement range of 37–42 °C, since the relevant factor only changes from 3.17×10⁻³ K⁻¹ to 3.22×10⁻³ K⁻¹.
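
As a rough illustration of the quantities discussed above, the following Python sketch estimates the thermal (Boltzmann) kinetic energy, the Stokes drag force, and the inverse-temperature factor 1/T, whose values over 37–42 °C match the range quoted in the abstract. The particle density and air viscosity are illustrative assumptions, not parameters reported in the study, and the Cunningham slip correction is neglected.

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                   # air temperature, K (~37 °C)
mu_air = 1.85e-5            # dynamic viscosity of air, Pa·s (assumed)
rho_p = 2000.0              # particle density, kg/m^3 (assumed, dust-like)

def thermal_speed(d_nm):
    """RMS speed of a particle in thermal equilibrium with the air: (1/2) m v^2 = (3/2) k_B T."""
    r = d_nm * 1e-9 / 2.0
    m = rho_p * (4.0 / 3.0) * np.pi * r**3
    return np.sqrt(3.0 * k_B * T / m)

def stokes_drag(d_nm, v):
    """Stokes drag force F = 3*pi*mu*d*v (slip correction neglected)."""
    return 3.0 * np.pi * mu_air * d_nm * 1e-9 * v

for d in (11.5, 27.4, 36.5, 86.6):             # particle diameters from the abstract, nm
    v = thermal_speed(d)
    print(f"d = {d:5.1f} nm  v_rms = {v:8.2f} m/s  F_drag = {stokes_drag(d, v):.2e} N")

# Inverse-temperature factor over the 37-42 °C range quoted in the abstract
for T_C in (37.0, 42.0):
    print(f"1/T at {T_C} °C = {1.0 / (T_C + 273.15):.2e} K^-1")
```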

Keywords: respiratory protection, industrial hygiene, aerosol, electrostatic force

Procedia PDF Downloads 170
324 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester

Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa

Abstract:

Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low-frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability, and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips comprising 20 CGO and 20 HGO, 305 mm x 30 mm x 0.27 mm, from a supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm, respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card. The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for the calculation of magnetic field strength and flux density, respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so that measurements are repeatable and comparable. The low-noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz, and a 92 kHz bandwidth, was chosen to minimize the influence of thermal noise on the measurements. In order to reduce environmental noise, the yokes, sample, and search coil carrier were placed in a noise-shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is attributed to the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO thus outperforms CGO in both low and high magnetic field applications.
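
The field strength and flux density calculations described above follow from Ampère's law and Faraday's law applied to the primary and secondary windings. The sketch below illustrates this post-processing step on acquired waveforms; the magnetic path length and cross-sectional area are illustrative assumptions derived from the sample dimensions in the abstract, not values reported by the authors, and the input signals are synthetic.

```python
import numpy as np

# Acquisition and winding parameters taken from the abstract
fs = 204800.0        # sampling rate of the NI 4461 card, Hz
N1, N2 = 100, 500    # primary and secondary turns
R_shunt = 4.7        # shunt resistor, ohm
l_m = 0.290          # magnetic path length, m (assumed ~ yoke length)
A = 30e-3 * 0.27e-3  # strip cross-section, m^2 (width x thickness)

def h_and_b(v_shunt, v_sec):
    """Compute H(t) from the shunt voltage and B(t) by integrating the secondary voltage."""
    i_prim = v_shunt / R_shunt
    H = N1 * i_prim / l_m                       # A/m, from Ampere's law
    dt = 1.0 / fs
    B = np.cumsum(v_sec) * dt / (N2 * A)        # T, numerical integration of Faraday's law
    return H, B - B.mean()                      # remove the integration offset

# Example with a synthetic 50 Hz magnetisation cycle
t = np.arange(0, 0.04, 1.0 / fs)
v_shunt = 0.05 * np.sin(2 * np.pi * 50 * t)
v_sec = 2.0 * np.cos(2 * np.pi * 50 * t)
H, B = h_and_b(v_shunt, v_sec)
print(f"Peak H = {H.max():.1f} A/m, peak B = {B.max():.3f} T")
```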

Keywords: flux density, electrical steel, LabVIEW, magnetization

Procedia PDF Downloads 272
323 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. Building operation description in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a sensor network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over one and a half years and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades software), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data. Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap for an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
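
Calibration quality in studies of this kind is commonly quantified by comparing simulated and measured consumption with indices such as NMBE and CV(RMSE). The sketch below is a generic illustration of that comparison, not the authors' own calibration code; the data are invented, and the tolerance thresholds are the commonly cited ASHRAE Guideline 14 values for hourly data.

```python
import numpy as np

def calibration_indices(measured, simulated):
    """Normalized mean bias error and coefficient of variation of the RMSE, in percent."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    diff = measured - simulated
    mean_meas = measured.mean()
    nmbe = 100.0 * diff.sum() / ((len(measured) - 1) * mean_meas)
    cv_rmse = 100.0 * np.sqrt((diff**2).sum() / (len(measured) - 1)) / mean_meas
    return nmbe, cv_rmse

# Hypothetical hourly heating demand, kWh (placeholder data)
measured = np.array([12.1, 11.4, 10.9, 13.0, 14.2, 13.5])
simulated = np.array([11.0, 12.0, 11.5, 12.2, 15.0, 13.0])
nmbe, cv_rmse = calibration_indices(measured, simulated)
print(f"NMBE = {nmbe:.1f} %  CV(RMSE) = {cv_rmse:.1f} %")
print("Within hourly tolerance:", abs(nmbe) <= 10.0 and cv_rmse <= 30.0)
```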

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 132
322 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the formation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of brine pockets until they reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of heat passing through the medium; latent heat also plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to obtain a set of ordinary differential equations. Boundary conditions are chosen according to one of the applicable cases for this type of ice: one side is considered a thermally isolated surface, and the other side is assumed to be suddenly exposed to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen to keep the solution stable and fast. The variations of temperature, brine volume fraction, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation; this rate decreases as time passes. Brine pockets are smaller in regions closer to the colder side than to the warmer side. At the start of the solution, the numerical scheme tends to exhibit instabilities because of the sharp variation of temperature at the start of the process; adjusting the intervals resolves this unstable behavior. The analytical model combined with the numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the process of freezing of salt water and ice accretion on cold structures.
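
A minimal sketch of the Method of Lines discretization described above is given below for plain transient conduction with one insulated face and one face suddenly held at a constant temperature. Latent heat, salt rejection, and the temperature-dependent properties of brine-spongy ice are deliberately omitted, and the property values are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) properties, not the brine-spongy ice values from the study
alpha = 1.1e-6        # thermal diffusivity, m^2/s
L = 0.05              # slab thickness, m
N = 50                # number of spatial nodes
dx = L / (N - 1)
T_init, T_cold = -2.0, -20.0   # initial and imposed boundary temperatures, deg C

def rhs(t, T):
    """Semi-discrete heat equation dT/dt = alpha * d2T/dx2 (Method of Lines)."""
    dTdt = np.empty_like(T)
    dTdt[0] = 0.0                                   # node 0: fixed-temperature boundary
    dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dTdt[-1] = alpha * 2 * (T[-2] - T[-1]) / dx**2  # insulated (zero-flux) boundary
    return dTdt

T0 = np.full(N, T_init)
T0[0] = T_cold
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[600, 1800, 3600])
for t, T in zip(sol.t, sol.y.T):
    print(f"t = {t:5.0f} s  mid-plane T = {T[N // 2]:6.2f} °C")
```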

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 197
321 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of more agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To contribute to this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-Net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-Net is fine-tuned with a class-wise standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high segmentation capacity for graphene oxide crystals, with accuracy and F-score of 95% and 94%, respectively, on the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the developed method is a significant time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
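
After the segmentation step, per-crystal measurements of the kind described above can be extracted with standard image-analysis tools. The sketch below is a generic illustration using scikit-image on a binary segmentation mask; it is not the authors' pipeline, and the pixel-to-nanometre scale and toy mask are placeholder assumptions.

```python
import numpy as np
from skimage import measure

def crystal_statistics(mask, nm_per_pixel=1.0):
    """Label a binary segmentation mask and return per-crystal area and perimeter."""
    labels = measure.label(mask, connectivity=2)      # object delimitation
    stats = []
    for region in measure.regionprops(labels):
        stats.append({
            "label": region.label,
            "area_nm2": region.area * nm_per_pixel**2,
            "perimeter_nm": region.perimeter * nm_per_pixel,
            "centroid_px": region.centroid,
        })
    return stats

# Toy mask with two "crystals" (placeholder for a U-Net output)
mask = np.zeros((64, 64), dtype=bool)
mask[5:20, 5:25] = True
mask[40:55, 30:60] = True
for s in crystal_statistics(mask, nm_per_pixel=2.5):
    print(s["label"], round(s["area_nm2"], 1), round(s["perimeter_nm"], 1))
```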

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 137
320 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network, ANN) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) trained to exploit the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with an accuracy and F1 score greater than 96% with the proposed method.
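
As a baseline illustration of the one-class classification stage described above (a standard OCC, not the proposed OCCNN2), the following sketch trains an OCC model on fundamental frequencies drawn from normal conditions and flags anomalous points; all numbers are synthetic placeholders rather than Z-24 data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic "first four fundamental frequencies" (Hz): healthy vs. shifted (damaged) data
healthy = rng.normal(loc=[3.9, 5.0, 9.8, 10.3], scale=0.05, size=(500, 4))
damaged = rng.normal(loc=[3.7, 4.8, 9.5, 10.0], scale=0.05, size=(100, 4))

scaler = StandardScaler().fit(healthy)
occ = OneClassSVM(kernel="rbf", nu=0.02, gamma="scale")
occ.fit(scaler.transform(healthy))                  # trained on normal-condition points only

pred_h = occ.predict(scaler.transform(healthy))     # +1 = normal, -1 = anomaly
pred_d = occ.predict(scaler.transform(damaged))
print("false alarms on healthy data:", np.mean(pred_h == -1))
print("detected anomalies on damaged data:", np.mean(pred_d == -1))
```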

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 97
319 Facial Recognition of University Entrance Exam Candidates using FaceMatch Software in Iran

Authors: Mahshid Arabi

Abstract:

In recent years, remarkable advancements in the fields of artificial intelligence and machine learning have led to the development of facial recognition technologies. These technologies are now employed in a wide range of applications, including security, surveillance, healthcare, and education. In the field of education, the identification of university entrance exam candidates has been one of the fundamental challenges. Traditional methods such as ID cards and handwritten signatures are not only inefficient and prone to fraud but also susceptible to errors. In this context, utilizing advanced technologies like facial recognition can be an effective and efficient solution to increase the accuracy and reliability of identity verification in entrance exams. This article examines the use of FaceMatch software for recognizing the faces of university entrance exam candidates in Iran. The main objective of this research is to evaluate the efficiency and accuracy of FaceMatch software in identifying university entrance exam candidates in order to prevent fraud and ensure the authenticity of individuals' identities. Additionally, this research investigates the advantages and challenges of using this technology in Iran's educational systems. The research was conducted using an experimental method and random sampling. In this study, 1000 university entrance exam candidates in Iran were selected as samples. The facial images of these candidates were processed and analyzed using FaceMatch software. The software's accuracy and efficiency were evaluated using various metrics, including accuracy rate, error rate, and processing time. The results indicated that FaceMatch software could identify candidates with a precision of 98.5%, with an error rate of less than 1.5%, demonstrating high efficiency in facial recognition. Additionally, the average processing time for each candidate's image was less than 2 seconds. Statistical evaluation of the results using precise statistical tests, including analysis of variance (ANOVA) and the t-test, showed that the observed differences were significant and that the software's accuracy in identity verification is high. The findings of this research suggest that FaceMatch software can be effectively used as a tool for identifying university entrance exam candidates in Iran. This technology not only enhances security and prevents fraud but also simplifies and streamlines the exam administration process. However, challenges such as preserving candidates' privacy and the costs of implementation must also be considered. The use of facial recognition technology with FaceMatch software in Iran's educational systems can be an effective solution for preventing fraud and ensuring the authenticity of university entrance exam candidates' identities. Given the promising results of this research, it is recommended that this technology be more widely implemented and utilized in the country's educational systems.

Keywords: facial recognition, FaceMatch software, Iran, university entrance exam

Procedia PDF Downloads 3
318 Concepts of the Covid-19 Pandemic and the Implications of Vaccines for Health Security in Nigeria and Diasporas

Authors: Wisdom Robert Duruji

Abstract:

The outbreak of SARS-CoV-2 serotype infection was recorded in January 2020 in Wuhan City, Hubei Province, China. This study examines the concepts of the COVID-19 pandemic and the implications of vaccines for health security in Nigeria and the diaspora. It challenges the widely accepted assumption that the first case of coronavirus infection in Nigeria was recorded on February 27th, 2020, in Lagos. The study utilizes a range of research methods to achieve its objectives, including the double-layered culture technique, literature review, website knowledge, Google searches, news media information, academic journals, fieldwork, and on-site observations. These diverse methods allow for a comprehensive analysis of the concepts and implications being studied. The study finds that coronavirus infection can be asymptomatic; the antigenicity of the leukocytes (white blood cells), which produce immunogenic haptens or interferons (α, β and γ) that fight infectious parasites, may constitute an immune response that prevents severe virulence in healthy individuals, which would explain why healthy coronavirus patients in Nigeria recovered naturally two to three weeks after the onset of infection and tested negative. However, this study finds the fatality data from the Nigerian Centre for Disease Control (NCDC) to be incorrect; it argues that the fatalities were primarily due to underlying ailments, hunger, and malnutrition in debilitated, comorbid, or compromised patients. The study concludes that the kits and the Polymerase Chain Reaction (PCR) machine currently used by the NCDC for testing and confirming COVID-19 in Nigeria are not ideal; as programmed, they do not separate the strain into its specific serotypes within the genus coronavirus and the family Coronaviridae, and they might have confirmed patients with febrile symptoms caused by cough, catarrh, typhoid, and malaria parasites as COVID-19 positive. Therefore, the study suggests that the coronavirus species circulating in Nigeria are opportunistic parasites that thrive under human immunosuppressed conditions, like the herpesvirus; they cannot be eradicated by vaccines, and the only virucides are interferons, immunoglobulins, and probably synthetic antiviral guanosine drugs like copegus or ribavirin. The findings emphasize that COVID-19 is not the primary pandemic disease in Nigeria and that the lockdown was a mirage and unnecessary; rather, the pandemic diseases in Nigeria are corruption, nepotism, hunger, and malnutrition, caused by ineptitude in governance, religious dichotomy, and ethnic conflicts.

Keywords: coronavirus, corruption, Covid-19 pandemic, lock-down, Nigeria, vaccine

Procedia PDF Downloads 40
317 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study

Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart

Abstract:

Background: Social media platforms play a significant role in the mediated consumption of sport, especially so for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) bring users together and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar is going to be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22), and it hosted the FIFA Arab Cup 2021 (FAC21) to serve as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences about FAC21 and provides insight to enhance the public experience for the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21, T1 (01 Oct 2021 to 29 Nov 2021); during FAC21, T2 (30 Nov 2021 to 18 Dec 2021); and after FAC21, T3 (19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed, (2) emojis, punctuation, and stop words were removed, and (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-RoBERTa-base model from Hugging Face was applied to identify the sentiment in the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in the different periods. However, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21; therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people's expectations before FAC21 (e.g., stadium, transportation, accommodation, visas, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to leverage outstanding outcomes.
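
A minimal sketch of the sentiment-labelling step is shown below using the Hugging Face transformers pipeline. The exact checkpoint name (cardiffnlp/twitter-xlm-roberta-base-sentiment) is an assumption consistent with the model family named in the abstract, and the tweets are invented examples.

```python
from transformers import pipeline

# Multilingual Twitter sentiment model (assumed checkpoint of the XLM-RoBERTa family named above)
classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
)

tweets = [
    "Great atmosphere at the stadium tonight!",          # invented examples
    "Ticket website keeps crashing, very frustrating.",
]
for tweet, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {tweet}")
```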

Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning

Procedia PDF Downloads 71
316 Syntheses of Anionic Poly(urethanes) with Imidazolium, Phosphonium, and Ammonium as Counter-cations and Their Evaluation for CO2 Separation

Authors: Franciele L. Bernard, Felipe Dalla Vecchia, Barbara B. Polesso, Jose A. Donato, Marcus Seferin, Rosane Ligabue, Jailton F. do Nascimento, Sandra Einloft

Abstract:

The increasing levels of carbon dioxide in the atmosphere related to fossil fuel processing and utilization contribute considerably to global warming. Carbon capture and storage (CCS) technologies appear as one of the key technologies to reduce CO2 emissions, mitigating the effects of climate change. Absorption using amine solutions as solvents has been extensively studied and used in industry for decades. However, solvent degradation and equipment corrosion are two of the main problems in this process. Poly(ionic liquid)s (PILs) are considered promising materials for CCS technology, potentially more environmentally friendly and less energy demanding than traditional materials. PILs possess a unique combination of ionic liquid (IL) features, such as affinity for CO2, thermal and chemical stability, and adjustable properties, coupled with the intrinsic properties of the polymer. This study investigated new poly(ionic liquid)s based on polyurethanes with different ionic liquid cations and their potential for CO2 capture. The PILs were synthesized by the addition of a diisocyanate to a difunctional polyol, followed by an exchange reaction with the ionic liquids 1-butyl-3-methylimidazolium chloride (BMIM Cl), tetrabutylammonium bromide (TBAB), and tetrabutylphosphonium bromide (TBPB). These materials were characterized by Fourier transform infrared spectroscopy (FTIR), proton nuclear magnetic resonance (1H-NMR), atomic force microscopy (AFM), tensile strength analysis, field emission scanning electron microscopy (FESEM), thermogravimetric analysis (TGA), and differential scanning calorimetry (DSC). The CO2 sorption capacities of the PILs were gravimetrically assessed in a magnetic suspension balance (MSB). It was found that the ionic liquid cation influences the compounds' properties as well as their CO2 sorption. The best result for CO2 sorption (123 mgCO2/g at 30 bar) was obtained for the PIL PUPT-TBA. The higher CO2 sorption in PUPT-TBA is probably linked to the fact that the tetraalkylammonium cation, having a higher positive charge density, can interact more strongly with CO2, while the imidazolium charge is delocalized. The comparative CO2 sorption values of PUPT-TBA with the different ionic liquids showed that this material has a greater capacity for capturing CO2 than the ILs, even at higher temperature. This behavior highlights the importance of this study, as the poly(urethane)-based PILs are cheap and versatile materials.

Keywords: capture, CO2, ionic liquids, ionic poly(urethane)

Procedia PDF Downloads 214
315 Impact of Boundary Conditions on the Behavior of Thin-Walled Laminated Column with L-Profile under Uniform Shortening

Authors: Jaroslaw Gawryluk, Andrzej Teter

Abstract:

Simply supported angle columns subjected to uniform shortening are tested. The experimental studies are conducted on a testing machine additionally using the Aramis system and an acoustic emission system. The laminate samples are subjected to axial uniform shortening. The tested columns are loaded from zero up to the maximal load destroying the L-shaped column, which allows the post-buckling behavior of the column to be observed until its collapse. Laboratory tests are performed at a constant cross-bar velocity of 1 mm/min. In order to eliminate stress concentrations between the sample and the supports, flexible pads are used. The analyzed samples are made of carbon-epoxy laminate using the autoclave method. The configuration of the laminate layers is [60,0₂,-60₂,60₃,-60₂,0₃,-60₂,0,60₂]T, where direction 0 is along the length of the profile. The material parameters of the laminate are: Young's modulus along the fiber direction, 170 GPa; Young's modulus transverse to the fiber direction, 7.6 GPa; in-plane shear modulus, 3.52 GPa; in-plane Poisson's ratio, 0.36. The dimensions of all columns are: length, 300 mm; thickness, 0.81 mm; flange width, 40 mm. Next, two numerical models of the column, with and without flexible pads, are developed using the finite element method in Abaqus software. The L-profile laminate column is modeled using S8R shell elements. The layup-ply technique is used to define the sequence of the laminate layers. The grips are modeled with R3D4 discrete rigid elements, while the flexible pad consists of C3D20R solid elements. In order to estimate the moment of first laminate layer damage, the following initiation criteria were applied: the maximum stress criterion and the Tsai-Hill, Tsai-Wu, Azzi-Tsai-Hill, and Hashin criteria. The best agreement with the results was observed for the Hashin criterion. It was found that the use of the pad in the numerical model significantly influences the damage mechanism. The model without pads exhibited much higher stiffness, as evidenced by a greater bifurcation load and damage initiation load in all analyzed criteria, a lower shortening, and less deflection at the column center than in the model with flexible pads. Acknowledgment: The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
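
For reference, the plane-stress Hashin initiation criteria mentioned above can be evaluated directly from the in-plane ply stresses. The sketch below is a generic implementation of those criteria, not the authors' Abaqus model, and the strength values are illustrative assumptions rather than those of the tested carbon-epoxy laminate.

```python
def hashin_2d(s11, s22, t12, Xt, Xc, Yt, Yc, Sl, St):
    """Plane-stress Hashin failure indices; a value >= 1 indicates damage initiation."""
    if s11 >= 0.0:
        fibre = (s11 / Xt) ** 2 + (t12 / Sl) ** 2          # fibre tension
    else:
        fibre = (s11 / Xc) ** 2                            # fibre compression
    if s22 >= 0.0:
        matrix = (s22 / Yt) ** 2 + (t12 / Sl) ** 2         # matrix tension
    else:                                                  # matrix compression
        matrix = ((s22 / (2.0 * St)) ** 2
                  + ((Yc / (2.0 * St)) ** 2 - 1.0) * s22 / Yc
                  + (t12 / Sl) ** 2)
    return fibre, matrix

# Illustrative (assumed) ply strengths in MPa and a sample stress state
strengths = dict(Xt=2000.0, Xc=1200.0, Yt=50.0, Yc=200.0, Sl=90.0, St=60.0)
fibre, matrix = hashin_2d(s11=800.0, s22=30.0, t12=40.0, **strengths)
print(f"fibre index = {fibre:.2f}, matrix index = {matrix:.2f}")
```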

Keywords: angle column, compression, experiment, FEM

Procedia PDF Downloads 184
314 Mechanical Properties of Carbon Fibre Reinforced Thermoplastic Composites Consisting of Recycled Carbon Fibres and Polyamide 6 Fibres

Authors: Mir Mohammad Badrul Hasan, Anwar Abdkader, Chokri Cherif

Abstract:

With the increasing demand for and use of carbon fibre reinforced composites (CFRC), the disposal of carbon fibres (CF) and end-of-life composite parts is gaining tremendous importance, especially with regard to sustainability. Furthermore, a number of processes (e.g., pyrolysis, solvolysis, etc.) are currently available to obtain recycled CF (rCF) from end-of-life CFRC. Since CF waste and rCF may be neither thermally degraded nor landfilled (EU Directive 1999/31/EC), profitable recycling and re-use concepts are urgently necessary. Currently, the market for materials based on rCF mainly consists of random mats (nonwovens) made from short fibres. The strengths of composites that can be achieved from injection-molded components and from nonwovens are between 200 and 404 MPa; these materials are characterized by low performance and are suitable for non-structural applications such as aircraft and vehicle interiors. In contrast, spinning rCF into yarn constructions offers good potential for higher CFRC material properties due to the high fibre orientation and compaction of the rCF. However, no investigation has yet been reported that directly compares the mechanical properties of thermoplastic CFRC manufactured from virgin CF filament yarn and from spun yarns of staple rCF. There is also a lack of understanding of the level of performance achievable in composites made from hybrid yarns consisting of rCF and PA6 fibres. Against this backdrop, extensive research is being carried out at the Institute of Textile Machinery and High Performance Material Technology (ITM) on the development of new thermoplastic CFRC from hybrid yarns consisting of rCF. For this purpose, a process chain has been developed at the ITM, starting from fibre preparation through to the manufacturing of hybrid yarns consisting of staple rCF mixed with thermoplastic fibres. The objective is to apply such hybrid yarns to the manufacturing of load-bearing textile-reinforced thermoplastic CFRCs. In this paper, the development of innovative multi-component core-sheath hybrid yarn structures consisting of staple rCF and polyamide 6 (PA 6) on a DREF-3000 friction spinning machine is reported. Furthermore, unidirectional (UD) CFRCs are manufactured from the developed hybrid yarns, and the mechanical properties of the composites, such as tensile and flexural properties, are analyzed. The results show that the UD composite manufactured from the developed hybrid yarns consisting of staple rCF possesses approximately 80% of the tensile strength and E-modulus of composites produced from virgin CF filament yarn. The results show the great potential of the DREF-3000 friction spinning process to develop composites from rCF for high-performance applications.

Keywords: recycled carbon fibres, hybrid yarn, friction spinning, thermoplastic composite

Procedia PDF Downloads 230
313 Thermoplastic-Intensive Battery Trays for Optimum Electric Vehicle Battery Pack Performance

Authors: Dinesh Munjurulimana, Anil Tiwari, Tingwen Li, Carlos Pereira, Sreekanth Pannala, John Waters

Abstract:

With the rapid transition to electric vehicles (EVs) across the globe, car manufacturers are in need of integrated and lightweight solutions for the battery packs of these vehicles. An integral part of a battery pack is the battery tray, which constitutes a significant portion of the pack's overall weight. Based on the functional requirements, cost targets, and packaging space available, a range of materials, from metals and composites to plastics, is often used to develop these battery trays. This paper considers the design and development of integrated thermoplastic-intensive battery trays, using the available packaging space from a representative EV battery pack. Multiple concepts are presented as proposed alternatives for integrating several connected systems, such as cooling plates and underbody impact protection parts, of a multi-piece incumbent battery pack. The resulting digital prototype was evaluated for several mechanical performance measures, such as mechanical shock, drop, crush resistance, modal analysis, and torsional stiffness. The performance of this alternative design is then compared with the incumbent solution. In addition, insights are gleaned into how these novel approaches can be optimized to meet or exceed the performance of incumbent designs. The preliminary manufacturing feasibility of the optimal solution using injection molding and other manufacturing methods commonly used for thermoplastics is briefly explained. Numerical and analytical evaluations are then performed to show a representative Pareto front of cost vs. volume of the production parts. The proposed solution is observed to offer weight savings of up to 40% at the component level and the elimination of up to two systems in the battery pack of a typical battery EV, while offering the potential to meet the required performance measures highlighted above. These conceptual solutions are also observed to potentially offer secondary benefits, such as improved thermal and electrical isolation, and to be able to achieve complex geometrical features, thus demonstrating the ability to use the complete packaging space available in the vehicle platform considered. The detailed study presented in this paper serves as a valuable reference for researchers across the globe working on the development of EV battery packs, especially those with an interest in employing alternative solutions as part of a mixed-material system to help capture untapped opportunities to optimize performance and meet critical application requirements.

Keywords: thermoplastics, lightweighting, part integration, electric vehicle battery packs

Procedia PDF Downloads 183
312 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates

Authors: Jiro Takagi

Abstract:

Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake of 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this building type were reported in the more recent off-shore-type Tohoku earthquake of 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake of 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has not proceeded quickly for various reasons, including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and placement of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to developing a seismic strengthening method for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used on the roofs and walls of modern Japanese buildings. Residents can perceive a physical change through the reinforcement, as the existing exterior walls are covered with steel plates. Also, this exterior reinforcement can be installed with outdoor work only, thereby reducing inconvenience for residents, since they are not required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently since perfect water protection is not required for the new finish. In this method, the entire exterior surface functions as shear walls, and thus the pull-out force induced by seismic lateral load is significantly reduced as compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, the reinforcing details of the anchors to the foundations are less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified both for the frames and for the connections. It is confirmed that the performance is high enough for bracing general wooden houses.

Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses

Procedia PDF Downloads 205
311 Damage Tolerance of Composites Containing Hybrid, Carbon-Innegra, Fibre Reinforcements

Authors: Armin Solemanifar, Arthur Wilkinson, Kinjalkumar Patel

Abstract:

Carbon fibre (CF) - polymer laminate composites have very low densities (approximately 40% lower than aluminium), high strength, and high stiffness, but in terms of toughness they often require modification. For example, adding rubber or thermoplastic toughening agents is a common way of improving the interlaminar fracture toughness of initially brittle thermoset composite matrices. The main aim of this project was to toughen CF-epoxy laminate composites using hybrid CF fabrics incorporating Innegra™, a commercial highly oriented polypropylene (PP) fibre in which more than 90% of the crystal orientation is parallel to the fibre axis. In this study, the damage tolerance of hybrid (carbon-Innegra, CI) composites was investigated. Laminate composites were produced by resin infusion using: pure CF fabric; fabrics with different ratios of commingled CI; and two different types of pure Innegra fabric (Innegra 1 and Innegra 2). Dynamic mechanical thermal analysis (DMTA) was used to measure the glass transition temperature (Tg) of the composite matrix and values of flexural storage modulus versus temperature. Mechanical testing included drop-weight impact, compression-after-impact (CAI), and interlaminar (short-beam) shear strength (ILSS). Ultrasonic C-scan imaging was used to determine the impact damage area, and scanning electron microscopy (SEM) to observe the fracture mechanisms that occur during failure of the composites. For all composites, 8 layers of fabric were used with a quasi-isotropic sequence of [-45°, 0°, +45°, 90°]s. DMTA showed the Tg of all composites to be approximately the same (123 ±3°C) and the flexural storage modulus (before the onset of Tg) to be highest for the pure CF composite and lowest for the Innegra 1 and 2 composites. The short-beam shear strength of the commingled composites was higher than that of the other composites, while for the Innegra 1 and 2 composites only inelastic deformation failure was observed during the short-beam test. During impact, the Innegra 1 composite withstood up to 40 J without any perforation, while for the CF composite perforation occurred at 10 J. The rate of reduction in compression strength with increasing impact energy was lowest for the Innegra 1 and 2 composites, while the CF composite showed the highest rate. On the other hand, the compressive strength of the CF composite was the highest of all the composites at all impact energy levels. The predominant failure modes for the Innegra composites observed in cross-sections of fractured specimens were fibre pull-out, micro-buckling, and fibre plastic deformation, while fibre breakage and matrix delamination were the major failure modes observed in the commingled composites due to the more brittle behaviour of CF. Thus, Innegra fibres toughened the CF composites, but only at the expense of reduced compressive strength.

Keywords: hybrid composite, thermoplastic fibre, compression strength, damage tolerance

Procedia PDF Downloads 274
310 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer

Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz

Abstract:

Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) is primarily derived from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchanges, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding the proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and characterizing the obtained Pb(II) compounds according to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing for a wide range of coordination numbers, ii) the stereoactivity of the 6s2 lone electron pair leading to a hemidirected or holodirected geometry, iii) a flexible coordination environment, and iv) the possibility to form secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties, which point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new four-nuclear Pb(II) polymer, [Pb4(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL and single-crystal X-ray diffraction methods. In view of the primary Pb–O bonds, Pb1 and Pb2 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries. The topology of the strongest Pb–O bonds was determined as the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination number of Pb centres increased, Pb1 exhibited a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibited a holodirected tricapped trigonal prismatic geometry, and Pb3 exhibited a holodirected bicapped trigonal prismatic geometry. Moreover, the Pb(II) lone pair stereoactivity was confirmed by DFT calculations. The 2D structure was expanded into 3D by the existence of non-covalent O/C–H···π and Pb···π interactions, which was confirmed by the Hirshfeld surface analysis. The above mentioned interactions improve the rigidity of the structure and facilitate the charge and energy transfer between metal centres, making the polymer a promising luminescent compound.

Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions

Procedia PDF Downloads 123
309 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where patients' emotional behavior is gathered. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data, and the existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue of feature selection by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim's cube, which is a 3-dimensional projection of emotions. Monoamine neurotransmitters are a type of chemical messenger in the brain that transmits signals when emotions are perceived; the cube aims to explain the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
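
To make the feature-extraction idea concrete, the sketch below computes an MFCC "image" from an audio clip with librosa, the kind of 2-D representation a CNN such as the proposed Emo-CNN could consume. It is not the authors' architecture; the file path, clip duration, and frame parameters are placeholder assumptions.

```python
import numpy as np
import librosa

def mfcc_image(path, sr=16000, n_mfcc=40, duration=3.0):
    """Load a clip and return an MFCC matrix shaped (n_mfcc, frames, 1) as CNN input."""
    y, sr = librosa.load(path, sr=sr, duration=duration)
    y = librosa.util.fix_length(y, size=int(sr * duration))   # pad/trim to a fixed length
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mfcc = (mfcc - mfcc.mean()) / (mfcc.std() + 1e-8)          # per-clip normalisation
    return mfcc[..., np.newaxis]

# Placeholder path; any short speech clip works
features = mfcc_image("speech_sample.wav")
print("CNN input shape:", features.shape)
```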

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 126
308 (Re)Processing of ND-Fe-B Permanent Magnets Using Electrochemical and Physical Approaches

Authors: Kristina Zuzek, Xuan Xu, Awais Ikram, Richard Sheridan, Allan Walton, Saso Sturm

Abstract:

Recycling of end-of-life REE-based Nd-Fe-B magnets is an important strategy for reducing the environmental dangers associated with rare-earth mining and overcoming the well-documented supply risks related to the REEs. However, challenges in their reprocessing remain. We report on the possibility of direct electrochemical recycling and reprocessing of Nd-Fe(B)-based magnets. In this investigation, we were first able to electrochemically leach the end-of-life NdFeB magnet and then to electrodeposit Nd–Fe using a 1-ethyl-3-methylimidazolium dicyanamide ([EMIM][DCA]) ionic-liquid-based electrolyte. We observed that Nd(III) could not be reduced independently; however, it can be co-deposited on a substrate with the addition of Fe(II). Using the advanced TEM technique of electron energy-loss spectroscopy (EELS), it was shown that Nd(III) is reduced to Nd(0) during the electrodeposition process. This gave new insight into determining the Nd oxidation state, as X-ray photoelectron spectroscopy (XPS) has certain limitations: the binding energies of metallic Nd (Nd0) and neodymium oxide (Nd₂O₃) are very close, i.e., 980.5-981.5 eV and 981.7-982.3 eV, respectively, making it almost impossible to differentiate between the two states. These new insights into the electrodeposition process represent an important step towards efficient recycling of rare earths in metallic form at mild temperatures, thus providing an alternative to high-temperature molten-salt electrolysis and a step closer to depositing Nd-Fe-based magnetic materials. Further, we propose a new concept for recycling sintered Nd-Fe-B magnets by directly recovering the 2:14:1 matrix phase. Via an electrochemical etching method, we are able to recover pure individual 2:14:1 grains that can be re-used for new types of magnet production. In the frame of physical reprocessing, we have successfully synthesized new magnets out of hydrogen (HDDR)-recycled stock with the contemporary technique of pulsed electric current sintering (PECS). The optimal PECS conditions yielded fully dense Nd-Fe-B magnets with a coercivity Hc = 1060 kA/m, which was boosted to 1160 kA/m after the post-PECS thermal treatment. The Br and Hc were improved further: increasing the applied pressure to 100–150 MPa resulted in Br = 1.01 T. We showed that with fine tuning of the PECS and post-annealing, it is possible to revitalize Nd-Fe-B end-of-life magnets. By applying advanced TEM, i.e., atomic-scale Z-contrast STEM combined with EDXS and EELS, the resulting magnetic properties were critically assessed against various types of structural and compositional discontinuities down to the atomic scale, which we believe control the microstructure evolution during the PECS processing route.

Keywords: electrochemistry, Nd-Fe-B, pulsed electric current sintering, recycling, reprocessing

Procedia PDF Downloads 135
307 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [for languages with a tonic accent] are not suitable for and do not account for tonal languages phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary is paired with its corresponding word in the Ekang language (ekaη), and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speech of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. The experimentation confirming the theory led to the design of a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 38
306 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of edge AI devices has yet to be realized. An edge AI device is a high-performance embedded computer that can process complex algorithms: it can collect, process, and store data on its own, and it can run complicated algorithms such as localization, detection, and recognition in real-time applications. The NVIDIA Jetson series, specifically the Jetson Nano, was used in the implementation. The cEEGrid, integrated with the open-source brain-computer interface platform OpenBCI, is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states from the input data properties. The EEG signals were analyzed using the K-Nearest Neighbors (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing stage, each EEG signal received in real time is translated from the time domain to the frequency domain using the Fast Fourier Transform (FFT) in order to observe its frequency bands. To represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and used as features. The next stage is to use these selected features to predict emotion with the KNN classifier; arousal and valence datasets are used to train its parameters. Because classification, recognition of specific classes, and emotion prediction are conducted both online and locally on the edge, the KNN technique improved the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. Deployed at the edge, EEG-based emotion recognition can be employed in applications that can rapidly expand its use in both research and industry.
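As an illustration of the FFT band-power features and KNN classification described above, the following Python sketch (not the authors' code; the sampling rate, band edges, feature set, and synthetic data shapes are assumptions) builds per-band power features for a one-second EEG window and trains a KNN classifier:

```python
# Illustrative sketch: FFT band-power features followed by KNN classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def eeg_features(window: np.ndarray) -> np.ndarray:
    """Per-band power density plus mean and std of the raw window."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2 / window.size
    band_powers = [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]
    return np.array(band_powers + [window.mean(), window.std()])

# Hypothetical training data: 1-second EEG windows with arousal/valence quadrant labels.
rng = np.random.default_rng(0)
X_train = np.stack([eeg_features(rng.standard_normal(FS)) for _ in range(200)])
y_train = rng.integers(0, 4, size=200)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(clf.predict(eeg_features(rng.standard_normal(FS)).reshape(1, -1)))
```

In a real deployment the synthetic windows would be replaced by cEEGrid/OpenBCI streams, and the labels by the arousal-valence training set mentioned in the abstract.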

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 81
305 Experimental Study of Vibration Isolators Made of Expanded Cork Agglomerate

Authors: S. Dias, A. Tadeu, J. Antonio, F. Pedro, C. Serra

Abstract:

The goal of the present work is to experimentally evaluate the feasibility of using vibration isolators made of expanded cork agglomerate. Even though this material, also known as insulation cork board (ICB), has mainly been studied for thermal and acoustic insulation purposes, it has strong potential for use in vibration isolation. However, the adequate design of expanded cork block vibration isolators depends on several factors, such as excitation frequency, static load conditions and the intrinsic dynamic behavior of the material. In this study, transmissibility tests for different static and dynamic loading conditions were performed in order to characterize the material. Since the material's physical properties, density and thickness in particular, can influence the vibro-isolation performance of the blocks, this study covered four mass density ranges and four block thicknesses. A total of 72 expanded cork agglomerate specimens were tested. The test apparatus comprises a vibration exciter connected to an excitation mass that holds the test specimen. The test specimens were loaded successively with steel plates in order to obtain results for different masses. One accelerometer was placed at the top of these masses and another at the base of the excitation mass. The test was performed over a defined frequency range, and the amplitudes registered by the accelerometers were recorded in the time domain. A fast Fourier transform (FFT) was applied to each signal (signal 1: vibration of the excitation mass; signal 2: vibration of the loading mass) to obtain the frequency-domain response, and the maximum amplitude of each spectrum was registered. The ratio between the amplitude (acceleration) of signal 2 and the amplitude of signal 1 gives the transmissibility at each frequency. Repeating this procedure allowed us to plot a transmissibility curve over the frequency range of interest. A number of transmissibility experiments were performed to assess the influence of changing the mass density and thickness of the expanded cork blocks and the experimental conditions (static load and frequency of excitation). The experimental transmissibility tests performed in this study showed that expanded cork agglomerate blocks are a good option for mitigating vibrations. It was concluded that specimens with lower mass density and larger thickness lead to better performance, with higher vibration isolation and a larger range of isolated frequencies. In conclusion, the study of the performance of expanded cork agglomerate blocks presented herein will allow for a more efficient application of expanded cork vibration isolators. This is particularly relevant since this material is a more sustainable alternative to other commonly used, non-environmentally friendly products, such as rubber.
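For illustration, the transmissibility calculation outlined above can be sketched in a few lines of Python (the signal names, sampling rate, and single-tone example are assumptions, not the authors' processing code):

```python
# Minimal sketch of a transmissibility estimate from two accelerometer traces.
import numpy as np

def transmissibility(sig_excitation: np.ndarray, sig_load: np.ndarray, fs: float):
    """Return (excitation frequency, transmissibility) from two time-domain signals."""
    freqs = np.fft.rfftfreq(sig_excitation.size, d=1.0 / fs)
    amp_exc = np.abs(np.fft.rfft(sig_excitation))
    amp_load = np.abs(np.fft.rfft(sig_load))
    k = amp_exc[1:].argmax() + 1          # dominant (excitation) frequency bin, skipping DC
    return freqs[k], amp_load[k] / amp_exc[k]

# Hypothetical example: 20 Hz excitation, load response attenuated to 40 %.
fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)
f_exc, T = transmissibility(np.sin(2 * np.pi * 20 * t), 0.4 * np.sin(2 * np.pi * 20 * t), fs)
print(f_exc, T)   # ~20 Hz, ~0.4
```

Repeating this ratio across excitation frequencies yields the transmissibility curve described in the abstract.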

Keywords: expanded cork agglomerate, insulation cork board, transmissibility tests, sustainable materials, vibration isolators

Procedia PDF Downloads 315
304 Poly(propylene fumarate) Copolymers with Phosphonic Acid-based Monomers Designed as Bone Tissue Engineering Scaffolds

Authors: Görkem Cemali, Avram Aruh, Gamze Torun Köse, Erde Can Şafak

Abstract:

To treat bone disorders, conventional methods involving the use of autologous and allogeneic bone grafts or permanent implants have certain disadvantages such as limited supply, disease transmission, or adverse immune response. One desirable solution is a biodegradable material that acts as structural support to the damaged bone area and serves as a scaffold that enhances bone regeneration and guides bone formation. Poly(propylene fumarate) (PPF), an unsaturated polyester that can be copolymerized with appropriate vinyl monomers to give biodegradable network structures, is a promising candidate polymer for preparing bone tissue engineering scaffolds. In this study, hydroxyl-terminated PPF was synthesized and thermally cured with vinyl phosphonic acid (VPA) and diethyl vinyl phosphonate (VPES) in the presence of the radical initiator benzoyl peroxide (BP), with changing co-monomer weight ratios (10-40 wt%). In addition, the synthesized PPF was cured with the VPES comonomer at body temperature (37 °C) in the presence of the BP initiator, an N,N-dimethyl-para-toluidine catalyst and varying amounts of beta-tricalcium phosphate filler (β-TCP, 0-20 wt%) via radical polymerization to prepare composite materials that can be used in injectable forms. Thermomechanical properties, compressive properties, hydrophilicity and biodegradability of the PPF/VPA and PPF/VPES copolymers were determined and analyzed with respect to the copolymer composition. Biocompatibility of the resulting polymers and their composites was determined by the MTS assay, osteoblast activity was explored with von Kossa staining, alkaline phosphatase and osteocalcin activity analyses, and the effects of VPA and VPES comonomer composition on these properties were investigated. Thermally cured PPF/VPA and PPF/VPES copolymers with different compositions exhibited compressive modulus and strength values in the wide ranges of 10–836 MPa and 14–119 MPa, respectively. MTS assay studies showed that the majority of the tested compositions were biocompatible, and the overall results indicated that PPF/VPA and PPF/VPES network polymers show significant potential for applications as bone tissue engineering scaffolds, where varying the PPF and co-monomer ratio provides adjustable and controllable properties of the end product. The body-temperature-cured PPF/VPES/β-TCP composites exhibited significantly lower compressive modulus and strength values than the thermally cured PPF/VPES copolymers and were therefore found to be useful as scaffolds for cartilage tissue engineering applications.

Keywords: biodegradable, bone tissue, copolymer, poly(propylene fumarate), scaffold

Procedia PDF Downloads 145
303 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser

Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett

Abstract:

Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the hydrogen economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence the overall efficiency. In particular, this concerns the two-phase flow through the membrane, gas diffusion layers (GDL) and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as a commercial hydrogen production technology, new analytical approaches have to be found. Acoustic emission (AE) offers the possibility to analyse the processes within a PEMWE in a non-destructive, fast and cheap in-situ way. This work describes the generation and analysis of AE data from a PEM water electrolyser for, to the best of our knowledge, the first time in the literature. Different experiments are carried out, each designed so that only specific physical processes occur and AE related solely to one process can be measured. Therefore, a range of experimental conditions is used to induce different flow regimes within the flow channels and GDL. The resulting AE data is first separated into events, which are defined as excursions above the noise threshold. Each acoustic event consists of a number of consecutive peaks and ends when the wave diminishes below the noise threshold. For each acoustic event the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, number of peaks before the maximum, average peak intensity and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values of all criteria. Principal Component Analysis (PCA) is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix. This provides an easy way of determining which criteria convey the most information about the acoustic data. The data is then plotted in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of this space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. The AE data produced in this way can be used to train a self-learning algorithm and to develop an analytical tool to diagnose different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
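The event extraction, feature vectors and PCA described above can be illustrated with a short Python sketch (the noise threshold, the synthetic AE trace and the reduced feature set below are assumptions chosen for demonstration, not the study's actual processing chain):

```python
# Illustrative sketch: threshold-based AE event extraction, per-event features, PCA.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

def extract_events(signal: np.ndarray, threshold: float):
    """Split an AE trace into events whose absolute amplitude exceeds the noise threshold."""
    above = np.abs(signal) > threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2] + 1, edges[1::2] + 1      # pair rising/falling edges
    return [signal[s:e] for s, e in zip(starts, ends) if e - s > 2]

def event_features(ev: np.ndarray) -> list:
    peaks = np.abs(ev)
    return [peaks.max(),        # maximum peak amplitude
            ev.size,            # duration in samples
            peaks.argmax(),     # samples before the maximum
            peaks.mean()]       # average intensity (subset of the attributes listed above)

# Hypothetical AE trace: background noise with a few bursts injected.
rng = np.random.default_rng(1)
trace = 0.05 * rng.standard_normal(20000)
for c in (3000, 9000, 15000):
    trace[c:c + 200] += np.hanning(200) * rng.uniform(0.5, 1.0)

X = np.array([event_features(ev) for ev in extract_events(trace, threshold=0.2)])
scores = PCA(n_components=2).fit_transform(MinMaxScaler().fit_transform(X))
print(scores)   # events in the space of the two leading principal components
```

Plotting the rows of `scores` for each experiment would reproduce the kind of two-dimensional pattern separation the abstract describes.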

Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser

Procedia PDF Downloads 131
302 Qualitative Analysis of Occupant’s Satisfaction in Green Buildings

Authors: S. Srinivas Rao, Pallavi Chitnis, Himanshu Prajapati

Abstract:

The green building movement in India commenced in 2003. Since then, more than 4,300 projects have adopted green building concepts. For the last 15 years, the green building movement has grown strong across the country and has resulted in immense tangible and intangible benefits to stakeholders. Several success stories have demonstrated the tangible benefits experienced in green buildings. However, extensive data interpretation and qualitative analysis are required to report the intangible benefits of green buildings. The emphasis is now shifting to the concept of people-centric design, and the productivity, health and wellbeing of occupants are gaining importance. This research was part of the World Green Building Council's initiative 'Better Places for People', which aims to create a world where buildings support healthier and happier lives. The overarching objective of this study was to understand the perception of users living and working in green buildings. The study was conducted in twenty-five IGBC-certified green buildings across India, and a comprehensive questionnaire was designed to capture occupants' perception and experience of the built environment. The research focused on eight attributes of healthy buildings: thermal comfort, visual comfort, acoustic comfort, ergonomics, greenery, fitness, green transit, and sanitation and hygiene. The occupants' perception and experience were analysed to understand their satisfaction level. The macro-level findings of the study indicate that green buildings have addressed the attributes of healthy buildings to a large extent. Several important findings concerned parameters such as visual comfort, fitness and greenery, to which occupants attach great importance. 89% of occupants were comfortable with the visual environment, on account of the various lighting elements incorporated in the design. The study also highlighted the importance occupants place on fitness-related activities: 84% had actively used the sports and meditation facilities provided in their buildings. Further, 88% of occupants had access to ample greenery and felt connected to the natural biodiversity. This study focuses on the advantages gained by users occupying green buildings. It will help the green building movement find new avenues for designing and constructing healthy buildings. The study will also support the implementation of human-centric measures and, in turn, will go a long way in addressing people's welfare and wellbeing in the built environment.

Keywords: health and wellbeing, green buildings, Indian green building council, occupant’s satisfaction

Procedia PDF Downloads 161
301 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial in helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict the future homeless population. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on the homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the New York City dataset, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the Coefficient of Determination (R²) from -11.73 to 0.88 and reduced the MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak percentage error of 14.5% between the actual and the predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak percentage error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model homelessness-related time series data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Second, the prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
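As a minimal illustration of an RNN-based monthly forecaster in the spirit of HP-RNN, the sketch below uses an assumed architecture, lookback window and synthetic data; the authors' actual features, network and hyperparameters are not reproduced here.

```python
# Minimal sketch of an RNN forecaster for a monthly homeless-count series.
import numpy as np
import tensorflow as tf

def make_windows(series: np.ndarray, lookback: int = 12):
    """Turn a monthly series into (lookback-month window, next-month target) pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y   # shape (samples, timesteps, 1)

# Hypothetical monthly counts: synthetic trend plus seasonality, for demonstration only.
months = np.arange(240)
counts = 20000 + 150 * months + 1500 * np.sin(2 * np.pi * months / 12)
scale = counts.max()
X, y = make_windows(counts / scale)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")      # MSE, as used in the abstract
model.fit(X, y, epochs=20, verbose=0)

next_month = model.predict(X[-1:], verbose=0)[0, 0] * scale
print(round(next_month))
```

Swapping the synthetic series for the New York City data and adding the identified key factors as extra input channels would follow the same pattern.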

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 96
300 Artificial Intelligence and Governance in Relevance to Satellites in Space

Authors: Anwesha Pathak

Abstract:

With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid in STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process vast amounts of data received from satellites. AI algorithms can analyse satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can be leveraged to enhance the security of satellite systems. AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices. Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data sharing initiatives. AI and governance are critical aspects of satellite activities in space. They enable efficient and secure operations, ensure responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.

Keywords: satellite, space debris, traffic, threats, cyber security

Procedia PDF Downloads 43
299 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack

Authors: Varun Agarwal

Abstract:

Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a holistic evaluation of the fully scanned image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task through the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation by adding the convolved outputs of each inception unit to its incoming input. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network enhanced with the CDAE stack yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
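A hedged sketch of the tile-level transfer-learning stage is shown below; the input size, the layers added on top of the base network, and the directory-based dataset are assumptions, and the CDAE stack and saliency preprocessing from the full pipeline are omitted.

```python
# Illustrative transfer-learning sketch: Inception-ResNetV2 base with a binary
# head estimating P(tile contains metastasis) for tumor / normal WSI tiles.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False   # freeze the pretrained feature extractor first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Hypothetical tile dataset: directories of 299x299 RGB tiles labelled tumor / normal.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "tiles/train", image_size=(299, 299), batch_size=32, label_mode="binary")
# model.fit(train_ds, epochs=5)   # then unfreeze upper blocks and fine-tune
```

Tile-level probabilities from such a head can then be stitched back into the probabilistic tumor map used for whole-WSI classification.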

Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images

Procedia PDF Downloads 107
298 Semiotics of the New Commercial Music Paradigm

Authors: Mladen Milicevic

Abstract:

This presentation addresses how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it deals with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the 'older' music is still being digitized. Once the music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, which also exist as digital text files to which any NCapture-style analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from their audio features. Thus, it has been established that a successful pop song must: have 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the middle-8 bridge between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create an expectation and fulfill it in the title. For a country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a middle-8 bridge between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and an expectation created and fulfilled within 60 seconds. This approach to commercial popular music minimizes the human influence on which 'artist' a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analyzing programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste, filtered through the narrow bottleneck of the music selection described above, has created some very peculiar effects not only on the taste of popular music consumers but also on the creative chops of the music artists. The meaning of this semiological shift is the main focus of this research and presentation.
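For concreteness, the pop-song profile quoted above can be expressed as a toy rule checker; the field names and thresholds below simply encode the abstract's list (the 'expectation' criterion is omitted as it is not machine-checkable from simple metrics), and real hit-prediction systems are statistical models rather than literal rule lists.

```python
# Toy sketch: does a track's precomputed metrics match the quoted pop-song profile?
def matches_pop_profile(song: dict) -> bool:
    """song is a dict of hypothetical precomputed metrics for one track."""
    return (song["bpm"] >= 100
            and song["intro_seconds"] <= 8
            and song["first_you_seconds"] <= 20        # pronoun 'you' within 20 s
            and 120 <= song["bridge_seconds"] <= 150   # middle 8 between 2:00 and 2:30
            and song["title_repetitions"] >= 7)        # roughly "average 7 repetitions"

example = {"bpm": 104, "intro_seconds": 7, "first_you_seconds": 12,
           "bridge_seconds": 138, "title_repetitions": 8}
print(matches_pop_profile(example))   # True
```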

Keywords: music, semiology, commercial, taste

Procedia PDF Downloads 367
297 Influence of Footing Offset over Stability of Geosynthetic Reinforced Soil Abutments with Variable Facing under Lateral Excitation

Authors: Ashutosh Verma, Satyendra Mittal

Abstract:

The loss of strength at the facing-reinforcement interface brought on by the seasonal thermal expansion and contraction of the bridge deck has been responsible for several geosynthetic reinforced soil (GRS) abutment failures over the years. This results in excessive settlement below the bridge seat, which produces bridge bumps along the approach road and shortens the abutment's design life. A wide variety of facing configurations is available to designers when choosing the type of facade. These layouts can generally be categorised into three groups: continuous, full-height rigid (FHR) and modular (panels/blocks). The current work aims to experimentally explore the behavior of these three facing categories using 1g physical model testing under serviceable cyclic lateral displacements. A field-instrumented GRS abutment prototype was modelled as a 1/N-scale 1g physical model (N = 5), with configurable facing arrangements representing these three facing categories, to reproduce field behavior. The peak earth pressure coefficient (K) on the facing and the vertical settlement of the footing (s/B) were measured at 100 cycles for footing offsets (x/H) of 0.1, 0.2, 0.3, 0.4 and 0.5, under cyclic lateral displacement of the top of the facing at a loading rate of 1 mm/min. Three types of cyclic displacement were applied to replicate the active condition (CA), passive condition (CP), and active-passive condition (CAP) for each footing offset. The results demonstrated that a significant decrease in the earth pressure on the facing occurs when the footing offset increases. It is worth noting that the highest rates of increase in earth pressure and footing settlement were observed for each facing configuration at the nearest footing offset. Interestingly, for the farthest footing offset, similar responses were observed for each facing type, which indicates that, upon reaching a critical offset presumably beyond the active region in the backfill, the lateral responses become independent of the stresses from the external footing load. Evidently, the footing load complements the stresses developed due to lateral excitation, resulting in significant footing settlements for nearer footing offsets. The modular facing proved inefficient in resisting footing settlement due to significant buckling along the depth of the facing. Rather than displacing along its depth, the continuous facing rotates about its base when it fails, especially for nearer footing offsets, causing significant depressions in the backfill surrounding the footing. FHR facing, on the other hand, was successful in confining the stresses within the soil domain itself, reducing the footing settlement. It may be concluded that increasing the footing offset improves the stability of the GRS abutment with any facing configuration, even for higher numbers of excitation cycles.

Keywords: GRS abutments, 1g physical model, footing offset, cyclic lateral displacement

Procedia PDF Downloads 60
296 Applicability of Polyisobutylene-Based Polyurethane Structures in Biomedical Disciplines: Some Calcification and Protein Adsorption Studies

Authors: Nihan Nugay, Nur Cicek Kekec, Kalman Toth, Turgut Nugay, Joseph P. Kennedy

Abstract:

In recent years, polyurethane structures have been paving the way for elastomer usage in biology, human medicine, and biomedical applications. Polyurethanes combining high oxidative and hydrolytic stability with excellent mechanical properties are of particular interest, especially for implantable medical devices such as cardiac-assist systems. Here, unique polyurethanes consisting of polyisobutylene soft segments and conventional hard segments, termed PIB-based PUs, are developed with precise NCO/OH stoichiometry (∼1.05) to obtain PIB-based PUs with enhanced properties (i.e., tensile stress increased from ∼11 to ∼26 MPa and elongation from ∼350 to ∼500%). Static and dynamic mechanical properties were optimized by examining stress-strain graphs, self-organization and crystallinity (XRD) traces, rheological (DMA, creep) profiles and thermal (TGA, DSC) responses. An annealing procedure was applied to the PIB-based PUs. Annealed PIB-based PU shows ∼26 MPa tensile strength, ∼500% elongation, and ∼77 Microshore hardness with excellent hydrolytic and oxidative stability. Surface characteristics were examined with AFM and contact angle measurements. Annealed PIB-based PU exhibits stronger segregation of the individual segments and higher surface hydrophobicity; annealing thus significantly enhances hydrolytic and oxidative stability by shielding the carbamate bonds with inert PIB chains. Given these improved surface and microstructural characteristics, efforts were focused on analyzing protein adsorption and calcification profiles. In biomedical applications, especially cardiological implantations, protein adsorption on polymeric heart valves is undesirable because protein adsorption from blood serum is followed by platelet adhesion and subsequent thrombus formation. The protein adsorption behavior of PIB-based PU was examined by applying the Bradford assay in fibrinogen and bovine serum albumin solutions. Like protein adsorption, calcium deposition on heart valves is very harmful because vascular calcification has been linked to activation of an osteogenic mechanism in the vascular wall, loss of inhibitory factors, enhanced bone turnover and irregularities in mineral metabolism. Calcium deposition on the films was characterized by incubating samples in simulated body fluid and examining SEM images and XPS profiles. PIB-based PUs are significantly more resistant to hydrolytic-oxidative degradation, protein adsorption and calcium deposition than ElastEonTM E2A, a commercially available PDMS-based PU widely used for biomedical applications.

Keywords: biomedical application, calcification, polyisobutylene, polyurethane, protein adsorption

Procedia PDF Downloads 234